In this post, I want to explore some important synergies between architectural thinking and risk management. The first point is that if we want to have an enterprise-wide understanding of risk, then it helps to have an enterprise-wide view of how …
Following the data is all very well, but how should we decide which data to follow? Nate Silver argues that the total data are more important than the marginal data. There is more virus transmission in restaurants than in aeroplanes. The average Am…
Expressing real-life situations as mathematical models drives decisions on how to respond appropriately. Assumptions are not facts but do have a probability of occurring. Confidence in a model is a function of both the likelihood of the assumptions being correct and the quality and quantity of the data …
Those of you using BiZZdesign’s Enterprise Studio know we are lucky enough to have a collection of elements for security modelling. This view shows how we can map those to standard ArchiMate elements.
Architecture concerns are often misunderstood, partly because of the use of the word “concern”. This blog aims to provide some clarity around concerns and their use.
Creating a Business Continuity Plan (BCP) requires thought and planning. This blog explores what a BCP is, a high-level approach to defining a BCP, and how it differs from a Disaster Recovery Plan (DRP).
We recently published our Risk And Compliance Tech Tide report outlining 14 core technologies to track in 2018. One of the challenging parts of this research is setting the right scope. We found risk and compliance technology everywhere, covering every…
@NilsPratley blames delusion in the boardroom (on a grand scale, he says) for Carillion’s collapse. “In the end, it comes down to judgments made in the boardroom.” A letter to the editor of the Financial Times agrees. “This situation has been caused, in …
In one of my earlier posts about technical debt, I differentiated between intentional debt (that taken on deliberately and purposefully) and accidental debt (that which just accrues over time without rhyme or reason or record). Dealing with technical debt (in the sense of evaluating, tracking, and resolving it) is obviously a consideration for someone in […]
Some things seem so logically inconsistent that you just have to check them out. Such was the title of a post on LinkedIn that I saw the other day: “Innovation In Fear-Based Cultures? Or, why hire lions to be dogs?”. In it, Michael Graber noted that “…top-down organizations have the most trouble innovating.”: In particular, […]
When Complex Event Processing (CEP) emerged around ten years ago, one of the early applications was real-time risk management. In the financial sector, there was growing recognition of the need for real-time visibility – continuous calibration of positions – in order to keep pace with the emerging importance of algorithmic trading. This is now relatively well-established in the banking and trading sectors; Chemitiganti argues that the insurance industry now faces similar requirements.
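The event-driven pattern described above – recalibrating positions continuously as each trade event arrives, rather than in an end-of-day batch – can be sketched in a few lines. Everything here (the Trade record, the RiskMonitor class, the limit values) is illustrative, not drawn from any real CEP product.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    symbol: str
    quantity: int   # signed: positive = buy, negative = sell
    price: float

class RiskMonitor:
    """Toy event-driven risk check: positions are recalibrated per event."""

    def __init__(self, exposure_limit: float):
        self.exposure_limit = exposure_limit
        self.positions = {}  # symbol -> net quantity

    def on_trade(self, trade: Trade) -> list[str]:
        """Update the position on each event; return any limit breaches."""
        self.positions[trade.symbol] = self.positions.get(trade.symbol, 0) + trade.quantity
        exposure = abs(self.positions[trade.symbol]) * trade.price
        if exposure > self.exposure_limit:
            return [f"{trade.symbol}: exposure {exposure:.0f} exceeds limit"]
        return []

monitor = RiskMonitor(exposure_limit=100_000)
for ev in [Trade("ACME", 500, 100.0), Trade("ACME", 700, 101.0)]:
    for alert in monitor.on_trade(ev):
        print(alert)  # → ACME: exposure 121200 exceeds limit
```

The point of the pattern is that the risk check runs inside the event loop, so a breach is visible on the very trade that causes it, rather than hours later.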
In 2008, Chris Martins, then Marketing Director for CEP firm Apama, suggested considering CEP as a prospective “dog whisperer” that can help manage the risk of the technology “dog” biting its master.
But “dog bites master” works in both directions. In the case of Eliot Spitzer, the dog that bit its master was the anti-money-laundering software that he had used against others.
And in the case of algorithmic trading, it seems we can no longer be sure who is master – whether black swan events are the inevitable and emergent result of excessive complexity, or whether hostile agents are engaged in a black swan breeding programme. One of the first CEP insiders to raise this concern was John Bates, first as CTO at Apama and subsequently with Software AG. (He now works for a subsidiary of SAP.)
[Image: from Dark Pools by Scott Patterson]
And in 2015, Bates wrote that “high-speed trading algorithms are an alluring target for cyber thieves”.
So if technology is capable of both generating unexpected events and amplifying hostile attacks, are we being naive to imagine we use the same technology to protect ourselves?
Perhaps, but I believe there are some productive lines of development, as I’ve discussed previously on this blog and elsewhere.
1. Organizational intelligence – relying neither on human intelligence alone nor on artificial intelligence alone, but establishing sociotechnical systems that allow people and algorithms to collaborate effectively.
2. Algorithmic biodiversity – maintaining multiple algorithms, developed by different teams using different datasets, in order to detect additional weak signals and generate “second opinions”.
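As a rough illustration of the second idea, here is a toy ensemble in which two independently built detectors score the same series, and disagreement between them is itself treated as a weak signal worth escalating to a human. The detector logic and thresholds are hypothetical stand-ins, not taken from any of the sources cited below.

```python
def zscore_detector(values: list[float], threshold: float = 3.0) -> bool:
    """Flag if the latest value is far from the historical mean."""
    history, latest = values[:-1], values[-1]
    mean = sum(history) / len(history)
    var = sum((v - mean) ** 2 for v in history) / len(history)
    std = var ** 0.5 or 1.0  # avoid a zero threshold on flat history
    return abs(latest - mean) > threshold * std

def range_detector(values: list[float], factor: float = 2.0) -> bool:
    """Flag if the latest value falls far outside the historical min/max band."""
    history, latest = values[:-1], values[-1]
    lo, hi = min(history), max(history)
    span = (hi - lo) or 1.0
    return latest < lo - factor * span or latest > hi + factor * span

def second_opinion(values: list[float]) -> str:
    """Combine independent detectors; disagreement is itself a signal."""
    votes = [zscore_detector(values), range_detector(values)]
    if all(votes):
        return "alert"   # detectors agree: strong signal
    if any(votes):
        return "review"  # detectors disagree: weak signal, escalate
    return "ok"

print(second_opinion([10, 11, 10, 12, 11, 50]))  # → alert
print(second_opinion([10, 11, 10, 12, 11, 14]))  # → review
```

Real deployments would use genuinely diverse models trained on different datasets; the design point is simply that the combining logic distinguishes agreement from disagreement instead of averaging it away.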
Vamsi Chemitiganti, Why the Insurance Industry Needs to Learn from Banking’s Risk Management Nightmares (10 September 2016)
Theo Hildyard, Pillar #6 of Market Surveillance 2.0: Known and unknown threats (Trading Mesh, 2 April 2015)
Neil Johnson et al, Financial black swans driven by ultrafast machine ecology (arXiv:1202.1448 [physics.soc-ph], 7 Feb 2012)
Chris Martins, CEP and Real-Time Risk – “The Dog Whisperer” (Apama, 21 March 2008)
Scott Patterson, Dark Pools – The Rise of A. I. Trading Machines and the Looming Threat to Wall Street (Random House, 2013). See review by David Leinweber, Are Algorithmic Monsters Threatening The Global Financial System? (Forbes, 11 July 2012)
Richard Veryard, Building Organizational Intelligence (LeanPub, 2012)
The Shelf-Life of Algorithms (October 2016)
What does the World War II naval campaign known as the Battle of the Atlantic have to do with learning and innovation? Quite a lot, as it turns out. Early in the war, Britain found itself in a precarious position. While being an island nation provided defensive advantages, it also came with logistical challenges. Food, […]