The Unexpected Happens

When Complex Event Processing (CEP) emerged around ten years ago, one of its early applications was real-time risk management. In the financial sector, there was growing recognition of the need for real-time visibility – continuous recalibration of positions – in order to keep pace with the growing importance of algorithmic trading. This is now relatively well established in the banking and trading sectors; Chemitiganti argues that the insurance industry now faces similar requirements.

In 2008, Chris Martins, then Marketing Director for CEP firm Apama, suggested considering CEP as a prospective “dog whisperer” that can help manage the risk of the technology “dog” biting its master.

But “dog bites master” works in both directions. In the case of Eliot Spitzer, the dog that bit its master was the anti-money-laundering software that he had used against others.

And in the case of algorithmic trading, it seems we can no longer be sure who is master – whether black swan events are the inevitable and emergent result of excessive complexity, or whether hostile agents are engaged in a black swan breeding programme.  One of the first CEP insiders to raise this concern was John Bates, first as CTO at Apama and subsequently with Software AG. (He now works for a subsidiary of SAP.)

from Dark Pools by Scott Patterson

And in 2015, Bates wrote that “high-speed trading algorithms are an alluring target for cyber thieves”.

So if technology is capable of both generating unexpected events and amplifying hostile attacks, are we being naive to imagine that we can use the same technology to protect ourselves?

Perhaps, but I believe there are some productive lines of development, as I’ve discussed previously on this blog and elsewhere.

1. Organizational intelligence – relying neither on human intelligence alone nor on artificial intelligence alone, but looking to establish sociotechnical systems that allow people and algorithms to collaborate effectively.

2. Algorithmic biodiversity – maintaining multiple algorithms, developed by different teams using different datasets, in order to detect additional weak signals and generate “second opinions”.
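As a sketch of what “second opinions” might look like in code: three toy detectors stand in for independently developed algorithms (all the rules, field names and thresholds below are invented for illustration), and a simple consensus function reports both the majority verdict and any dissent worth escalating to a human.

```python
# Three hypothetical detectors standing in for algorithms developed by
# different teams on different datasets. Here each is just a different
# hand-written scoring rule over the same transaction record.
def detector_velocity(txn):
    # Flags high spend relative to the age of the account
    return txn["amount"] / max(txn["account_age_days"], 1) > 50

def detector_geography(txn):
    # Flags transactions from countries the customer does not normally use
    return txn["country"] not in txn["usual_countries"]

def detector_amount(txn):
    # Flags unusually large transactions outright
    return txn["amount"] > 10_000

DETECTORS = [detector_velocity, detector_geography, detector_amount]

def second_opinions(txn):
    """Return each detector's verdict, a majority consensus, and a
    'weak signal' flag raised whenever the detectors disagree."""
    votes = [d(txn) for d in DETECTORS]
    return {
        "votes": votes,
        "consensus": sum(votes) >= 2,                  # majority opinion
        "weak_signal": any(votes) and not all(votes),  # dissent: worth a human look
    }

txn = {"amount": 12_000, "account_age_days": 30, "country": "PA",
       "usual_countries": {"GB", "FR"}}
print(second_opinions(txn))
```

The value of the diversity lies in the `weak_signal` case: a single algorithm that stays silent tells you nothing, whereas disagreement among independently developed algorithms is itself information.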


John Bates, Algorithmic Terrorism (Apama, 4 August 2010). To Catch an Algo Thief (Huffington Post, 26 February 2015)

John Borland, The Technology That Toppled Eliot Spitzer (MIT Technology Review, 19 March 2008) via Adam Shostack, Algorithms for the War on the Unexpected (19 March 2008)

Vamsi Chemitiganti, Why the Insurance Industry Needs to Learn from Banking’s Risk Management Nightmares.. (10 September 2016)

Theo Hildyard, Pillar #6 of Market Surveillance 2.0: Known and unknown threats (Trading Mesh, 2 April 2015)

Neil Johnson et al, Financial black swans driven by ultrafast machine ecology (arXiv:1202.1448 [physics.soc-ph], 7 Feb 2012)

Chris Martins, CEP and Real-Time Risk – “The Dog Whisperer” (Apama, 21 March 2008)

Scott Patterson, Dark Pools – The Rise of A. I. Trading Machines and the Looming Threat to Wall Street (Random House, 2013). See review by David Leinweber, Are Algorithmic Monsters Threatening The Global Financial System? (Forbes, 11 July 2012)

Richard Veryard, Building Organizational Intelligence (LeanPub, 2012)

Related Posts

The Shelf-Life of Algorithms (October 2016)

Uber Mathematics 2

Aside from Uber’s role as a two-sided platform, which I addressed in my post on Uber Mathematics (Nov 2016), there is also a debate about Uber’s overall growth strategy and profitability. @izakaminska has been writing a series of critical articles on FT Alphaville.

Uber is wildly unprofitable, suggest that prices will rise once they’ve succeeded at monopolizing the industry: https://t.co/m3HB3q5YZV pic.twitter.com/taXcHfD2g5

— Justin Wolfers (@JustinWolfers) December 1, 2016

There are a few different issues that need to be teased apart here. Firstly, there is the fact that Uber is continually launching its service in more cities and countries. Nobody should expect the service in a new city to be instantly profitable. The total figures that Kaminska has obtained raise further questions – whether some cities are more profitable for Uber than others, and whether there is a repeating pattern of investment returns as a city service moves from loss-making into profit. Like many companies in a rapid growth phase, Uber has managed to convince its investors that they are funding growth in a business with good prospects of becoming profitable.

Profitability in Silicon Valley seems to be predicated on monopoly, as argued by Peter Thiel, leveraging network effects to establish barriers to entry. This is related to the concept of a retail destination – establishing the illusion that there is only one place to go. Kaminska quotes an opinion by Piccioni and Kantorovich, to the effect that it wouldn’t take much to set up a rival to Uber, but this opinion needs to be weighed against the fact that Uber has already seen off a number of competitors, including Sidecar. Sidecar was funded by Richard Branson, who asserted that he was not putting his money into a “winner-takes-all market”. It now looks as if he was mistaken, as Om Malik (writing in the New Yorker) respectfully points out.

But is Uber economically sustainable even as a monopoly? Kaminska has raised a number of questions about the underlying business model, including the increasing need for capital investment, which could erode margins further. Meanwhile, Uber will almost certainly leverage its cheapness and popularity with passengers to push for further deregulation. So the survival of this model may depend not only on a continual supply of innocent investors and innocent drivers, but also on innocent politicians who fall for the deregulation agenda.


Philip Boxer, Managing over the Whole Governance Cycle (April 2006)

Izabella Kaminska, Why Uber’s capital costs will creep ever higher (FT Alphaville, 3 June 2016). Myth-busting Uber’s valuation (FT Alphaville, 1 December 2016). The taxi unicorn’s new clothes (FT Alphaville, 13 September 2016) FREE – REGISTRATION REQUIRED

Om Malik, In Silicon Valley Now, It’s Almost Always Winner Takes All (New Yorker, 30 December 2015)

Brian Piccioni and Paul Kantorovich, On Unicorns, Disruption, And Cheap Rides (BCA, 30 August 2016) BCA CLIENTS ONLY

Peter Sims, Why Peter Thiel is Dead Wrong About Monopolies (Medium, 16 September 2014)

Peter Thiel, Competition Is for Losers (Wall Street Journal, 12 September 2014)

Related Posts
Uber Mathematics (Nov 2016) Uber Mathematics 3 (Dec 2016)

Steering The Enterprise of Brexit

Two contrasting approaches to Brexit from architectural thought leaders.

Dan Onions offers an eleven-step decision plan based on his DASH method, showing the interrelated decisions to be taken on Brexit as a DASH output map.

A decision plan for Brexit (Dan Onions)
A stakeholder map for Brexit (Dan Onions)


Let me now contrast Dan’s approach with Simon Wardley’s. Simon had been making a general point about strategy and execution on Twitter.

In 25 years in business, I’ve never seen a problem caused by “poor execution”. It is always crap strategy looking for someone else to blame.

— swardley (@swardley) April 29, 2016

Knowing Simon’s views on Brexit, I asked whether he would apply the same principle to the UK Government’s project to exit the European Union.

ah @richardveryard #Brexit could be an opportunity, it depends upon steps taken. Alas, in complex environments it can’t be pre-determined.

— swardley (@swardley) November 8, 2016

which goes back to first rule of strategy @richardveryard. it’s iterative. You consider as much as you can and adapt as you play the game.

— swardley (@swardley) November 8, 2016

It’s best summed up for me @richardveryard in this diagram pic.twitter.com/k8j8yciXsa

— swardley (@swardley) November 8, 2016

@swardley Nice picture. So how do you address strategic governance, given that #Brexit was supposedly about sovereignty and control.

— Richard Veryard (@richardveryard) November 8, 2016

@richardveryard : structure / governance is under doctrine (e.g. small autonomous teams, use appropriate methods, remove bias & duplication)

— swardley (@swardley) November 8, 2016

@richardveryard : whereas choice / decision / direction is under leadership.

— swardley (@swardley) November 8, 2016

@swardley You are answering in abstractions. How do you answer the concrete questions of sovereignty and control in relation to #Brexit?

— Richard Veryard (@richardveryard) November 8, 2016

Simon’s diagram revolves around purpose. OODA is a single loop, and the purpose is typically unproblematic. This reflects the UK government’s perspective on Brexit, in which the purpose is assumed to be simply realising the Will of the People. The Prime Minister regards all interpretation, choice, decision and direction as falling under her control as leader. And according to the Prime Minister’s doctrine, attempts by other stakeholders (such as Parliament or the Judiciary) to exert any governance over the process are tantamount to frustrating the Will of the People.

Whereas Dan’s notion is explicitly pluralist – trying to negotiate something acceptable to a broad range of stakeholders with different concerns. He characterizes the challenge as complex and nebulous. Even this characterization would be regarded as subversive by orthodox Brexiteers. It is depressing to compare Dan’s careful planning with Government insouciance.

Elsewhere, Simon has acknowledged that “acting upon your strategic choices (the why of movement) can also ultimately change your goal (the why of purpose)”. Many years ago, I wrote something on what I called Third-Order Requirements Engineering, which suggested that changing the requirements goal led to a change in identity – if your beliefs and desires have changed, then in a sense you also have changed. This is a subtlety that is lost on most conventional stakeholder management approaches. It will be fascinating to see how the Brexit constituency (or for that matter the Trump constituency) evolves over time, especially as they discover the truth of George Bernard Shaw’s remark.

“There are two tragedies in life. One is to lose your heart’s desire. The other is to gain it.”


Dan Onions, An 11 step Decision Plan for Brexit (6 November 2016)

Richard Veryard, Third Order Requirements Engineering (SlideShare)

Based on R.A. Veryard and J.E. Dobson, ‘Third Order Requirements Engineering: Vision and Identity’, in Proceedings of REFSQ 95, Second International Workshop on Requirements Engineering, (Jyvaskyla, Finland: June 12-13, 1995)

Simon Wardley, On Being Lost (August 2016)

Related Posts: VPEC-T and Pluralism (June 2010)

Uber Mathematics

UK Court News. Uber has lost a test case in the UK courts, in which it argued that its drivers were self-employed and therefore not entitled to the minimum wage or any benefits. Why is this ruling not quite as straightforward as it seems? To answer this question, we have to look at the mathematics of two-sided or multi-sided platforms.

Platforms exist in two states – growth and steady-state. A mature steady-state platform maintains a stable and sustainable balance between supply and demand. But to create a platform, you have to build both supply and demand at the same time. Innovative platforms such as Uber are oriented towards expansion and growth – recruiting new passengers and new drivers, and launching in new cities.

New Passengers “Every week in London, 30,000 people download Uber to their phones and order a car for the first time. The technology company, which is worth $60bn, calls this moment “conversion”. It sets great store on the first time you use its service … With Uber, the feeling should be of plenty, and of assurance: there will always be a driver when you need one.” (Knight)
New Drivers “They make it sound so simple: Sign up to drive with Uber and soon you’ll be earning an excellent supplementary income! That’s the central message in Uber’s ongoing multi-platform marketing campaign to recruit new drivers.” (McDermott)
New Cities “Uber has deployed its ride-hailing platform in 400 cities around the world since its launch in San Francisco on 31 May 2010, which means that it enters a new market every five days and eight hours. … To take over a city, Uber flies in a small team, known as “launchers” and hires its first local employee, whose job it is to find drivers and recruit riders.” (Knight)

But here’s the problem. In order to encourage passengers to rely on the service, Uber needs a surfeit of drivers. If passengers want instant availability of drivers (plenty, assurance, there will always be a driver when you need one), then Uber has to maintain a pool of under-utilized drivers. (Knowles)

Simple mathematics tells us that if Uber takes on far more drivers than it really needs, some of them won’t earn very much. Furthermore, people with little experience of this kind of work may underestimate the true costs involved, and may have an unrealistic idea of the amounts they can earn: Uber has no obvious incentive to disillusion them. (This is an example of Asymmetric Information.) Even if the average earnings of Uber drivers are well above the minimum wage, as Uber claims, it is not the average that matters here but the distribution.
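The average-versus-distribution point can be made concrete with a toy simulation. Every number below (the gross fare rate, running costs, the utilisation distribution, even the minimum-wage figure) is an invented assumption; the only thing the sketch is meant to show is that mean earnings can sit comfortably above the minimum wage while a large share of drivers fall below it.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

MINIMUM_WAGE = 7.20    # assumed hourly floor, for illustration only
GROSS_RATE = 25.0      # assumed gross fare income per fully-utilised hour
COSTS_PER_HOUR = 4.0   # assumed fuel, insurance and vehicle depreciation
N_DRIVERS = 10_000

# Hypothetical model: drivers differ mainly in utilisation, the share of
# each shift spent carrying a paying passenger, because the platform
# deliberately over-recruits to guarantee instant availability.
earnings = []
for _ in range(N_DRIVERS):
    utilisation = random.betavariate(2, 2)  # wide spread around 50%
    earnings.append(GROSS_RATE * utilisation - COSTS_PER_HOUR)

mean_earnings = statistics.mean(earnings)
below_minimum = sum(e < MINIMUM_WAGE for e in earnings) / N_DRIVERS

print(f"mean hourly earnings: {mean_earnings:.2f}")
print(f"share of drivers below the minimum wage: {below_minimum:.0%}")
```

With these assumptions the mean comes out a little above 8, comfortably over the assumed floor of 7.20, yet roughly four drivers in ten earn less than the minimum wage: the headline average conceals the shape of the distribution.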

The myth is that these are drivers who can choose whether to provide a service or not, so they are free agents. Libertarians wax lyrical about the “gig economy” and the benefits to passengers. However, the UK courts have judged that Uber drivers work under a series of constraints, and are therefore to be classified as “workers” for the purposes of various regulations, including minimum wage and other benefits.

Uber has announced its intention to appeal the UK judgement. But if the judgement stands, what are the implications for Uber? Firstly, Uber’s overall costs are likely to increase, and Uber will undoubtedly find a way either to pass these costs onto the passengers or to pass them back to the drivers in some other form. But more interestingly, Uber now has a financial incentive to balance supply and demand more fairly, and to avoid taking on too many drivers.

Uber sometimes argues that it is merely a technology company, and is not in the transportation business. Dismissing this argument, the UK courts quoted a previous judgement from the Northern California District Court:

“Uber does not simply sell software; it sells rides. Uber is no more a ‘technology company’ than Yellow Cab is a ‘technology company’ because it uses CB radios to dispatch taxi cabs.”

However, Uber’s undoubted technological know-how should enable it to develop (and monetize) appropriate technologies and algorithms to manage a two-sided platform in a more balanced way.

Why is the Uber ruling not quite as straightforward as it seems? @richardveryard @ricphillips pic.twitter.com/OXIgTHvA6z

— Jeffrey Newman (@JeffreyNewman) October 29, 2016

Update: similar concerns have been raised about Amazon delivery drivers. I have previously praised Amazon on this blog for its pioneering understanding of platforms, so let’s hope that both Amazon and Uber can create platforms that are fair to drivers as well as to their customers.


Mr Y Aslam, Mr J Farrar and Others -V- Uber (Courts and Tribunals Judiciary, 28 October 2016)

Sarah Butler, Uber driver tells MPs: I work 90 hours but still need to claim benefits (Guardian, 6 February 2017)

Tom Espiner and Daniel Thomas, What does Uber employment ruling mean? (BBC News, 28 October 2016)

David S. Evans, The Antitrust Economics of Multi-Sided Platform Markets (Yale Journal on Regulation, Vol 20 Issue 2, 2003). Multisided Platforms, Dynamic Competition and the Assessment of Market Power for Internet-Based Firms (CPI Antitrust Chronicle, May 2016)

Sam Knight, How Uber Conquered London (Guardian, 27 April 2016)

Kitty Knowles, 10 of the biggest complaints about Uber – from Uber drivers (The Memo, 5 November 2015)

Barry Levine, Uber opens up its API – and creates a new platform (VentureBeat, 20 August 2014)

John McDermott, I’ve done the (real) math: No way an Uber driver makes minimum wage (We Are Mel, 17 May 2016)

Hilary Osborne, Uber loses right to classify UK drivers as self-employed (Guardian, 28 October 2016)

Aaron Smith, Gig Work, Online Selling and Home Sharing (Pew Research Center, 17 November 2016)

Ciro Spedaliere, How to start a multi-sided platform (30 June 2015)

Amazon drivers ‘work illegal hours’ (BBC News, 11 November 2016)

See further discussion with @wimrampen and others on Storify: Uber Mathematics – A Discussion


Related Posts
Uber Mathematics 2 (Dec 2016) Uber Mathematics 3 (Dec 2016)
Uber’s Defeat Device and Denial of Service (March 2017)





Updated 6 February 2017

The Transparency of Algorithms

Algorithms have been getting a bad press lately, what with Cathy O’Neil’s book and Zeynep Tufekci’s TED talk. Now the German Chancellor, Angela Merkel, has weighed into the debate, calling for major Internet firms (Facebook, Google and others) to make their algorithms more transparent.

There are two main areas of political concern. The first (raised by Mrs Merkel) is the control of the news agenda. Politicians often worry about the role of the media in the political system when people only pick up the news that fits their own point of view, but this is hardly a new phenomenon. Even before the Internet, few people read more than one newspaper, and most preferred the newspapers that confirmed their own prejudices. Furthermore, recent studies show that even when you give different people exactly the same information, they will interpret it differently, in ways that reinforce their previous beliefs. So you can’t blame the whole Filter Bubble thing on Facebook and Google.

But they undoubtedly contribute further to the distortion. People get a huge amount of information via Facebook, and Facebook systematically edits out the uncomfortable stuff. It aroused particular controversy recently when its algorithms decided to censor a classic news photograph from the Vietnam war.

Update: Further criticism from Tufekci and others immediately following the 2016 US Election

2016 was a close election where filter bubbles & algorithmic funneling was weaponized for spreading misinformation. https://t.co/QCb4KG1gTV pic.twitter.com/cbgrj1TqFb

— Zeynep Tufekci (@zeynep) November 9, 2016


The second area of concern has to do with the use of algorithms to make critical decisions about people’s lives. The EU regards this as (among other things) a data protection issue, and privacy activists are hoping for provisions within the new General Data Protection Regulation (GDPR) that will confer a “right to an explanation” upon data subjects. In other words, when people are sent to prison based on an algorithm, or denied a job or health insurance, it seems reasonable to allow them to know what criteria these algorithmic decisions were based on.

Reasonable but not necessarily easy. Many of these algorithms are not coded in the old-fashioned way, but developed using machine learning. So the data scientists and programmers responsible for creating the algorithm may not themselves know exactly what the criteria are. Machine learning is basically a form of inductive reasoning, using data about the past to predict the future. As Hume put it, this assumes that “instances of which we have had no experience resemble those of which we have had experience”.

In a Vanity Fair panel discussion entitled “What Are They Thinking? Man Meets Machine,” a young black woman tried unsuccessfully to explain the problem of induction and biased reasoning to Sebastian Thrun, formerly head of Google X.

At the end of the panel on artificial intelligence, a young black woman asked Thrun whether bias in machine learning “could perpetuate structural inequality at a velocity much greater than perhaps humans can.” She offered the example of criminal justice, where “you have a machine learning tool that can identify criminals, and criminals may disproportionately be black because of other issues that have nothing to do with the intrinsic nature of these people, so the machine learns that black people are criminals, and that’s not necessarily the outcome that I think we want.”

In his reply, Thrun made it sound like her concern was one about political correctness, not unconscious bias. “Statistically what the machines do pick up are patterns and sometimes we don’t like these patterns. Sometimes they’re not politically correct,” Thrun said. “When we apply machine learning methods sometimes the truth we learn really surprises us, to be honest, and I think it’s good to have a dialogue about this.”

In other words, Thrun assumed that whatever the machine produced was Truth, and he wasn’t willing to acknowledge the possibility that the machine might latch onto false patterns. Even if the algorithm is correct, that doesn’t take away the need for transparency; but if there is the slightest possibility that the algorithm might be wrong, the need for transparency is all the greater. And the evidence is that some of these algorithms are grossly wrong.
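The questioner’s point can be turned into a toy simulation. Everything here is invented: two groups with identical true offending rates, one policed three times as intensively as the other, and a naive “model” that simply learns each group’s apparent risk from the arrest records. The machine faithfully learns the pattern in the data, but the pattern is an artefact of the data collection, not of the people.

```python
import random

random.seed(0)  # reproducible illustration

TRUE_RATE = 0.05  # both groups offend at exactly the same rate
POLICING_INTENSITY = {"group_a": 0.9, "group_b": 0.3}  # group_a is over-policed
N_PER_GROUP = 50_000

# Generate arrest records: an offence only enters the data if it is policed
records = []
for group, intensity in POLICING_INTENSITY.items():
    for _ in range(N_PER_GROUP):
        offended = random.random() < TRUE_RATE
        arrested = offended and random.random() < intensity
        records.append((group, arrested))

# A naive model: learn each group's apparent risk from the arrest data alone
learned_rate = {}
for group in POLICING_INTENSITY:
    arrests = [arrested for g, arrested in records if g == group]
    learned_rate[group] = sum(arrests) / len(arrests)

print(learned_rate)  # group_a appears roughly three times as risky as group_b
```

The learned rates come out near 0.045 and 0.015: the model is statistically correct about the arrest data and entirely wrong about the underlying behaviour, which is precisely the gap the questioner was pointing at.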

In this post, I’ve talked about two of the main concerns about algorithms – firstly the news agenda filter bubble, and secondly the critical decisions affecting individuals. In both cases, people are easily misled by the apparent objectivity of the algorithm, and are often willing to act as if the algorithm is somehow above human error and human criticism. Of course algorithms and machine learning are useful tools, but an illusion of infallibility is dangerous and ethically problematic.


Rory Cellan-Jones, Was it Facebook ‘wot won it’? (BBC News, 10 November 2016)

Ethan Chiel, EU citizens might get a ‘right to explanation’ about the decisions algorithms make (5 July 2016)

Kate Connolly, Angela Merkel: internet search engines are ‘distorting perception’ (Guardian, 27 October 2016)

Bryce Goodman, Seth Flaxman, European Union regulations on algorithmic decision-making and a “right to explanation” (presented at 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY)

Mike Masnick, Activists Cheer On EU’s ‘Right To An Explanation’ For Algorithmic Decisions, But How Will It Work When There’s Nothing To Explain? (Tech Dirt, 8 July 2016)

Fabian Reinbold, Warum Merkel an die Algorithmen will (Spiegel, 26 October 2016)

Nitasha Tiku, At Vanity Fair’s Festival, Tech Can’t Stop Talking About Trump (BuzzFeed, 24 October 2016) HT @noahmccormack

Julia Carrie Wong, Mark Zuckerberg accused of abusing power after Facebook deletes ‘napalm girl’ post (Guardian, 9 September 2016)

New MIT technique reveals the basis for machine-learning systems’ hidden decisions (Kurzweil News, 31 October 2016) HT @jhagel

Video: When Man Meets Machine (Vanity Fair, 19 October 2016)

See Also
The Problem of Induction (Stanford Encyclopedia of Philosophy, Wikipedia)

Related Posts
The Shelf-Life of Algorithms (October 2016)
Weapons of Math Destruction (October 2016)

Updated 10 November 2016