Perhaps you already know about Distributed Denial of Service (DDoS). In this post, I’m going to talk about something quite different, which we might call Centralized Denial of Service.
This week we learned that Uber had developed a defeat device called Greyball – a tool that served up a fake version of the Uber app in order to frustrate investigations by regulators and law enforcement, especially in those cities where regulators were suspicious of the Uber model.
In 2014, Erich England, a code enforcement inspector in Portland, Oregon, tried to hail an Uber car downtown in a sting operation against the company. However, Uber recognized that Mr England was a regulator, and cancelled his booking.
It turns out that Uber had developed algorithms to be suspicious of such people. According to the New York Times, grounds for suspicion included trips to and from law enforcement offices, or credit cards associated with selected public agencies. (Presumably there were a number of false positives generated by excessive suspicion or Überverdacht.)
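Based on the signals reported by the New York Times, the flagging logic might have resembled a crude rule-based score. The sketch below is purely hypothetical – every name, field, and threshold is invented for illustration, and none of it is Uber’s actual code – but it shows how easily such a heuristic produces the false positives mentioned above.

```python
# Hypothetical sketch of Greyball-style rider flagging, loosely based on
# the signals reported by the New York Times. All names, fields, and
# thresholds are invented for illustration.

GOV_CARD_BINS = {"451234", "428876"}  # invented card prefixes tied to public agencies
ENFORCEMENT_GEOFENCES = [("45.5155", "-122.6789")]  # e.g. a city enforcement office

def suspicion_score(rider):
    """Return a crude suspicion score for a rider profile (a dict)."""
    score = 0
    if rider.get("card_bin") in GOV_CARD_BINS:
        score += 2  # credit card associated with a public agency
    if any(stop in ENFORCEMENT_GEOFENCES for stop in rider.get("frequent_stops", [])):
        score += 2  # trips to and from law enforcement offices
    if rider.get("cheap_device", False):
        score += 1  # burner-style phones were reportedly another signal
    return score

def should_greyball(rider, threshold=3):
    """Deny (or fake) service once the score passes an arbitrary threshold."""
    return suspicion_score(rider) >= threshold
```

The crudeness is the point: with signals this blunt, anyone who happens to work near an enforcement office or pay with a government card gets swept up – the Überverdacht problem.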
But as Adrienne Lafrance points out, if a digital service provider can deny service to regulators (or people it suspects to be regulators), it can also deny service on other grounds. She talks to Ethan Zuckerman, the director of the Center for Civic Media at MIT, who observes that
“Greyballing police may primarily raise the concern that Uber is obstructing justice, but Greyballing for other reasons—a bias against Muslims, for instance—would be illegal and discriminatory, and it would be very difficult to make the case it was going on.”
One might also imagine Uber trying to discriminate against people with extreme political opinions, and defending this in terms of the safety of their drivers. Or discriminating against people with special needs, such as wheelchair users.
Typically, people who are subject to discrimination have less choice of service providers, and a degraded service overall. But if there is a de facto monopoly, which is of course where Uber wishes to end up in as many cities as possible, then its denial of service is centralized and more extreme. Once you have been banned by Uber, and once Uber has driven all the other forms of public transport out of existence, you have no choice but to walk.
Mike Isaac, How Uber Deceives the Authorities Worldwide (New York Times, 3 March 2017)
Adrienne LaFrance, Uber’s Secret Program Raises Questions About Discrimination (The Atlantic, 3 March 2017)
When Complex Event Processing (CEP) emerged around ten years ago, one of the early applications was real-time risk management. In the financial sector, there was growing recognition of the need for real-time visibility – continuous recalibration of positions – in order to keep pace with the emerging importance of algorithmic trading. This is now relatively well established in the banking and trading sectors; Chemitiganti argues that the insurance industry now faces similar requirements.
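The core idea of CEP-style risk monitoring can be shown in miniature: every trade event updates a running position, and a limit breach is flagged the moment it happens rather than in an end-of-day batch. This is a minimal sketch, not any particular CEP engine – the instruments and limits are invented.

```python
# Minimal sketch of continuous position monitoring in the CEP style:
# each incoming trade event updates a running position, and a breach of
# a risk limit is surfaced immediately. Symbols and limits are invented.
from collections import defaultdict

POSITION_LIMIT = 1_000  # invented per-instrument limit

def monitor(trades, limit=POSITION_LIMIT):
    """Yield (instrument, position) each time a trade breaches the limit."""
    positions = defaultdict(int)
    for instrument, qty in trades:  # qty is signed: buy > 0, sell < 0
        positions[instrument] += qty
        if abs(positions[instrument]) > limit:
            yield instrument, positions[instrument]

# Example: the third trade pushes the position past the limit
alerts = list(monitor([("EURUSD", 600), ("EURUSD", 300), ("EURUSD", 200)]))
```

A real engine adds sliding windows, pattern matching over event sequences, and low-latency plumbing, but the shift in mindset is the same: risk as a continuous stream, not a periodic report.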
In 2008, Chris Martins, then Marketing Director for CEP firm Apama, suggested considering CEP as a prospective “dog whisperer” that can help manage the risk of the technology “dog” biting its master.
But “dog bites master” works in both directions. In the case of Eliot Spitzer, the dog that bit its master was the anti-money-laundering software that he had used against others.
And in the case of algorithmic trading, it seems we can no longer be sure who is master – whether black swan events are the inevitable and emergent result of excessive complexity, or whether hostile agents are engaged in a black swan breeding programme. One of the first CEP insiders to raise this concern was John Bates, first as CTO at Apama and subsequently with Software AG. (He now works for a subsidiary of SAP.)
[Image: quotation from Dark Pools by Scott Patterson]
And in 2015, Bates wrote that “high-speed trading algorithms are an alluring target for cyber thieves”.
So if technology is capable of both generating unexpected events and amplifying hostile attacks, are we being naive to imagine that we can use the same technology to protect ourselves?
Perhaps, but I believe there are some productive lines of development, as I’ve discussed previously on this blog and elsewhere.
1. Organizational intelligence – relying neither on human intelligence alone nor on artificial intelligence alone, but establishing sociotechnical systems that allow people and algorithms to collaborate effectively.
2. Algorithmic biodiversity – maintaining multiple algorithms, developed by different teams using different datasets, in order to detect additional weak signals and generate “second opinions”.
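The second idea can be sketched concretely: run several independently built detectors over the same event and treat both the majority verdict and any disagreement as signals. The detectors below are toy stand-ins for models that different teams would build on different datasets; all names and thresholds are invented.

```python
# Sketch of "algorithmic biodiversity": several independently built
# detectors vote on the same event. Disagreement between them is itself
# a weak signal worth surfacing as a "second opinion". The detectors
# are toy stand-ins for models built by different teams on different data.

def detector_a(event):  # e.g. a simple threshold rule
    return event["value"] > 100

def detector_b(event):  # e.g. a rule on an entirely different feature
    return event.get("velocity", 0) > 5

def detector_c(event):  # e.g. a provenance-based rule
    return event.get("source") not in {"trusted"}

def second_opinion(event, detectors=(detector_a, detector_b, detector_c)):
    votes = [d(event) for d in detectors]
    return {
        "alert": sum(votes) >= 2,                       # majority verdict
        "disagreement": 0 < sum(votes) < len(votes),    # detectors split
    }
```

Because the detectors share neither features nor training data, a blind spot or exploit in one is less likely to fool them all – which is precisely the biodiversity argument.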
John Bates, Algorithmic Terrorism (Apama, 4 August 2010). To Catch an Algo Thief (Huffington Post, 26 Feb 2015)
Vamsi Chemitiganti, Why the Insurance Industry Needs to Learn from Banking’s Risk Management Nightmares (10 September 2016)
Theo Hildyard, Pillar #6 of Market Surveillance 2.0: Known and unknown threats (Trading Mesh, 2 April 2015)
Neil Johnson et al, Financial black swans driven by ultrafast machine ecology (arXiv:1202.1448 [physics.soc-ph], 7 Feb 2012)
Chris Martins, CEP and Real-Time Risk – “The Dog Whisperer” (Apama, 21 March 2008)
Scott Patterson, Dark Pools – The Rise of A. I. Trading Machines and the Looming Threat to Wall Street (Random House, 2013). See review by David Leinweber, Are Algorithmic Monsters Threatening The Global Financial System? (Forbes, 11 July 2012)
Richard Veryard, Building Organizational Intelligence (LeanPub, 2012)
The Shelf-Life of Algorithms (October 2016)
@mrkwpalmer (TIBCO) invites us to take what he calls a Hyper-Darwinian approach to analytics. He observes that “many algorithms, once discovered, have a remarkably short shelf-life” and argues that one must be as good at “killing off weak or vanquished…