Following my previous posts on Netflix, I have been reading a detailed analysis in Ed Finn’s book, What Algorithms Want (2017). Finn’s answer to my question Does Big Data Drive Netflix Content? is no, at least not directly. Although Netflix had used dat…
#ThinkingWithTheMajority #chatGPT has attracted considerable attention since its launch in November 2022, prompting concerns about the quality of its output as well as the potential consequences of widespread use and misuse of this and similar too…
In my post on the performativity of data (August 2021), I looked at some of the ways in which data and information can make something true. In this post, I want to go further. What if an algorithm can make something final? I’ve just read a very inter…
Let us suppose we can divide the world into those who trust service companies to treat their customers fairly, and those who assume that service companies will be looking to exploit any customer weakness or lapse of attention.For example, some loyal cu…
Are algorithms trustworthy, asks @NizanGP. “Many of us routinely – and even blindly – rely on the advice of algorithms in all aspects of our lives, from choosing the fastest route to the airport to deciding how to invest our retirement savings. But shou…
Following complaints that Amazon sometimes uses excessively large boxes for packing small items, the following claim appeared on Reddit.”Amazon uses a complicated software system to determine the box size that should be used based on what else is going…
Is there a fundamental flaw in AI implementation, as @jrossCISR suggests in her latest article for Sloan Management Review? She and her colleagues have been studying how companies insert value-adding AI algorithms into their processes. A critical succe…
Link: http://feedproxy.google.com/~r/Soapbox/~3/VgZxPW9WsRI/could-we-switch-algorithms-off.html
In his review of Nick Bostrom’s book Superintelligence, Tim Adams suggests that Bostrom has been reading too much of the science fiction he professes to dislike. When people nowadays want to discuss the social and ethical implications of machine intell…
Perhaps you already know about Distributed Denial of Service (DDOS). In this post, I’m going to talk about something quite different, which we might call Centralized Denial of Service.
This week we learned that Uber had developed a defeat device called Greyball – a fake Uber app whose purpose was to frustrate investigations by regulators and law enforcement, especially designed for those cities where regulators were suspicious of the Uber model.
In 2014, Erich England, a code enforcement inspector in Portland, Oregon, tried to hail an Uber car downtown in a sting operation against the company. However, Uber recognized that Mr England was a regulator, and cancelled his booking.
It turns out that Uber had developed algorithms to be suspicious of such people. According to the New York Times, grounds for suspicion included trips to and from law enforcement offices, or credit cards associated with selected public agencies. (Presumably there were a number of false positives generated by excessive suspicion or Überverdacht.)
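To make the false-positive problem concrete, here is a minimal sketch of the kind of crude rule-based flagging the New York Times described. This is illustrative only, not Uber's actual code: the location names, card prefixes, and thresholds are all invented for the example.

```python
# Hypothetical Greyball-style heuristics -- all data below is invented.
LAW_ENFORCEMENT_ZONES = {"police_hq", "city_inspections_office"}
AGENCY_CARD_PREFIXES = {"431234", "559900"}  # invented card BIN prefixes

def is_flagged(trip_history, card_number):
    """Flag a rider whose trips or payment card match the crude heuristics."""
    visits = sum(
        1 for trip in trip_history
        if trip["pickup"] in LAW_ENFORCEMENT_ZONES
        or trip["dropoff"] in LAW_ENFORCEMENT_ZONES
    )
    card_match = any(card_number.startswith(p) for p in AGENCY_CARD_PREFIXES)
    # Either signal alone triggers the flag -- a recipe for false positives,
    # since plenty of innocent riders visit these places or hold these cards.
    return visits >= 2 or card_match

rider_trips = [
    {"pickup": "police_hq", "dropoff": "airport"},
    {"pickup": "downtown", "dropoff": "police_hq"},
]
print(is_flagged(rider_trips, "4312340012345678"))  # True
```

The point of the sketch is how little evidence such a filter needs before denying someone service, which is exactly the Überverdacht problem noted above.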
But as Adrienne Lafrance points out, if a digital service provider can deny service to regulators (or people it suspects to be regulators), it can also deny service on other grounds. She talks to Ethan Zuckerman, the director of the Center for Civic Media at MIT, who observes that
“Greyballing police may primarily raise the concern that Uber is obstructing justice, but Greyballing for other reasons—a bias against Muslims, for instance—would be illegal and discriminatory, and it would be very difficult to make the case it was going on.”
One might also imagine Uber trying to discriminate against people with extreme political opinions, and defending this in terms of the safety of their drivers. Or discriminating against people with special needs, such as wheelchair users.
Typically, people who are subject to discrimination have less choice of service providers, and a degraded service overall. But if there is a de facto monopoly, which is of course where Uber wishes to end up in as many cities as possible, then its denial of service is centralized and more extreme. Once you have been banned by Uber, and once Uber has driven all the other forms of public transport out of existence, you have no choice but to walk.
Mike Isaac, How Uber Deceives the Authorities Worldwide (New York Times, 3 March 2017)
Adrienne LaFrance, Uber’s Secret Program Raises Questions About Discrimination (The Atlantic, 3 March 2017)
When Complex Event Processing (CEP) emerged around ten years ago, one of the early applications was real-time risk management. In the financial sector, there was growing recognition of the need for real-time visibility – continuous calibration of positions – in order to keep pace with the emerging importance of algorithmic trading. This is now relatively well-established in the banking and trading sectors; Chemitiganti argues that the insurance industry now faces similar requirements.
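The idea of continuous calibration can be sketched in a few lines: trades arrive as a stream of events, net position is updated on every event, and a limit breach is detected the moment it happens rather than at end-of-day. This is a toy illustration, not any vendor's CEP engine; the symbols, quantities, and the risk limit are invented.

```python
# Toy sketch of event-driven position monitoring (not a real CEP engine).
from collections import defaultdict

RISK_LIMIT = 1_000  # hypothetical per-instrument exposure limit

def monitor(trades, limit=RISK_LIMIT):
    positions = defaultdict(int)
    alerts = []
    for trade in trades:  # each event updates state immediately
        positions[trade["symbol"]] += trade["qty"]  # signed: buys +, sells -
        if abs(positions[trade["symbol"]]) > limit:
            alerts.append((trade["symbol"], positions[trade["symbol"]]))
    return positions, alerts

trades = [
    {"symbol": "XYZ", "qty": 600},
    {"symbol": "XYZ", "qty": 500},   # limit breached on this event
    {"symbol": "ABC", "qty": -300},
]
positions, alerts = monitor(trades)
print(alerts)  # [('XYZ', 1100)]
```

Real engines add sliding windows, pattern matching across event types, and sub-millisecond latencies, but the core shift is the same: risk is recomputed per event, not per batch.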
In 2008, Chris Martins, then Marketing Director for CEP firm Apama, suggested considering CEP as a prospective “dog whisperer” that can help manage the risk of the technology “dog” biting its master.
But “dog bites master” works in both directions. In the case of Eliot Spitzer, the dog that bit its master was the anti-money-laundering software that he had used against others.
And in the case of algorithmic trading, it seems we can no longer be sure who is master – whether black swan events are the inevitable and emergent result of excessive complexity, or whether hostile agents are engaged in a black swan breeding programme. One of the first CEP insiders to raise this concern was John Bates, first as CTO at Apama and subsequently with Software AG. (He now works for a subsidiary of SAP.)
[Image: from Dark Pools by Scott Patterson]
And in 2015, Bates wrote that “high-speed trading algorithms are an alluring target for cyber thieves”.
So if technology is capable of both generating unexpected events and amplifying hostile attacks, are we being naive to imagine we can use the same technology to protect ourselves?
Perhaps, but I believe there are some productive lines of development, as I’ve discussed previously on this blog and elsewhere.
1. Organizational intelligence – not relying on human intelligence alone or on artificial intelligence alone, but looking to establish sociotechnical systems that allow people and algorithms to collaborate effectively.
2. Algorithmic biodiversity – maintaining multiple algorithms, developed by different teams using different datasets, in order to detect additional weak signals and generate “second opinions”.
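The second idea can be sketched concretely: several independently developed detectors score the same event, and disagreement between them is itself treated as a weak signal worth escalating. The detector logic below is invented for illustration; the point is the voting structure, not the rules.

```python
# Sketch of "algorithmic biodiversity": independent detectors, with
# disagreement escalated for human review. All rules here are invented.

def detector_threshold(event):   # team A: simple amount threshold
    return event["amount"] > 10_000

def detector_velocity(event):    # team B: built on different data
    return event["tx_per_hour"] > 20

def detector_geography(event):   # team C: yet another perspective
    return event["country"] not in event["usual_countries"]

DETECTORS = [detector_threshold, detector_velocity, detector_geography]

def second_opinion(event):
    votes = [detect(event) for detect in DETECTORS]
    if all(votes):
        return "block"    # unanimous: act automatically
    if any(votes):
        return "review"   # disagreement: escalate to a human
    return "allow"

event = {"amount": 15_000, "tx_per_hour": 3,
         "country": "FR", "usual_countries": {"FR", "DE"}}
print(second_opinion(event))  # "review" -- only one detector fired
```

Note how this combines both points: the diversity of algorithms generates the second opinions, and the human-in-the-loop escalation path is the sociotechnical collaboration.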
Vamsi Chemitiganti, Why the Insurance Industry Needs to Learn from Banking’s Risk Management Nightmares (10 September 2016)
Theo Hildyard, Pillar #6 of Market Surveillance 2.0: Known and unknown threats (Trading Mesh, 2 April 2015)
Neil Johnson et al, Financial black swans driven by ultrafast machine ecology (arXiv:1202.1448 [physics.soc-ph], 7 Feb 2012)
Chris Martins, CEP and Real-Time Risk – “The Dog Whisperer” (Apama, 21 March 2008)
Scott Patterson, Dark Pools – The Rise of A. I. Trading Machines and the Looming Threat to Wall Street (Random House, 2013). See review by David Leinweber, Are Algorithmic Monsters Threatening The Global Financial System? (Forbes, 11 July 2012)
Richard Veryard, Building Organizational Intelligence (LeanPub, 2012)
The Shelf-Life of Algorithms (October 2016)
@mrkwpalmer (TIBCO) invites us to take what he calls a Hyper-Darwinian approach to analytics. He observes that “many algorithms, once discovered, have a remarkably short shelf-life” and argues that one must be as good at “killing off weak or vanquished…