Algorithms have been getting a bad press lately, what with Cathy O’Neil’s book and Zeynep Tufekci’s TED talk. Now the German Chancellor, Angela Merkel, has weighed into the debate, calling for major Internet firms (Facebook, Google and others) to make their algorithms more transparent.
There are two main areas of political concern. The first (raised by Mrs Merkel) is control of the news agenda. Politicians often worry about the media’s role in the political system when people only pick up the news that fits their own point of view, but this is hardly a new phenomenon. Even before the Internet, few people read more than one newspaper, and most preferred the papers that confirmed their own prejudices. Furthermore, recent studies show that even when different people are given exactly the same information, they interpret it differently, in ways that reinforce their prior beliefs. So you can’t blame the whole Filter Bubble phenomenon on Facebook and Google.
But they undoubtedly contribute further to the distortion. People get a huge amount of information via Facebook, and Facebook systematically edits out the uncomfortable stuff. It aroused particular controversy recently when its algorithms decided to censor a classic news photograph from the Vietnam war.
Update: Further criticism from Tufekci and others immediately following the 2016 US Election
2016 was a close election where filter bubbles & algorithmic funneling was weaponized for spreading misinformation. https://t.co/QCb4KG1gTV pic.twitter.com/cbgrj1TqFb
— Zeynep Tufekci (@zeynep) November 9, 2016
The second area of concern has to do with the use of algorithms to make critical decisions about people’s lives. The EU regards this as (among other things) a data protection issue, and privacy activists are hoping for provisions within the new General Data Protection Regulation (GDPR) that will confer a “right to an explanation” upon data subjects. In other words, when people are sent to prison based on an algorithm, or denied a job or health insurance, it seems reasonable to allow them to know what criteria these algorithmic decisions were based on.
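To make the idea of an explainable decision concrete, here is a deliberately transparent toy scoring rule. Everything in it — the criteria, the weights, the threshold — is invented for illustration; the point is that because the rule is explicit, each decision can be explained to the data subject criterion by criterion.

```python
# A hypothetical, fully transparent scoring rule (all names, weights
# and the threshold are invented for illustration only). Because the
# criteria are explicit, the decision can be explained line by line.
WEIGHTS = {"prior_convictions": -2.0, "years_employed": 0.5, "age": 0.1}
THRESHOLD = 1.0

def decide(applicant):
    """Return (decision, per-criterion contributions) for an applicant."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

approved, why = decide({"prior_convictions": 1, "years_employed": 6, "age": 30})
print(approved)  # True: score = -2.0 + 3.0 + 3.0 = 4.0
for criterion, contribution in why.items():
    print(criterion, contribution)  # the "explanation" owed to the subject
```

A rule like this can honour a “right to an explanation” trivially. The difficulty, as the next paragraph explains, is that many real systems are nothing like this.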
Reasonable but not necessarily easy. Many of these algorithms are not coded in the old-fashioned way, but developed using machine learning. So the data scientists and programmers responsible for creating the algorithm may not themselves know exactly what the criteria are. Machine learning is basically a form of inductive reasoning, using data about the past to predict the future. As Hume put it, this assumes that “instances of which we have had no experience resemble those of which we have had experience”.
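Hume’s point can be sketched in a few lines of Python. The “model” below (hypothetical data, invented feature names) simply induces the majority outcome it has seen in the past for each feature value, and then replays it as a prediction — it cannot do otherwise.

```python
from collections import Counter

# Hypothetical historical decisions: (applicant_region, outcome).
# The "model" is deliberately minimal: it learns only the majority
# outcome observed in the past for each region.
history = [
    ("north", "approved"), ("north", "approved"), ("north", "denied"),
    ("south", "denied"), ("south", "denied"), ("south", "approved"),
]

def train(records):
    """Induce a rule from past data: majority outcome per feature value."""
    counts = {}
    for feature, outcome in records:
        counts.setdefault(feature, Counter())[outcome] += 1
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

model = train(history)
print(model["north"])  # "approved" -- the past, replayed as prediction
print(model["south"])  # "denied"
```

Real machine learning is vastly more sophisticated, but the epistemology is the same: the future is assumed to resemble the recorded past.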
In a Vanity Fair panel discussion entitled “What Are They Thinking? Man Meets Machine,” a young black woman tried unsuccessfully to explain the problem of induction and biased reasoning to Sebastian Thrun, formerly head of Google X.
At the end of the panel on artificial intelligence, a young black woman asked Thrun whether bias in machine learning “could perpetuate structural inequality at a velocity much greater than perhaps humans can.” She offered the example of criminal justice, where “you have a machine learning tool that can identify criminals, and criminals may disproportionately be black because of other issues that have nothing to do with the intrinsic nature of these people, so the machine learns that black people are criminals, and that’s not necessarily the outcome that I think we want.”
In his reply, Thrun made it sound like her concern was one about political correctness, not unconscious bias. “Statistically what the machines do pick up are patterns and sometimes we don’t like these patterns. Sometimes they’re not politically correct,” Thrun said. “When we apply machine learning methods sometimes the truth we learn really surprises us, to be honest, and I think it’s good to have a dialogue about this.”
In other words, Thrun assumed that whatever the machine spoke was Truth, and he wasn’t willing to acknowledge the possibility that the machine might latch onto false patterns. Even if an algorithm is correct, that does not remove the need for transparency; and if there is the slightest possibility that the algorithm might be wrong, the need for transparency is all the greater. And the evidence is that some of these algorithms are grossly wrong.
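The questioner’s scenario can be simulated in a few lines. In the hypothetical sketch below, two groups have identical true offence rates, but one group is observed five times as often; a naive learner that counts recorded offences will “learn” that this group offends more — a false pattern produced entirely by biased observation.

```python
import random
random.seed(0)

# Two hypothetical groups with IDENTICAL true offence rates (10%),
# but group B is stopped and recorded five times as often as group A.
def simulate(group, n, stop_rate):
    records = []
    for _ in range(n):
        offends = random.random() < 0.10   # same true rate for both groups
        if random.random() < stop_rate:    # biased observation, not behaviour
            records.append((group, offends))
    return records

data = simulate("A", 10000, 0.1) + simulate("B", 10000, 0.5)

# What a naive learner sees: recorded offences per group.
counts = {"A": 0, "B": 0}
for group, offends in data:
    if offends:
        counts[group] += 1
print(counts)  # B shows roughly 5x the recorded offences of A,
               # despite identical underlying behaviour
```

The “truth” the machine learns here is an artefact of how the data was collected, which is exactly why deference to its output is misplaced.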
In this post, I’ve talked about two of the main concerns about algorithms – firstly the news agenda filter bubble, and secondly the critical decisions affecting individuals. In both cases, people are easily misled by the apparent objectivity of the algorithm, and are often willing to act as if the algorithm is somehow above human error and human criticism. Of course algorithms and machine learning are useful tools, but an illusion of infallibility is dangerous and ethically problematic.
Rory Cellan-Jones, Was it Facebook ‘wot won it’? (BBC News, 10 November 2016)
Ethan Chiel, EU citizens might get a ‘right to explanation’ about the decisions algorithms make (5 July 2016)
Kate Connolly, Angela Merkel: internet search engines are ‘distorting perception’ (Guardian, 27 October 2016)
Bryce Goodman, Seth Flaxman, European Union regulations on algorithmic decision-making and a “right to explanation” (presented at 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY)
Mike Masnick, Activists Cheer On EU’s ‘Right To An Explanation’ For Algorithmic Decisions, But How Will It Work When There’s Nothing To Explain? (Tech Dirt, 8 July 2016)
Fabian Reinbold, Warum Merkel an die Algorithmen will (Spiegel, 26 October 2016)
Nitasha Tiku, At Vanity Fair’s Festival, Tech Can’t Stop Talking About Trump (BuzzFeed, 24 October 2016) HT @noahmccormack
Julia Carrie Wong, Mark Zuckerberg accused of abusing power after Facebook deletes ‘napalm girl’ post (Guardian, 9 September 2016)
New MIT technique reveals the basis for machine-learning systems’ hidden decisions (KurzweilAI News, 31 October 2016) HT @jhagel
Video: When Man Meets Machine (Vanity Fair, 19 October 2016)
Updated 10 November 2016