2 months ago

The B2B Breach Trifecta: Equifax, SEC, and Deloitte

With rumors emerging this morning of a compromise at consulting firm Deloitte, this becomes the third breach announced in just a few weeks involving organizations that share a similar profile: Each one is primarily – or exclusively – a B2B organization. There are some questions worth […]

2 months, 6 days ago

Forrester Gathers Experts Across Disciplines To Tackle Europe’s Most Pressing Privacy, Security, and Trust Challenges

Fresh off a successful event in Washington, DC last week, we’re gearing up for Forrester’s Privacy & Security Forum Europe in London on 5-6 October. Forrester is gathering experts in cybersecurity, privacy, customer experience, regulatory compliance, identity management, personalization, blockchain, and a range of related topics. Together, Forrester analysts and leaders from firms like ABN […]

2 months, 17 days ago

Forrester’s Privacy And Security Forum Brings Diverse Experts To Devious Challenges

Well, the privacy hits keep coming: another breach, more than a hundred million people affected, untold losses for another company and its customers. Next week, September 14-15 in Washington DC, Forrester is gathering experts in cybersecurity, privacy, customer experience, regulatory compliance, identity management, personalization, and a range of other related topics to bring clarity to […]

3 months, 9 days ago

US Ruling Creates A Privacy Nightmare For US Cloud Providers Overseas

On August 14th, Judge Richard Seeborg of the U.S. District Court for the Northern District of California upheld a ruling requiring Google to turn over Gmail data stored overseas. The ruling seems to be in conflict with a U.S. Court of Appeals ruling in Microsoft v. United States where the court ruled that Microsoft does […]

7 months ago

Uber’s Self-Defeat Device

Uber’s version of “rational self-interest” has led to further accusations of covert activity and unfair competitive behaviour. Rival ride-hailing company Lyft is suing Uber in the California courts, claiming that Uber used a secret software program known as “Hell” to invade the privacy of Lyft drivers, in violation of the California Invasion of Privacy Act and the Federal Wiretap Act.

This covert activity, if proven, would go way beyond normal competitive intelligence, such as that provided by firms like Slice Intelligence, which harvests and interprets receipts from consumer email. (Slice Intelligence has confirmed to the New York Times that it sells anonymized data from ride receipts from both Uber and Lyft, but declined to say who purchased this data.)

It has also transpired that Apple caught Uber cheating with its iPhone app, including fingerprinting devices and continuing to identify them after the app had been deleted, in contravention of App Store privacy guidelines. Uber CEO Travis Kalanick got a personal reprimand from Apple CEO Tim Cook, but the iPhone app remains on the App Store, and Uber continues to use fingerprinting worldwide.

Uber continues to be massively loss-making, and the mathematics remain unfavourable. So the critical question for the service economy is whether firms like Uber can ever become viable without turning themselves into de facto monopolies, either by political lobbying or by covert action.


Megan Rose Dickey, Uber gets sued over alleged ‘Hell’ program to track Lyft drivers (TechCrunch, 24 April 2017)

Mike Isaac, Uber’s CEO plays with fire (New York Times, 23 April 2017)

Andrew Liptak, Uber tried to fool Apple and got caught (The Verge, 23 April 2017)

Andrew Orlowski, Uber cloaked its spying and all it got from Apple was a slap on the wrist (The Register, 24 April 2017)

Olivia Solon and Julia Carrie Wong, Hell of a ride: even a PR powerhouse couldn’t get Uber on track (Guardian, 14 April 2017)


Related Posts

Uber Mathematics (Nov 2016)
Uber Mathematics 2 (Dec 2016)
Uber Mathematics 3 (Dec 2016)
Uber’s Defeat Device and Denial of Service (March 2017)

1 year, 25 days ago

The Transparency of Algorithms

Algorithms have been getting a bad press lately, what with Cathy O’Neil’s book and Zeynep Tufekci’s TED talk. Now the German Chancellor, Angela Merkel, has weighed in on the debate, calling for major Internet firms (Facebook, Google and others) to make their algorithms more transparent.

There are two main areas of political concern. The first (raised by Mrs Merkel) is control of the news agenda. Politicians often worry about the role of the media when people only pick up the news that fits their own point of view, but this is hardly a new phenomenon. Even before the Internet, few people read more than one newspaper, and most preferred the papers that confirmed their own prejudices. Furthermore, recent studies show that even when different people are given exactly the same information, they interpret it differently, in ways that reinforce their previous beliefs. So you can’t blame the whole Filter Bubble thing on Facebook and Google.

But they undoubtedly contribute further to the distortion. People get a huge amount of information via Facebook, and Facebook systematically edits out the uncomfortable stuff. It aroused particular controversy recently when its algorithms decided to censor a classic news photograph from the Vietnam war.

Update: Further criticism from Tufekci and others immediately following the 2016 US Election

2016 was a close election where filter bubbles & algorithmic funneling was weaponized for spreading misinformation. https://t.co/QCb4KG1gTV pic.twitter.com/cbgrj1TqFb

— Zeynep Tufekci (@zeynep) November 9, 2016


The second area of concern has to do with the use of algorithms to make critical decisions about people’s lives. The EU regards this as (among other things) a data protection issue, and privacy activists are hoping for provisions within the new General Data Protection Regulation (GDPR) that will confer a “right to an explanation” upon data subjects. In other words, when people are sent to prison based on an algorithm, or denied a job or health insurance, it seems reasonable to allow them to know what criteria these algorithmic decisions were based on.

Reasonable but not necessarily easy. Many of these algorithms are not coded in the old-fashioned way, but developed using machine learning. So the data scientists and programmers responsible for creating the algorithm may not themselves know exactly what the criteria are. Machine learning is basically a form of inductive reasoning, using data about the past to predict the future. As Hume put it, this assumes that “instances of which we have had no experience resemble those of which we have had experience”.
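To make this concrete, here is a minimal sketch (in Python, using scikit-learn, with entirely invented features and data) of what “developed using machine learning” means in practice: the decision rule is induced from historical records rather than written down by a programmer, and the closest thing to explicit criteria the developers can point to afterwards is a statistical summary of the fitted model.

```python
# A minimal sketch, not any specific production system: the "criteria" behind
# a machine-learned decision are induced from historical data, not hand-coded.
# Assumes scikit-learn and NumPy are installed; all data here is synthetic.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical historical records: each row is a past case, each column a
# feature; the label is whatever decision was recorded at the time.
X_past = rng.normal(size=(1000, 5))
y_past = (X_past[:, 0] + 0.5 * X_past[:, 3] + rng.normal(scale=0.5, size=1000)) > 0

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_past, y_past)  # induction: learn patterns from past cases

# Predicting a new case assumes, with Hume, that the future resembles the past.
new_case = rng.normal(size=(1, 5))
print("decision:", model.predict(new_case))

# The nearest thing to stated "criteria" is a statistical summary such as
# feature importances -- useful, but not an explanation a data subject could
# contest in the way they could contest a written rule.
print("feature importances:", model.feature_importances_)
```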

In a Vanity Fair panel discussion entitled “What Are They Thinking? Man Meets Machine,” a young black woman tried unsuccessfully to explain the problem of induction and biased reasoning to Sebastian Thrun, formerly head of Google X.

At the end of the panel on artificial intelligence, a young black woman asked Thrun whether bias in machine learning “could perpetuate structural inequality at a velocity much greater than perhaps humans can.” She offered the example of criminal justice, where “you have a machine learning tool that can identify criminals, and criminals may disproportionately be black because of other issues that have nothing to do with the intrinsic nature of these people, so the machine learns that black people are criminals, and that’s not necessarily the outcome that I think we want.”

In his reply, Thrun made it sound like her concern was one about political correctness, not unconscious bias. “Statistically what the machines do pick up are patterns and sometimes we don’t like these patterns. Sometimes they’re not politically correct,” Thrun said. “When we apply machine learning methods sometimes the truth we learn really surprises us, to be honest, and I think it’s good to have a dialogue about this.”

In other words, Thrun assumed that whatever the machine said was Truth, and he wasn’t willing to acknowledge the possibility that the machine might latch onto false patterns. Even if the algorithm is correct, that doesn’t take away the need for transparency; and if there is the slightest possibility that the algorithm might be wrong, the need for transparency is all the greater. And the evidence is that some of these algorithms are grossly wrong.
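The questioner’s concern can be shown with a toy example. The following sketch (Python and scikit-learn again, with synthetic data and invented variable names, not a model of any real criminal-justice system) trains a model on labels that reflect uneven enforcement rather than underlying behaviour; the “pattern” the machine picks up is the bias in the records, not a truth about the people.

```python
# A minimal sketch of bias amplification: if historical labels reflect biased
# enforcement rather than ground truth, a model trained on them reproduces
# the bias. All data and names here are synthetic and purely illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Two groups with identical underlying behaviour...
group = rng.integers(0, 2, size=n)      # group membership: 0 or 1
behaviour = rng.normal(size=n)          # true, group-independent risk factor

# ...but historical "offender" labels driven partly by how heavily each group
# was scrutinised, not only by behaviour.
policing_bias = 1.0 * group             # group 1 policed more heavily
labels = (behaviour + policing_bias + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([behaviour, group])
model = LogisticRegression().fit(X, labels)

# The model now scores group 1 as higher "risk" even at identical behaviour.
same_behaviour = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_behaviour)[:, 1])  # group 1 gets the higher score
```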

In this post, I’ve talked about two of the main concerns about algorithms – firstly the news agenda filter bubble, and secondly the critical decisions affecting individuals. In both cases, people are easily misled by the apparent objectivity of the algorithm, and are often willing to act as if the algorithm is somehow above human error and human criticism. Of course algorithms and machine learning are useful tools, but an illusion of infallibility is dangerous and ethically problematic.


Rory Cellan-Jones, Was it Facebook ‘wot won it’? (BBC News, 10 November 2016)

Ethan Chiel, EU citizens might get a ‘right to explanation’ about the decisions algorithms make (5 July 2016)

Kate Connolly, Angela Merkel: internet search engines are ‘distorting perception’ (Guardian, 27 October 2016)

Bryce Goodman, Seth Flaxman, European Union regulations on algorithmic decision-making and a “right to explanation” (presented at 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY)

Mike Masnick, Activists Cheer On EU’s ‘Right To An Explanation’ For Algorithmic Decisions, But How Will It Work When There’s Nothing To Explain? (Tech Dirt, 8 July 2016)

Fabian Reinbold, Warum Merkel an die Algorithmen will [Why Merkel wants to get at the algorithms] (Spiegel, 26 October 2016)

Nitasha Tiku, At Vanity Fair’s Festival, Tech Can’t Stop Talking About Trump (BuzzFeed, 24 October 2016) HT @noahmccormack

Julia Carrie Wong, Mark Zuckerberg accused of abusing power after Facebook deletes ‘napalm girl’ post (Guardian, 9 September 2016)

New MIT technique reveals the basis for machine-learning systems’ hidden decisions (Kurzweil News, 31 October 2016) HT @jhagel

Video: When Man Meets Machine (Vanity Fair, 19 October 2016)

See Also
The Problem of Induction (Stanford Encyclopedia of Philosophy, Wikipedia)

Related Posts
The Shelf-Life of Algorithms (October 2016)
Weapons of Math Destruction (October 2016)

Updated 10 November 2016

1 year, 1 month ago

85 Million Faces

It should be pretty obvious why Microsoft wants 85 million faces. According to its privacy policy:

Microsoft uses the data we collect to provide you the products we offer, which includes using data to improve and personalize your experiences. We also may use the data to communicate with you, for example, informing you about your account, security updates and product information. And we use data to help show more relevant ads, whether in our own products like MSN and Bing, or in products offered by third parties. (retrieved 25 October 2016)

Facial recognition software is big business, and high quality image data is clearly a valuable asset.

But why would 85 million people go along with this? I guess they thought they were just playing a game, and didn’t think of it in terms of donating their personal data to Microsoft. The bait was the chance to find out how old the software thought they were.

The Daily Mail persuaded a number of female celebrities to test the software, and printed the results in today’s paper.

Computer”tell yr age” programme on my face puts me 69 https://t.co/EhEog5LQcN Haha!But why are those judged younger than they are so pleased

— mary beard (@wmarybeard) October 25, 2016

Talking of beards …

. @futureidentity If we ever reach peak data, advertisers will check photos before advertising beard accessories #personalization #TotalData

— Richard Veryard (@richardveryard) April 1, 2016

. @futureidentity So, did you ever buy that right-handed beard brush? #PeakHipster #Sinister https://t.co/kESqmUooNk #CISNOLA cc @mfratto

— Richard Veryard (@richardveryard) June 8, 2016


Kyle Chayka, Face-recognition software: Is this the end of anonymity for all of us? (Independent, 23 April 2014)

Chris Frey, Revealed: how facial recognition has invaded shops – and your privacy (Guardian, 3 March 2016)

Rebecca Ley, Would YOU dare ask a computer how old you look? Eight brave women try out the terrifyingly simple new internet craze (Daily Mail, 25 October 2016)


TotalData™ is a trademark of Reply Ltd. All rights reserved