5 days ago

Forrester Gathers Experts Across Disciplines To Tackle Europe’s Most Pressing Privacy, Security, and Trust Challenges

Fresh off a successful event in Washington, DC last week, we’re gearing up for Forrester’s Privacy & Security Forum Europe in London on 5-6 October. Forrester is gathering experts in cybersecurity, privacy, customer experience, regulatory compliance, identity management, personalization, blockchain, and a range of related topics.  Together, Forrester analysts and leaders from firms like ABN […]

16 days ago

Forrester’s Privacy And Security Forum Brings Diverse Experts To Devious Challenges

Well, the privacy hits keep coming: another breach, more than a hundred million people affected, untold losses for another company and its customers. Next week, September 14-15 in Washington DC, Forrester is gathering experts in cybersecurity, privacy, customer experience, regulatory compliance, identity management, personalization, and a range of other related topics to bring clarity to […]

1 month, 8 days ago

US Ruling Creates A Privacy Nightmare For US Cloud Providers Overseas

On August 14th, Judge Richard Seeborg of the U.S. District Court for the Northern District of California upheld a ruling requiring Google to turn over Gmail data stored overseas. The ruling seems to be in conflict with a U.S. Court of Appeals ruling in Microsoft v. United States where the court ruled that Microsoft does […]

4 months, 29 days ago

Uber’s Self-Defeat Device

Uber’s version of “rational self-interest” has led to further accusations of covert activity and unfair competitive behaviour. Rival ride-hailing company Lyft is suing Uber in the Californian courts, claiming that Uber used a secret software program known as “Hell” to invade the privacy of Lyft drivers, in violation of the California Invasion of Privacy Act and the Federal Wiretap Act.

This covert activity, if proven, would go way beyond normal competitive intelligence, such as that provided by firms like Slice Intelligence, which harvests and interprets receipts from consumer email. (Slice Intelligence has confirmed to the New York Times that it sells anonymized data from ride receipts from both Uber and Lyft, but declined to say who purchased this data.)

It has also transpired that Apple caught Uber cheating with its iPhone app, fingerprinting devices and continuing to identify phones after the app had been deleted, in contravention of App Store privacy guidelines. Uber CEO Travis Kalanick got a personal reprimand from Apple CEO Tim Cook, but the iPhone app remains on the App Store, and Uber continues to use fingerprinting worldwide.

Uber continues to be massively loss-making, and the mathematics remain unfavourable. So the critical question for the service economy is whether firms like Uber can ever become viable without turning themselves into de facto monopolies, either by political lobbying or by covert action.


Megan Rose Dickey, Uber gets sued over alleged ‘Hell’ program to track Lyft drivers (TechCrunch, 24 April 2017)

Mike Isaac, Uber’s CEO plays with fire (New York Times, 23 April 2017)

Andrew Liptak, Uber tried to fool Apple and got caught (The Verge, 23 April 2017)

Andrew Orlowski, Uber cloaked its spying and all it got from Apple was a slap on the wrist (The Register, 24 April 2017)

Olivia Solon and Julia Carrie Wong, Hell of a ride: even a PR powerhouse couldn’t get Uber on track (Guardian, 14 April 2017)


Related Posts

Uber Mathematics (Nov 2016)
Uber Mathematics 2 (Dec 2016)
Uber Mathematics 3 (Dec 2016)
Uber’s Defeat Device and Denial of Service (March 2017)

10 months, 24 days ago

The Transparency of Algorithms

Algorithms have been getting a bad press lately, what with Cathy O’Neil’s book and Zeynep Tufekci’s TED talk. Now the German Chancellor, Angela Merkel, has weighed into the debate, calling for major Internet firms (Facebook, Google and others) to make their algorithms more transparent.

There are two main areas of political concern. The first (raised by Mrs Merkel) is the control of the news agenda. Politicians often worry about the role of the media in the political system when people only pick up the news that fits their own point of view, but this is hardly a new phenomenon. Even before the Internet, few people read more than one newspaper, and most preferred the newspapers that confirmed their own prejudices. Furthermore, recent studies show that even when different people are given exactly the same information, they interpret it differently, in ways that reinforce their existing beliefs. So you can’t blame the whole Filter Bubble thing on Facebook and Google.

But they undoubtedly contribute further to the distortion. People get a huge amount of information via Facebook, and Facebook systematically edits out the uncomfortable stuff. It aroused particular controversy recently when its algorithms decided to censor a classic news photograph from the Vietnam war.

Update: Further criticism from Tufekci and others immediately following the 2016 US Election

2016 was a close election where filter bubbles & algorithmic funneling was weaponized for spreading misinformation. https://t.co/QCb4KG1gTV pic.twitter.com/cbgrj1TqFb

— Zeynep Tufekci (@zeynep) November 9, 2016


The second area of concern has to do with the use of algorithms to make critical decisions about people’s lives. The EU regards this as (among other things) a data protection issue, and privacy activists are hoping for provisions within the new General Data Protection Regulation (GDPR) that will confer a “right to an explanation” upon data subjects. In other words, when people are sent to prison based on an algorithm, or denied a job or health insurance, it seems reasonable to allow them to know what criteria these algorithmic decisions were based on.

Reasonable but not necessarily easy. Many of these algorithms are not coded in the old-fashioned way, but are developed using machine learning. So the data scientists and programmers responsible for creating the algorithm may not themselves know exactly what the criteria are. Machine learning is basically a form of inductive reasoning, using data about the past to predict the future. As Hume put it, this assumes that “instances of which we have had no experience resemble those of which we have had experience”.
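To see why an explanation may be hard to extract, here is a minimal sketch in Python with scikit-learn, using synthetic data and invented feature meanings (it is not any of the systems discussed above): a simple linear model at least exposes its weights, whereas a more complex learner offers no comparably readable set of criteria.

```python
# Minimal sketch only: synthetic "decision" data with invented feature meanings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# The three columns stand in for invented features such as income, age, postcode index.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# A linear model's criteria can at least be read off as one weight per feature ...
linear = LogisticRegression().fit(X, y)
print("linear weights:", linear.coef_)

# ... but a more complex learner offers no such summary: its "criteria" are
# dispersed across hundreds of trees, and feature importances are only a rough proxy.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("forest feature importances:", forest.feature_importances_)
```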

In a Vanity Fair panel discussion entitled “What Are They Thinking? Man Meets Machine,” a young black woman tried unsuccessfully to explain the problem of induction and biased reasoning to Sebastian Thrun, formerly head of Google X.

At the end of the panel on artificial intelligence, a young black woman asked Thrun whether bias in machine learning “could perpetuate structural inequality at a velocity much greater than perhaps humans can.” She offered the example of criminal justice, where “you have a machine learning tool that can identify criminals, and criminals may disproportionately be black because of other issues that have nothing to do with the intrinsic nature of these people, so the machine learns that black people are criminals, and that’s not necessarily the outcome that I think we want.”

In his reply, Thrun made it sound like her concern was one about political correctness, not unconscious bias. “Statistically what the machines do pick up are patterns and sometimes we don’t like these patterns. Sometimes they’re not politically correct,” Thrun said. “When we apply machine learning methods sometimes the truth we learn really surprises us, to be honest, and I think it’s good to have a dialogue about this.”

In other words, Thrun assumed that whatever the machine spoke was Truth, and he wasn’t willing to acknowledge the possibility that the machine might latch onto false patterns. Even if the algorithm is correct, it doesn’t take away the need for transparency; but if there is the slightest possibility that the algorithm might be wrong, the need for transparency is all the greater. And the evidence is that some of these algorithms are grossly wrong.

In this post, I’ve talked about two of the main concerns about algorithms – firstly the news agenda filter bubble, and secondly the critical decisions affecting individuals. In both cases, people are easily misled by the apparent objectivity of the algorithm, and are often willing to act as if the algorithm is somehow above human error and human criticism. Of course algorithms and machine learning are useful tools, but an illusion of infallibility is dangerous and ethically problematic.


Rory Cellan-Jones, Was it Facebook ‘wot won it’? (BBC News, 10 November 2016)

Ethan Chiel, EU citizens might get a ‘right to explanation’ about the decisions algorithms make (5 July 2016)

Kate Connolly, Angela Merkel: internet search engines are ‘distorting perception’ (Guardian, 27 October 2016)

Bryce Goodman, Seth Flaxman, European Union regulations on algorithmic decision-making and a “right to explanation” (presented at 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY)

Mike Masnick, Activists Cheer On EU’s ‘Right To An Explanation’ For Algorithmic Decisions, But How Will It Work When There’s Nothing To Explain? (Tech Dirt, 8 July 2016)

Fabian Reinbold, Warum Merkel an die Algorithmen will (Spiegel, 26 October 2016)

Nitasha Tiku, At Vanity Fair’s Festival, Tech Can’t Stop Talking About Trump (BuzzFeed, 24 October 2016) HT @noahmccormack

Julia Carrie Wong, Mark Zuckerberg accused of abusing power after Facebook deletes ‘napalm girl’ post (Guardian, 9 September 2016)

New MIT technique reveals the basis for machine-learning systems’ hidden decisions (KurzweilAI News, 31 October 2016) HT @jhagel

Video: When Man Meets Machine (Vanity Fair, 19 October 2016)

See Also
The Problem of Induction (Stanford Encyclopedia of Philosophy, Wikipedia)

Related Posts
The Shelf-Life of Algorithms (October 2016)
Weapons of Math Destruction (October 2016)

Updated 10 November 2016

10 months, 30 days ago

85 Million Faces

It should be pretty obvious why Microsoft wants 85 million faces. According to its privacy policy:

Microsoft uses the data we collect to provide you the products we offer, which includes using data to improve and personalize your experiences. We also may use the data to communicate with you, for example, informing you about your account, security updates and product information. And we use data to help show more relevant ads, whether in our own products like MSN and Bing, or in products offered by third parties. (retrieved 25 October 2016)

Facial recognition software is big business, and high quality image data is clearly a valuable asset.

But why would 85 million people go along with this? I guess they thought they were just playing a game, and didn’t think of it in terms of donating their personal data to Microsoft. The bait was the chance to find out how old the software thought they were.

The Daily Mail persuaded a number of female celebrities to test the software, and printed the results in today’s paper.

Computer “tell yr age” programme on my face puts me 69 https://t.co/EhEog5LQcN Haha! But why are those judged younger than they are so pleased

— mary beard (@wmarybeard) October 25, 2016

Talking of beards …

. @futureidentity If we ever reach peak data, advertisers will check photos before advertising beard accessories #personalization #TotalData

— Richard Veryard (@richardveryard) April 1, 2016

. @futureidentity So, did you ever buy that right-handed beard brush? #PeakHipster #Sinister https://t.co/kESqmUooNk #CISNOLA cc @mfratto

— Richard Veryard (@richardveryard) June 8, 2016


Kyle Chayka, Face-recognition software: Is this the end of anonymity for all of us? (Independent, 23 April 2014)

Chris Frey, Revealed: how facial recognition has invaded shops – and your privacy (Guardian, 3 March 2016)

Rebecca Ley, Would YOU dare ask a computer how old you look? Eight brave women try out the terrifyingly simple new internet craze (Daily Mail, 25 October 2016)


TotalData™ is a trademark of Reply Ltd. All rights reserved

1 year, 3 months ago

As How You Drive

I have been discussing Pay As You Drive (PAYD) insurance schemes on this blog for nearly ten years.

The simplest version of the concept varies your insurance premium according to the quantity of driving – Pay As How Much You Drive. But for obvious reasons, insurance companies are also interested in the quality of driving – Pay As How Well You Drive – and several companies now offer a discount for “safe” driving, based on avoiding events such as hard braking, sudden swerves, and speed violations.
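As a purely illustrative sketch (the per-mile rate, penalty scheme and all figures below are invented, not any insurer’s actual formula), a usage-based premium might combine a per-mile charge with a discount that shrinks as risky events accumulate:

```python
def payd_premium(base_rate, miles_driven, hard_brakes, swerves, speed_violations,
                 per_mile=0.05, max_discount=0.3):
    """Hypothetical Pay-As-How-You-Drive premium: a per-mile charge plus a
    safe-driving adjustment. All rates here are invented for illustration."""
    usage_charge = per_mile * miles_driven
    risky_events = hard_brakes + swerves + speed_violations
    # Fewer risky events per 1000 miles earns a larger discount, capped at max_discount.
    events_per_1000 = risky_events / max(miles_driven / 1000, 1e-9)
    discount = max(0.0, max_discount - 0.02 * events_per_1000)
    return (base_rate + usage_charge) * (1 - discount)

print(payd_premium(base_rate=400, miles_driven=8000,
                   hard_brakes=5, swerves=2, speed_violations=1))
```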

Researchers at the University of Washington argue that each driver has a unique style of driving, including steering, acceleration and braking, which they call a “driver fingerprint”. They claim that drivers can be quickly and reliably identified from the braking event stream alone.
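The published pipeline is considerably more sophisticated, but the core idea can be sketched as an ordinary supervised-learning problem: summarise each braking event as a few numeric features and train a classifier to guess which driver produced it. The features, synthetic data and model below are my own simplification, not the authors’ method.

```python
# Simplified illustration of the "driver fingerprint" idea. The feature choice,
# synthetic data and model are invented; the study works from real CAN-bus traces
# with a much richer feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
features, drivers = [], []
# Three hypothetical drivers, each with a characteristic braking style.
for driver_id, (mean_decel, mean_duration) in enumerate([(2.5, 1.2), (3.5, 0.8), (1.8, 1.6)]):
    for _ in range(200):  # 200 braking events per driver
        peak_decel = rng.normal(mean_decel, 0.4)    # m/s^2
        duration = rng.normal(mean_duration, 0.2)   # seconds
        pedal_jerk = rng.normal(peak_decel / duration, 0.3)
        features.append([peak_decel, duration, pedal_jerk])
        drivers.append(driver_id)

X, y = np.array(features), np.array(drivers)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```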

Bruce Schneier posted a brief summary of this research on his blog without further comment, but a range of comments were posted by his readers. Some expressed scepticism about the reliability of the algorithm, while others pointed out that driver behaviour varies according to context – people drive differently when they have their children in the car, or when they are driving home from the pub.

“Drunk me drives really differently too. Sober me doesn’t expect trees to get out of the way when I honk.”

Although the algorithm produced by the researchers may not allow for this kind of complexity, there is no reason in principle why a more sophisticated algorithm couldn’t allow for it. I have long argued that JOHN-SOBER and JOHN-DRUNK should be understood as two different identities, with recognizably different patterns of behaviour and risk. (See my post on Identity Differentiation.)
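As a toy sketch of that modelling idea (the names, labels and risk figures are invented), one legal person could carry several behavioural identities, each priced on its own risk profile:

```python
from dataclasses import dataclass

@dataclass
class BehaviouralIdentity:
    """One behavioural persona of a single legal person, with its own risk profile."""
    label: str              # e.g. "JOHN-SOBER", "JOHN-DRUNK"
    risk_multiplier: float  # invented figures, for illustration only

@dataclass
class Person:
    name: str
    identities: list

john = Person("John", [
    BehaviouralIdentity("JOHN-SOBER", risk_multiplier=1.0),
    BehaviouralIdentity("JOHN-DRUNK", risk_multiplier=4.0),
])

def premium_for(person, active_label, base_premium=400.0):
    # Price against whichever behavioural identity the telematics data suggests is driving.
    identity = next(i for i in person.identities if i.label == active_label)
    return base_premium * identity.risk_multiplier

print(premium_for(john, "JOHN-DRUNK"))
```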

However, the researchers are primarily interested in the opportunities and threats created by the possibility of using the “driver fingerprint” as a reliable identification mechanism.

  • Insurance companies and car rental companies could use “driver fingerprint” data to detect unauthorized drivers.
  • When a driver denies being involved in an incident, “driver fingerprint” data could provide relevant evidence.
  • The police could remotely identify the driver of a vehicle during an incident.
  • “Driver fingerprint” data could be used to enforce safety regulations, such as the maximum number of hours driven by any driver in a given period.

While some of these use cases might be justifiable, the researchers outline various scenarios where this kind of “fingerprinting” would represent an unjustified invasion of privacy, observe how easy it is for a third party to obtain and abuse driver-related data, and call for a permission-based system for controlling data access between multiple devices and applications connected to the CAN bus within a vehicle. (CAN is a low-level protocol, and does not support any security features intrinsically.)
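To make the idea of a permission-based system slightly more concrete, here is a hypothetical sketch of a gateway that filters CAN frames per application. The arbitration IDs, application names and policy are invented; real CAN IDs are manufacturer-specific, and CAN itself provides no such access control.

```python
# Hypothetical permission gateway between in-vehicle applications and the CAN bus.
# All IDs, app names and policy entries below are invented for illustration.
BRAKE_PRESSURE = 0x220
WHEEL_SPEED = 0x354
GPS_POSITION = 0x412

PERMISSIONS = {
    "insurance_dongle": {WHEEL_SPEED},                  # mileage only
    "fleet_safety_app": {WHEEL_SPEED, BRAKE_PRESSURE},  # safety events, no location
}

def deliver(app_name, can_id, payload):
    """Forward a CAN frame to an application only if the policy allows it."""
    allowed = PERMISSIONS.get(app_name, set())
    if can_id not in allowed:
        raise PermissionError(f"{app_name} is not permitted to read frame 0x{can_id:X}")
    return payload

print(deliver("insurance_dongle", WHEEL_SPEED, b"\x00\x42"))
# deliver("insurance_dongle", GPS_POSITION, b"...")  -> raises PermissionError
```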


Sources

Miro Enev, Alex Takakuwa, Karl Koscher, and Tadayoshi Kohno, Automobile Driver Fingerprinting, Proceedings on Privacy Enhancing Technologies, 2016(1):34–51

Andy Greenberg, A Car’s Computer Can ‘Fingerprint’ You in Minutes Based on How You Drive (Wired, 25 May 2016)

Bruce Schneier, Identifying People from their Driving Patterns (30 May 2016)

See also John H.L. Hansen, Pinar Boyraz, Kazuya Takeda, Hüseyin Abut, Digital Signal Processing for In-Vehicle Systems and Safety. Springer Science and Business Media, 21 Dec 2011

Wikipedia: CAN bus, Vehicle bus


Related Posts

Identity Differentiation (May 2006)

Pay As You Drive (October 2006) (June 2008) (June 2009)

3 years, 5 months ago

Will the Rise of the IoT Mean the Fall of Privacy?

I’m excited about the Internet of Things (IoT), and I expect it to create incredible opportunities for companies in almost every industry. But I’m also concerned that the issues of security, data privacy, and our expectations of a right to privacy in general — unless suitably addressed — could hinder the adoption of the IoT […]