#paxtechnica Today I was at the @CRASSHlive conference in Cambridge to hear a series of talks and panel discussions on The Implications of the Internet of Things. For a comprehensive account, see @LaurieJ’s livenotes.
When I read Philip Howard’s book last week, I wondered why he had devoted so much of it to internet phenomena such as social media and junk news, when the notional topic was the Internet of Things. His keynote address today made the connection much clearer: while social media provides data about attitudes and aspirations, the Internet of Things provides data about behaviour. Combining these different types of data produces a much richer web of information.
For example, Howard mentioned a coffee company that wanted to use IoT sensors to track the entire coffee journey, from farm to discarded cup. (Another speaker expressed scepticism about the value of this data, arguing that most of the added value of IoT comes from actuators rather than sensors.)
To the extent that this data involves personal information, it raises political concerns. Some of the speakers today spoke of surveillance capitalism, and there were useful talks on security and privacy. (See separate post on Risk and Security.)
In his 2014 essay, Bruce Sterling characterizes the Internet of Things as “an epic transformation: all-purpose electronic automation through digital surveillance by wireless broadband”. According to Sterling, powerful stakeholders like the slogan ‘Internet of Things’ “because it sounds peaceable and progressive”.
Peaceable? Howard uses the term Pax. This refers to a period in which the centre is stable and relatively peaceful, although the periphery may be marked by local skirmishes and violence (p7). His historical examples are the Pax Romana, the Pax Britannica and the Pax Americana. He argues that we are currently living in a similar period, which he calls Pax Technica.
For Howard, “a pax indicates a moment of agreement between government and the technology industry about a shared project and way of seeing the world” (p6). This seems akin to Gramsci’s notion of cultural hegemony, “the idea that the ruling class can manipulate the value system and mores of a society, so that their view becomes the world view or Weltanschauung” (Wikipedia).
But whose tech? Howard has documented significant threats to democracy from foreign governments using social media bots to propagate junk news, and there are widespread fears that this propaganda has had a significant effect on several recent elections. If the Russians are often mentioned in the context of social media bots and junk news, the Chinese are often mentioned in the context of dodgy Internet of Things devices. While some political factions in the West are accused of collaborating with the Russians, and some commercial interests (notably pharma) may be using similar propaganda techniques, it seems odd to frame this as part of a shared project between government and the technology industry. Howard’s research instead indicates a new technological cold war, in which techniques originally developed by authoritarian regimes to control their own citizens are repurposed to undermine and destabilize democratic regimes.
David Runciman talked provocatively about government of the things, by the things, for the things. (Someone in the audience linked this, perhaps optimistically, to Bruno Latour’s Parliament of Things.) But Runciman’s formulation foregrounds the devices (the “things”) and overlooks the relationships behind the devices (the “internet of”). (This is related to Albert Borgmann’s notion of the Device Paradigm.) As consumers we may spend good money on products with embedded internet-enabled devices, only to discover that these devices don’t truly belong to us but remain loyal to their manufacturers. They monitor our behaviour, they may refuse to work with non-branded spare parts, or they may terminate service altogether. As Ian Steadman reports, it is becoming ever more common for everyday appliances to have features we don’t expect. (Steadman’s article is worth reading in full; he also quotes some prescient science fiction from Philip K Dick’s 1969 novel Ubik.) “Very soon your house will betray you,” warns architect Rem Koolhaas (Guardian, 12 March 2014).
There are important ethical questions here, relating to non-human agency and the Principal-Agent problem.
But the invasion of IoT into our lives doesn’t stop there. Justin McGuirk worries that “our countless daily actions and choices around the house become what define us”, and quotes a line from Dave Eggers’ 2013 novel, The Circle:
“Having a matrix of preferences presented as your essence, as the whole you? … It was some kind of mirror, but it was incomplete, distorted.”
So personal identity and socioeconomic status may become precarious. This deserves further thought. In the meantime, here is a quote from Christa Teston:
“Wearable technologies … are non-human actors that interact with other structural conditions to determine whose bodies count.”
Dan Herman, Dave Eggers’ “The Circle” — on tech, big data and the human component (Metaweird, Oct 2013)
Philip Howard, Pax Technica: How The Internet of Things May Set Us Free or Lock Us Up (Yale 2015)
Justin McGuirk, Honeywell, I’m Home! The Internet of Things and the New Domestic Landscape (e-flux #64 April 2015)
John Naughton, 95 Theses about Technology (31 October 2017)
Ian Steadman, Before we give doors and toasters sentience, we should decide what we’re comfortable with first (New Statesman, 10 February 2015)
Bruce Sterling, The Epic Struggle of the Internet of Things (2014). Extract via BoingBoing (13 Sept 2014)
Christa Teston, Rhetoric, Precarity, and mHealth Technologies (Rhetoric Society Quarterly, 46:3, 2016) pp. 251-268