The philosopher J.L. Austin observed that sometimes words don’t merely describe reality; they enact something. A commonly cited example is that when a suitably authorized person pronounces a couple married, it is the speaking of these words that makes them true. Austin called this a performative utterance; later writers usually refer to this as performativity.
In this post, I want to explore some ways in which data and information may be performative.
In my previous post on Data as Pictures, I mentioned the self-fulfilling power of labels. For example, when a person is labelled and treated as a potential criminal, this may make it more difficult for them to live as a law-abiding citizen, and they are therefore steered towards a life of crime. Thus the original truth of the data becomes almost irrelevant, because the data creates its own truth. Or as Bowker and Star put it, “classifications … have material force in the world” (p39).
Many years ago, I gave a talk at King’s College London which included some half-formed thoughts on the philosophy of information. I included some examples where it might seem rational to use information even if you don’t believe it.
Keynes attributed the waves of optimism and pessimism that sweep through a market to something he called animal spirits. Where there is little real information, even false information may be worth acting upon. So imagine that a Wall Street astrologer publishes a daily star chart of the US president, and this regularly affects the stock market. Not because many people actually believe in astrology, but because many people want to be one step ahead of the few people who do believe in astrology. Even if nobody takes astrology seriously, but they all think that other people might take it seriously, then they will collectively act as if they do take it seriously. Fiction functioning as truth.
(There was an astrologer in the White House during the Reagan administration, so this example didn’t seem so far-fetched at that time.)
For my second example, I imagined the head of a sugar corporation going on television to warn the public about a possible shortage of sugar. Consumers typically respond to this kind of warning by stockpiling, leaving the supermarket shelves empty of sugar. So this is another example of a self-fulfilling prophecy – a speech act that created its own truth.
I then went on to imagine the converse. Suppose the head of the sugar corporation went on television to reassure the public that there was no possibility of a sugar shortage. A significant number of consumers could reason either that the statement is false, or that even if the statement is true many consumers won’t believe it. So to be on the safe side, better buy a few extra bags of sugar. Result – sugar shortage.
So here we seem to have a case where two opposite statements can appear to produce exactly the same result.
Back in the 1980s I was talking about opinions, from a person with a known status or reputation, published or broadcast in what we now call traditional media. So what happens when these opinions are disconnected from the person and embedded in dashboards and algorithms?
It’s not difficult to find examples where data produces its own reality. If a recommendation algorithm identifies a new item as a potential best-seller, this item will be recommended to a lot of people and – not surprisingly – it becomes a best-seller. Obviously this doesn’t work all the time, but it is hard to deny that these algorithms contribute significantly to the outcomes that they appear to predict. Meanwhile YouTube identifies people who may be interested in extreme political content, some of whom then become interested in extreme political content. And then there’s Facebook’s project to “connect the world”. There are real-world effects here, generated by patterns of data.
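This feedback loop can be sketched as a toy simulation. The numbers and setup are purely illustrative and don’t correspond to any real recommender system: two items with identical intrinsic appeal, where the algorithm’s “prediction” that item A will be a best-seller simply means A gets more exposure, and the extra exposure alone makes the prediction come true.

```python
import random

random.seed(42)

# Toy model: two items of identical intrinsic appeal (hypothetical values).
APPEAL = 0.1                        # chance a user buys an item they are shown
EXPOSURE = {"A": 0.9, "B": 0.1}     # share of users shown each item -- the "prediction"

sales = {"A": 0, "B": 0}
for _ in range(10_000):             # 10,000 simulated users
    for item, share in EXPOSURE.items():
        # A user buys only if the item is shown to them AND it appeals to them
        if random.random() < share and random.random() < APPEAL:
            sales[item] += 1

print(sales)  # A far outsells B, despite identical appeal
```

The prediction supplies the exposure, and the exposure supplies the sales: the algorithm manufactures the best-seller it claimed to foresee.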
Another topic to consider is the effects produced by measurement and targets. On the one hand, there is a view that measuring performance helps to motivate improvements, which is why you often see performance dashboards prominently displayed in offices. On the other hand, there is a widespread concern that excessive focus on narrowly defined targets (“target culture”) distorts or misdirects performance – for example, teachers teaching to the test. Hannah Fry’s article contains several examples of this, which is sometimes known as Goodhart’s Law. Either way, there is an expectation that measuring something has a real-world effect, whether positive or negative.
If you can think of any other examples of the performativity of data, please comment below.
Geoffrey Bowker and Susan Leigh Star, Sorting Things Out (MIT Press, 1999)
Hannah Fry, What Data Can’t Do (New Yorker, 22 March 2021)
Richard Veryard, Speculation and Information: The Epistemology of Stock Market Fluctuations (Invited presentation, King’s College London, 16 November 1988). Warning – the theory needs a complete overhaul, but the examples are interesting.
Related posts: Target Setting: What You Measure Is What You Get (April 2005), Ethical Communication in a Digital Age (November 2018), Algorithms and Governmentality (July 2019), Data as Pictures (August 2021), Can Predictions Create Their Own Reality (August 2021). Rob Barratt of Bodmin kindly contributed a poem on target culture in the comments below my Target Setting post.
Links added 27 August 2021