
The Nudge as a Speech Act

Link: http://feedproxy.google.com/~r/DemandingChange/~3/KK4aVybNPNk/the-nudge-as-speech-act.html

As I said in my previous post, I don’t think we can start to think about the ethics of technology nudges without recognizing the complexity of real-world nudges. So in this post, I shall look at how nudges are communicated in the real world, before considering what their artificial analogues might look like.

Once upon a time, nudges were physical rather than verbal – a push on the shoulder perhaps, or a dig in the ribs with an elbow. The meaning was elliptical and depended almost entirely on context. “Nudge nudge, wink wink”, as Monty Python used to say.

Even technologically mediated nudges can sometimes be physical, or what we should probably call haptic. For example, the fitness band that vibrates when it thinks you have been sitting for too long.

Many of the acts we now think of as nudges, however, are delivered verbally, as some kind of speech act. But which kind?

The most obvious kind of nudge is a direct suggestion, which may take the form of a weak command. (“Try and eat a little now.”) But nudges can also take other illocutionary forms, including questions (“Don’t you think the sun is very hot here?”) and statements / predictions (“You will find that new nose of yours very useful to spank people with.”).

(Readers familiar with Kipling may recognize my examples as the nudges given by the Bi-Coloured-Python-Rock-Snake to the Elephant’s Child.)

The force of a suggestion may depend on context and tone of voice. (A more systematic analysis of what philosophers call illocutionary force can be found in the Stanford Encyclopedia of Philosophy, based on Searle and Vanderveken 1985.)

@tonyjoyce raises a good point about tone of voice in electronic messages. Traditionally robots don’t do tone of voice, and when a human being talks in a boring monotone we may describe their speech as robotic. But I can’t see any reason why robots couldn’t be programmed with more varied speech patterns, including tonality, if their designers saw the value of this.

Meanwhile, we already get some differentiation from electronic communications. For example, I should expect an electronic announcement to “LEAVE THE BUILDING IMMEDIATELY” to have a tone of voice that conveys urgency, and we might think it inappropriate or even unethical to use the same tone of voice for selling candy. We might put this together with other attention-seeking devices, such as flashing red text. The people who design clickbait clearly understand illocutionary force (even if they aren’t familiar with the term).
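The idea that presentation cues should match a message’s illocutionary force could be sketched as a simple mapping from intended force to delivery style. This is a purely hypothetical illustration — the function and style names are invented, not any real notification API:

```python
# Hypothetical sketch: matching the presentation of an electronic message
# to its intended illocutionary force. All names here are invented for
# illustration purposes.

URGENCY_STYLES = {
    "alarm":      {"volume": "loud", "pitch": "high",    "visual": "flashing red"},
    "suggestion": {"volume": "soft", "pitch": "neutral", "visual": "plain"},
}

def render_message(text: str, force: str) -> str:
    """Attach presentation cues appropriate to the message's force."""
    style = URGENCY_STYLES[force]
    return f"[{style['volume']}/{style['pitch']}/{style['visual']}] {text}"

print(render_message("LEAVE THE BUILDING IMMEDIATELY", "alarm"))
print(render_message("Why not try our new candy?", "suggestion"))
```

The ethical point, on this sketch, is that the mapping itself is a design choice: using the “alarm” style for a mere suggestion would be borrowing a force the message does not legitimately have.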

A speech act can also gain force by being associated with action. If I promise to donate money to a given charity, this may nudge other people to do the same; but if they see me actually putting the money in the tin, the nudge might be much stronger. Then again, the nudge might be just as strong if I simply put the money in the tin without saying anything, as long as everyone sees me do it. The important point is that some communication takes place, whether verbal or non-verbal, and this returns us to something closer to the original concept of nudge.

From an ethical point of view, there are particular concerns about unobtrusive or subliminal nudges. Yeung has introduced the concept of the Hypernudge, which combines three qualities: nimble, unobtrusive and highly potent. I share her concerns about this combination, but I think it is helpful to deal with these three qualities separately, before looking at the additional problems that may arise when they are combined.

Proponents of the nudge sometimes try to distinguish between unobtrusive (acceptable) and subliminal (unacceptable), but this distinction may be hard to sustain, and many people quote Luc Bovens’ observation that nudges “typically work better in the dark”. See also Baldwin.

I’m sure there’s more to say on this topic, so I may update this post later. Relevant comments always welcome.

Robert Baldwin, From regulation to behaviour change: giving nudge the third degree (The Modern Law Review 77/6, 2014) pp 831-857

Luc Bovens, The Ethics of Nudge. In Till Grüne-Yanoff and Sven Ove Hansson (eds.), Preference Change: Approaches from Philosophy, Economics and Psychology (Berlin: Springer, 2008) pp. 207-219

John Danaher, Algocracy as Hypernudging: A New Way to Understand the Threat of Algocracy (Institute for Ethics and Emerging Technologies, 17 January 2017)

J. Searle and D. Vanderveken, Foundations of Illocutionary Logic (Cambridge: Cambridge University Press, 1985)

Karen Yeung, ‘Hypernudge’: Big Data as a Mode of Regulation by Design (Information, Communication & Society, 2016, pp. 1-19; TLI Think! Paper 28/2016)

Stanford Encyclopedia of Philosophy: Speech Acts

Related post: On the Ethics of Technologically Mediated Nudge (May 2019)

Updated 28 May 2019. Many thanks to @tonyjoyce