Before we can discuss the ethics of technologically mediated nudge, we need to recognize that many of the ethical issues are the same whether the nudge is delivered by a human or a robot. So let me start by trying to identify different categories of nudge.
In its simplest form, a nudge involves gentle persuasion and hints between one human being and another: parents trying to influence their children (and vice versa), teachers hoping to inspire their pupils, various forms of encouragement, consensus building and leadership. In fiction, such interventions often have evil intent and harmful consequences, but in real life let’s hope that these interventions are mostly well-meaning and benign.
In contrast, there are more large-scale forms of nudge, where a team of social engineers (such as the notorious “Nudge Unit”) design ways of influencing the behaviour of lots of people, but don’t have any direct contact with the people whose behaviour is to be influenced. A new discipline has grown up, known as Behavioural Economics.
I shall call these two types unmediated and mediated respectively.
Mediated nudges may be delivered in various ways. For example, someone in Central Government may design a nudge to encourage job-seekers to find work. Meanwhile, YouTube can nudge us to watch a TED talk about nudging. Some nudges can be distributed via the Internet, or even the Internet of Things. In general, this involves both people and technology – in other words, a sociotechnical system.
To assess the outcome of a nudge, we can look at the personal effect on the nudgee or at the wider socio-economic impact, whether short-term or longer-term. In terms of outcome, it may not make much difference whether the nudge is delivered by a human being or by a machine, since the human delivering it may simply be following a standard script or procedure. The exception is that the nudgee may feel differently about the two, and may therefore respond differently. It is an empirical question whether a given person would respond more positively to a given nudge from a human bureaucrat or from a smartphone app, and the ethical difference between the two will largely be driven by the answer.
The second distinction involves the beneficiary of the nudge. Some nudges are designed to benefit the nudgee (Cass Sunstein calls these “paternalistic”), while others are designed to benefit the community as a whole (for example, correcting some market failure such as the Tragedy of the Commons). On the one hand, nudges that encourage people to exercise more; on the other hand, nudges that remind people to take their litter home. And of course there are also nudges whose intended beneficiary is the person or organization doing the nudging. We might think here of dark patterns, shades of manipulation, various ways for commercial organizations to get the individual to spend more time or money. Clearly there are some ethical issues here.
A slightly more complicated case from an ethical perspective is where the intended outcome of the nudge is to get the nudgee to behave more ethically or responsibly towards someone else.
Sunstein sees the “paternalistic” nudges as more controversial than nudges to address potential market failures, and states two further preferences. Firstly, he prefers nudges that educate people, that serve over time to increase rather than decrease their powers of agency. And secondly, he prefers nudges that operate at a slow deliberative tempo (“System 2”) rather than at a fast intuitive tempo (“System 1”), since the latter can seem more manipulative.
Meanwhile, there is a significant category of self-nudging. There are now countless apps and other devices that will nudge you according to a set of rules or parameters that you provide yourself, implementing the kind of self-binding or precommitment that Jon Elster described in Ulysses and the Sirens (1979). Examples include the Pomodoro technique for time management, fitness trackers that will count your steps and vibrate when you have been sitting for too long, and money management apps that allocate your spare change to your chosen charity. Several years ago, Microsoft developed an experimental Smart Bra that would detect changes in the skin to predict when a woman was about to reach for the cookie jar, and give her a friendly warning. Even if there is no problem with the nudge itself (because you have consented/chosen to be nudged) there may be some ethical issues with the surveillance and machine learning systems that enable the nudge. Especially when the nudging device is kindly made available to you by your employer or insurance company.
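The rules-based character of these self-nudging devices can be sketched in a few lines of code. This is a minimal illustration of the precommitment pattern, not any real product’s logic: the `SelfNudgeRule` class, its parameter names, and the thresholds are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SelfNudgeRule:
    """A precommitment rule the user configures in advance (self-binding)."""
    max_sitting_minutes: int   # user-chosen: vibrate after sitting this long
    daily_step_target: int     # user-chosen: remind until this many steps

def check_nudges(rule: SelfNudgeRule, minutes_sitting: int, steps_today: int) -> list[str]:
    """Compare current sensor readings against the user's own rules
    and return any nudges that are now due."""
    nudges = []
    if minutes_sitting >= rule.max_sitting_minutes:
        nudges.append("vibrate: you have been sitting too long")
    if steps_today < rule.daily_step_target:
        nudges.append(f"reminder: {rule.daily_step_target - steps_today} steps to go")
    return nudges

# The user binds their future self by fixing the parameters now,
# like Ulysses tying himself to the mast.
rule = SelfNudgeRule(max_sitting_minutes=60, daily_step_target=8000)
print(check_nudges(rule, minutes_sitting=75, steps_today=5000))
```

The key point of the pattern is that the nudger and the nudgee are the same person, separated only in time: the parameters are fixed at a calm moment, and enforced at a weaker one.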
And even if the immediate outcome of the nudge is beneficial to the nudgee, in some situations there may be concerns that the nudgee becomes over-dependent on being nudged, and thereby loses some element of self-control or delayed gratification.
The final distinction I want to introduce here concerns the direction of the nudge. The most straightforward nudges are those that push an individual in the desired direction: suggestions to eat more healthy food, suggestions to direct spare cash to charity or savings. But some forms of therapy are based on paradoxical interventions, where the individual is pushed in the opposite direction, and reacts by moving in the direction you want them to go. For example, if you want someone to give up some activity that is harming them, you might suggest they carry out this activity more systematically or energetically. This is sometimes known as reverse psychology or prescribing the symptom. Faced with a girl who was biting her nails, the therapist Milton Erickson advised her how she could get more enjoyment from biting her nails. Astonished by this advice, which was of course in direct opposition to all the persuasion and coercion she had received from other people up to that point, she found she was now able to give up biting her nails altogether.
(Richard Bordenave attributes paradoxical intervention to Paul Watzlawick, who worked with Gregory Bateson. It can also be found in some versions of Neuro-Linguistic Programming (NLP), which was strongly influenced by both Bateson and Erickson.)
Of course, this technique can also be practised in an ethically unacceptable direction as well. Imagine a gambling company whose official message to gamblers is that they should invest their money in a sensible savings account instead of gambling it away. This might seem like an ethically noble gesture, until we discover that the actual effect on people with a serious gambling problem is that this causes them to gamble even more. (In the same way that smoking warnings can cause some people to smoke more. Possibly cigarette companies are aware of this.)
Paradoxical interventions make perfect sense in terms of systems theory, which teaches us that the links from cause to effect are often complex and non-linear. Sometimes an accumulation of positive nudges can tip a system into chaos or catastrophe, as Donella Meadows notes in her classic essay on Leverage Points.
The Leverage Point framework may also be useful in comparing the effects of nudging at different points in a system. Robert Steele notes the use of a nudge based on restructuring information flows; in contrast, a nudge designed to alter the nudgee’s preferences, goals or political opinions could be much more dangerously powerful, as Zeynep Tufekci (@zeynep) has demonstrated in relation to YouTube.
One of the things that complicates the ethics of Nudge is that the alternative to nudging may be either greater forms of coercion or worse outcomes for the individual. In his article on the Ethics of Nudging, Cass Sunstein argues that all human interaction and activity takes place inside some kind of Choice Architecture, so some form of nudging is probably inevitable, whether deliberate or inadvertent. He also argues that nudges may be required on ethical grounds to the extent that they promote our core human values. (This might imply that it is sometimes irresponsible to miss an opportunity to provide a helpful nudge.) So the ethical question is not whether to nudge or not, but how to design nudges in such a way as to maximize these core human values, which he identifies as welfare, autonomy and human dignity.
While we can argue with some of the detail of Sunstein’s position, I think his two main conclusions make reasonable sense. Firstly, that we are always surrounded by what Sunstein calls Choice Architectures, so we can’t get away from the nudge. And secondly, that many nudges are not just preferable to whatever the alternative might be but may also be valuable in their own right.
So what happens when we introduce advanced technology into the mix? For example, what if we have a robot that is programmed to nudge people, perhaps using some kind of artificial intelligence or machine learning to adapt the nudge to each individual in a specific context at a specific point in time?
Within technology ethics, transparency is a major topic. If the robot is programmed to include a predictive model of human psychology that enables it to anticipate the human response in certain situations, this model should be open to scrutiny. Although such models can easily be wrong or misguided, especially if the training data set reflects an existing bias, with reasonable levels of transparency (at least for the appropriate stakeholders) it will usually be easier to detect and correct these errors than to fix human misconceptions and prejudices.
In science fiction, robots have sufficient intelligence and understanding of human psychology to invent appropriate nudges for a given situation. If we start to see more of this in real life, we could start to think of these as unmediated robotic nudges, instead of the robot merely being the delivery mechanism for a mediated nudge. But does this introduce any additional ethical issues, or merely amplify the importance of the ethical issues we are already looking at?
Finally, some people think that the ethical rules should be more stringent for robotic nudges than for other kinds of nudges. For example, I’ve heard people talking about parental consent before permitting children to be nudged by a robot. But other people might think it was safer for a child to be nudged (for whatever purpose) by a robot than by an adult human. And if you think it is a good thing for a child to work hard at school, eat her broccoli, and be kind to those less fortunate than herself, and if robotic persuasion turns out to be the most effective and child-friendly way of achieving these goals, do we really want heavier regulation on robotic child-minders than human ones?
Richard Bordenave, Comment les paradoxes permettent de réinventer les nudges (Harvard Business Review France, 30 January 2019). Adapted English version: When paradoxes inspire Nudges (6 April 2019)
Jon Elster, Ulysses and the Sirens (1979)
Jochim Hansen, Susanne Winzeler and Sascha Topolinski, When the Death Makes You Smoke: A Terror Management Perspective on the Effectiveness of Cigarette On-Pack Warnings (Journal of Experimental Social Psychology 46(1):226-228, January 2010) HT @ABMarkman
Donella Meadows, Leverage Points: Places to Intervene in a System (Whole Earth Review, Winter 1997)
Robert Steele, Implementing an integrated and transformative agenda at the regional and national levels (AtKisson, 2014)
Cass Sunstein, The Ethics of Nudging (Yale Journal on Regulation, 32, 2015)
Iain Thomson, Microsoft researchers build ‘smart bra’ to stop women’s stress eating (The Register, 6 Dec 2013)
Zeynep Tufekci, YouTube, the Great Radicalizer (New York Times, 10 March 2018)
Stanford Encyclopedia of Philosophy: The Ethics of Manipulation