Dr @BenGoldacre was the keynote speaker at an IT conference I attended recently. In the context of the growing interest in technology ethics, especially AI ethics, I asked him what IT could learn from medical ethics. He responded by criticising the role of the ethics committee, and mentioned a recent case in which an ethics committee had blocked an initiative that could have collected useful data concerning the effectiveness of statins. This is an example of what Goldacre calls the ethical paradox. As he wrote in 2008,
“You can do something as part of a treatment program, entirely on a whim, and nobody will interfere, as long as it’s not potty (and even then you’ll probably be alright). But the moment you do the exact same thing as part of a research program, trying to see if it actually works or not, adding to the sum total of human knowledge, and helping to save the lives of people you’ll never meet, suddenly a whole bunch of people want to stick their beaks in.”
Within IT, there is considerable controversy about the role of the ethics committee, especially after Google appointed and then disbanded its Ethics Board. In a recent article for Slate, @internetdaniel complains about company ethics boards offering “advice” rather than meaningful oversight, and calls this ethics theatre. @ruchowdh prefers to call it ethics washing.
So I was particularly interested to find a practical example of an ethics committee in action in this morning’s Guardian. While the outcome of this case is not yet clear, there seem to be some positive indicators in @sloumarsh’s report.
Firstly, the topic (predictive policing) is clearly an important and difficult one. It is not just a matter of applying a simplistic set of ethics principles, but of balancing a conflicting set of interests and concerns. (As @oscwilliams reports, this topic has already attracted the attention of the Information Commissioner’s Office.)
Secondly, the discussion is in the open, and the organization is making the right noises. “This is an important area of work, that is why it is right that it is properly scrutinised and those details are made public.” (This contrasts with some of the bad examples of medical ethics cited by Goldacre.)
Thirdly, the ethics committee is (informally) supported by a respected external body (Liberty), which adds weight to its concerns and has helped bring the case to public attention. (Credit @Hannah_Couchman)
Fourthly, although the ethics committee mandate only applies to a single police force (West Midlands), its findings are likely to be relevant to other police forces across the UK. For those forces that do not have a properly established governance process of their own, the default path may be to follow the West Midlands example.
So it is possible (although not guaranteed) that this particular case may produce a reasonable outcome, with a valuable contribution from the ethics committee and its external supporters. But it is worrying if this is what it takes for governance to work, because this happy combination of positive indicators will not be present in most other cases.
Ben Goldacre, Where’s your ethics committee now, science boy? (Bad Science Blog, 23 February 2008), When Ethics Committees Kill (Bad Science Blog, 26 March 2011), Taking transparency beyond results: ethics committees must work in the open (Bad Science Blog, 23 September 2016)
Sarah Marsh, Ethics committee raises alarm over ‘predictive policing’ tool (The Guardian, 20 April 2019)
Daniel Susser, Ethics Alone Can’t Fix Big Tech (Slate, 17 April 2019)
Jane Wakefield, Google’s ethics board shut down (BBC News, 5 April 2019)
Oscar Williams, Some of the UK’s biggest police forces are using algorithms to predict crime (New Statesman, 4 February 2019)