How Many Ethical Principles?

Although ethical principles have been put forward by philosophers through the ages, the first person to articulate ethical principles for information technology was Norbert Wiener. In his book The Human Use of Human Beings, first published in 1950, Wiener based his computer ethics on what he called four great principles of justice.

Freedom. Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”  

Equality. Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.” 

Benevolence. Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”  

Minimum Infringement of Freedom. “What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom.”

Meanwhile, Isaac Asimov’s Three Laws of Robotics were developed in a series of short stories in the 1940s, around the same time that Wiener was developing his ideas about cybernetics. Many writers on technology ethics argue that robots (or any other form of technology) should be governed by principles, and this idea is often credited to Asimov. But as far as I can recall, every Asimov story that mentions the Three Laws of Robotics produces some counter-example to demonstrate that the Three Laws don’t actually work as intended. I have therefore always regarded Asimov’s work as satirical rather than prescriptive. (Similarly, J.K. Rowling’s descriptions of the unsuccessful attempts by wizard civil servants to regulate the use of magical technologies.)

So for several decades, the Wiener approach to ethics prevailed, and discussion of computer ethics was focused on a common set of human values: life, health, security, happiness, freedom, knowledge, resources, power and opportunity. (Source: SEP: Computer and Information Ethics)

But these principles were essentially no different to the principles one would find in any other ethical domain. For many years, scholars disagreed as to whether computer technology introduced an entirely new set of ethical issues, one that would therefore call for a new set of principles. The turning point came at the ETHICOMP95 conference in March 1995 (just two months before Bill Gates’ Internet Tidal Wave memo), with important presentations from Walter Maner (who had been arguing this point for years) and Krystyna Górniak-Kocikowska. From this point onwards, computer ethics would have to address some additional challenges, including the global reach of the technology – beyond the control of any single national regulator – and the vast proliferation of actors and stakeholders. Terrell Bynum calls this the Górniak hypothesis.

Picking up the challenge, Luciano Floridi started to look at the ethical issues raised by autonomous and interactive agents in cyberspace. In a 2001 paper on Artificial Evil with Jeff Sanders, he stated “It is clear that something similar to Asimov’s Laws of Robotics will need to be enforced for the digital environment (the infosphere) to be kept safe.”

Floridi’s work on Information Ethics (IE) represented an attempt to get away from the prevailing anthropocentric worldview. “IE suggests that there is something even more elemental than life, namely being – that is, the existence and flourishing of all entities and their global environment – and something more fundamental than suffering, namely entropy.” He therefore articulated a set of principles concerning ontological equality (any instance of information/being enjoys a minimal, initial, overridable, equal right to exist and develop in a way which is appropriate to its nature) and information entropy (which ought not to be caused in the infosphere, ought to be prevented, ought to be removed). (Floridi 2006)

In the past couple of years, there has been a flood of ethical principles to choose from. In his latest blogpost, @Alan_Winfield lists over twenty sets of principles for robotics and AI published between January 2017 and April 2019, while AlgorithmWatch lists over fifty. Of particular interest may be the principles published by some of the technology giants, as well as the absence of such principles from some of the others. Meanwhile, Professor Floridi’s more recent work on ethical principles appears to be more conventionally anthropocentric.

The impression one gets from all these overlapping sets of principles is of lots of experts and industry bodies competing to express much the same ideas in slightly different terms, in the hope that their version will be adopted by everyone else.

But what would “adopted” actually mean? One possible answer is that these principles might feed into what I call Upstream Ethics, contributing to a framework for both regulation and action. However, some commentators have expressed scepticism as to the value of these principles. For example, @InternetDaniel thinks that these lists of ethical principles are “too vague to be effective”, and suggests that this may even be intentional, these efforts being “largely designed to fail”. And @EricNewcomer says “we’re in a golden age for hollow corporate statements sold as high-minded ethical treatises”.

As I wrote in an earlier piece on principles:

In business and engineering, as well as politics, it is customary to appeal to “principles” to justify some business model, some technical solution, or some policy. But these principles are usually so vague that they provide very little concrete guidance. Profitability, productivity, efficiency: these can mean almost anything you want them to mean. And when principles interfere with what we really want to do, we simply come up with a new interpretation of the principle, or another overriding principle, which allows us to do exactly what we want while dressing up the justification in terms of “principles”. (January 2011)

The key question is about governance – how will these principles be applied and enforced, and by whom? What many people forget about Asimov’s Three Laws of Robotics is that they weren’t enforced by roving technology regulators, but were designed into the robots themselves, thanks to the fact that one corporation (U.S. Robots and Mechanical Men, Inc.) had control over the relevant patents and therefore exercised a monopoly over the manufacture of robots. No doubt Google, IBM and Microsoft would like us to believe that they can be trusted to produce ethically safe products, but clearly this doesn’t address the broader problem.

Following the Górniak hypothesis, if these principles are to mean anything, they need to be taken seriously not only by millions of engineers but also by billions of technology users. And I think this in turn entails something like what Julia Black calls Decentred Regulation, which I shall try to summarize in a future post. Hopefully this won’t be just what Professor Floridi calls Soft Ethics.

Update: My post on Decentred Regulation and Responsible Technology is now available.

AlgorithmWatch, AI Ethics Guidelines Global Inventory

Terrell Bynum, Computer and Information Ethics (Stanford Encyclopedia of Philosophy)

Luciano Floridi, Information Ethics – Its Nature and Scope (ACM SIGCAS Computers and Society, September 2006)

Luciano Floridi and Tim (Lord) Clement-Jones, The five principles key to any ethical framework for AI (New Statesman, 20 March 2019)

Luciano Floridi and J.W. Sanders, Artificial evil and the foundation of computer ethics (Ethics and Information Technology 3: 55–66, 2001)

Eric Newcomer, What Google’s AI Principles Left Out (Bloomberg 8 June 2018)

Daniel Susser, Ethics Alone Can’t Fix Big Tech (Slate, 17 April 2019)

Alan Winfield, An Updated Round Up of Ethical Principles of Robotics and AI (18 April 2019)

Wikipedia: Laws of Robotics, Three Laws of Robotics

Related posts: The Power of Principles (Not) (January 2011), Data and Intelligence Principles from Major Players (June 2018), Ethics Soft and Hard (February 2019), Upstream Ethics (March 2019), Ethics Committee Raises Alarm (April 2019), Decentred Regulation and Responsible Technology (April 2019)

Link corrected 26 April 2019