How much knowledge is required in order to make a proper ethical judgement?
Assuming that consequences matter, it would obviously be useful to be able to reason about the consequences. This is typically a combination of inductive reasoning (what has happened when people have done this kind of thing in the past) and predictive reasoning (what is likely to happen when I do this in the future).
There are several difficulties here. The first is the problem of induction – to what extent can we expect the past to be a guide to the future, and how relevant is the available evidence to the current problem? The evidence doesn’t speak for itself; it has to be interpreted.
For example, when Stephen Jay Gould was informed that he had a rare cancer of the abdomen, the medical literature indicated that the median survival for this type of cancer was only eight months. However, his statistical analysis of the range of possible outcomes led him to the conclusion that he had a good chance of finding himself at the favourable end of the range, and in fact he lived for another twenty years until an unrelated cancer got him.
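Gould’s point – that a median tells you little about where any individual will fall in a right-skewed distribution – can be illustrated with a small simulation. The numbers below are hypothetical, chosen only so the median comes out at eight months; they are not Gould’s actual data.

```python
import math
import random
import statistics

# Hypothetical right-skewed survival distribution: a lognormal whose
# median is fixed at 8 months (illustrative, not real clinical data).
random.seed(42)
mu = math.log(8)     # log-median: exp(mu) = 8 months
sigma = 1.0          # assumed spread of the distribution

survival = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

median = statistics.median(survival)
mean = statistics.mean(survival)
past_two_years = sum(t > 24 for t in survival) / len(survival)

print(f"median survival: {median:.1f} months")      # close to 8
print(f"mean survival:   {mean:.1f} months")        # pulled up by the long tail
print(f"fraction surviving past 2 years: {past_two_years:.0%}")
```

Even with a median of eight months, the long tail means both that the mean is well above the median and that a substantial minority of patients survive for years – which is exactly the favourable end of the range Gould reasoned he could reach.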
The second difficulty is that we don’t know enough. We are innovating faster than we can research the effects. And longer-term consequences are harder to predict than short-term ones: even if we assume an unchanging environment, we usually don’t have as much hard data about longer-term consequences.
For example, a clinical trial of a drug may tell us what happens when people take the drug for six months. But it will take a lot longer before we have a clear picture of what happens when people continue to take the drug for the rest of their lives, especially when it is taken alongside other drugs.
This might suggest that we should be more cautious about actions with long-term consequences. But that is certainly not an excuse for inaction or procrastination. One tactic of Climate Sceptics is to argue that the smallest inaccuracy in any scientific projection of climate change invalidates both the truth of climate science and the need for action. But that’s not the point. Gould’s abdominal cancer didn’t kill him – but only because he took action to improve his prognosis. Alexandria Ocasio-Cortez has recently started using the term Climate Delayers for those who find excuses for delaying action on climate change.
The third difficulty is that knowledge itself comes packaged in various disciplines or discourses. Medical ethics is dependent upon specialist medical knowledge, and technology ethics is dependent upon specialist technical knowledge. However, it would be wrong to judge ethical issues exclusively on the basis of this technical knowledge, and other kinds of knowledge (social, cultural or whatever) must also be given a voice. This probably entails some degree of cognitive diversity. Will Crouch also points out the uncertainty of predicting the values and preferences of future stakeholders.
The fourth difficulty is that there could always be more knowledge. This raises the question as to whether it is responsible to go ahead on the basis of our current knowledge, and how we can build in mechanisms to make future changes when more knowledge becomes available. Research may sometimes be a moral duty, as Tannert et al argue, but it cannot be an infinite duty.
The question of adequacy of knowledge is itself an ethical question. One of the classic examples in Moral Philosophy concerns a ship owner who sends a ship to sea without bothering to check whether it is seaworthy. Some might argue that the ship owner cannot be held responsible for the deaths of the sailors, because he didn’t actually know that the ship would sink. However, most people would see the ship owner as having a moral duty of diligence, and would hold him accountable for neglecting this duty.
But how can we know if we have enough knowledge? This raises the question of the “known unknowns” and “unknown unknowns”, which is sometimes used with a shrug to imply that no one can be held responsible for the unknown unknowns.
(And who is we? J. Nathan Matias argues that the obligation to experiment is not limited to the creators of an artefact, but may extend to other interested parties.)
The French psychoanalyst Jacques Lacan was interested in the opposition between impulsiveness and procrastination, and talks about three phases of decision-making: the instant of seeing (recognizing that some situation exists that calls for a decision), the time for understanding (assembling and analysing the options), and the moment to conclude (the final choice).
The purpose of Responsibility by Design is not just to prevent bad or dangerous consequences, but to promote good and socially useful consequences. The result of applying Responsibility by Design should not be reduced innovation, but better and more responsible innovation. The time for understanding should not drag on forever; there should always be a moment to conclude.
Matthew Cantor, Could ‘climate delayer’ become the political epithet of our times? (The Guardian, 1 March 2019)
Will Crouch, Practical Ethics Given Moral Uncertainty (Oxford University, 30 January 2012)
Stephen Jay Gould, The Median Isn’t the Message (Discover 6, June 1985) pp 40–42
J. Nathan Matias, The Obligation To Experiment (Medium, 12 December 2016)
Alex Matthews-King, Humanity producing potentially harmful chemicals faster than they can test their effects, experts warn (Independent, 27 February 2019)
Christof Tannert, Horst-Dietrich Elvers and Burkhard Jandrig, The ethics of uncertainty. In the light of possible dangers, research becomes a moral duty (EMBO Rep. 8(10) October 2007) pp 892–896
Wikipedia: There are known knowns
The ship-owner example can be found in an essay called “The Ethics of Belief” (1877) by W.K. Clifford, in which he states that “it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence”.
I describe Lacan’s model of time in my book on Organizational Intelligence (Leanpub 2012)
updated 11 March 2019