I was involved in a debate this week, concerning whether ethical principles and standards should include weapons systems, or whether military purposes should be explicitly excluded.
On both sides of the debate, there were people who strongly disapproved of weapons systems, but this disapproval led them to two opposite positions. One side felt that applying any ethical principles and standards to such systems would imply a level of ethical approval or endorsement, which they would prefer to withhold. The other side felt that weapons systems called for at least as much ethical scrutiny as anything else, if not more, and thought that exempting weapons systems implied a free pass.
It goes without saying that people disapprove of weapons systems to different degrees. Some people think they are unacceptable in all circumstances, while others see them as a regrettable necessity, while welcoming the economic activity and technological spin-offs that they produce. It’s also worth noting that there are other sectors that attract strong disapproval from many people, including gambling, hydrocarbons, nuclear energy and tobacco, especially where these appear to rely on disinformation campaigns such as climate science denial.
It’s also worth noting that there isn’t always a clear dividing line between those products and technologies that can be used for military purposes and those that cannot. For example, although the dividing line between peaceful nuclear power and nuclear weapons may be framed as a purely technical question, this has major implications for international relations, and technical experts may be subject to significant political pressure.
While there may be disagreements about the acceptability of a given technology, and legitimate suspicion about potential use, these should be capable of being addressed as part of ethical governance. So I don’t think this is a good reason for limiting the scope.
However, a better reason for limiting the scope may be to simplify the task. Given finite time and resources, it may be better to establish effective governance for a limited scope than to take forever producing something that works properly for everything. This leads to the position that although some ethical governance may apply to weapons systems, this doesn’t mean that every ethical governance exercise must address such systems. It may therefore be reasonable to exclude such systems from a specific exercise for a specific time period, provided that this doesn’t rule out the possibility of extending the scope at a later date.
Update. The US Department of Defense has published a high-level set of ethical principles for the military use of AI. Following the difference of opinion outlined above, some people will think it matters how these principles are interpreted and applied in specific cases (since like many similar sets of principles, they are highly generic), while other people will think any such discussion completely misses the point.
David Vergun, Defense Innovation Board Recommends AI Ethical Guidelines (US Dept of Defense, 1 November 2019)