In 2001-2, Julia Black published some papers discussing the concept of Decentred Regulation, with particular relevance to the challenges of globalization. In this post, I shall summarize her position as I understand it, and apply it to the topic of responsible technology.
Black identifies a number of potential failures in regulation, which are commonly attributed to command and control (CAC) regulation – regulation by the state through the use of legal rules backed by (often criminal) sanctions.
- instrument failure – the instruments used (laws backed by sanctions) are inappropriate and unsophisticated
- information and knowledge failure – governments or other authorities have insufficient knowledge to be able to identify the causes of problems, to design solutions that are appropriate, and to identify non-compliance
- implementation failure – implementation of the regulation is inadequate
- motivation failure and capture theory – those being regulated are insufficiently inclined to comply, and those doing the regulating are insufficiently motivated to regulate in the public interest
For Black, decentred regulation represents an alternative to CAC regulation, based on five key challenges. These challenges echo the ideas of Michel Foucault around governmentality, which Isabell Lorey (2015, p23) defines as “the structural entanglement between the government of a state and the techniques of self-government in modern Western societies”.
- complexity – emphasising both causal complexity and the complexity of interactions between actors in society (or systems), which are imperfectly understood and change over time
- fragmentation – of knowledge, and of power and control. This is not just a question of information asymmetry; no single actor has sufficient knowledge, or sufficient control of the instruments of regulation.
- interdependencies – including the co-production of problems and solutions by multiple actors across multiple jurisdictions (and amplified by globalization)
- ungovernability – Black explains this in terms of autopoiesis, the self-regulation, self-production and self-organisation of systems. As a consequence of these (non-linear) system properties, it may be difficult or impossible to control things directly
- the rejection of a clear distinction between public and private – leading to rethinking the role of formal authority in governance and regulation
In response to these challenges, Black describes a form of regulation with the following characteristics:
- hybrid – combining governmental and non-governmental actors
- multifaceted – using a number of different strategies simultaneously or sequentially
- indirect – this appears to link to what (following Teubner) she calls reflexive regulation – for example, setting the decision-making procedures within organizations in such a way that the goals of public policy are achieved
And she asks if it counts as regulation at all, if we strip away much of what people commonly associate with regulation, and if it lacks some key characteristics, such as intentionality or effectiveness. Does regulation have to be what she calls “cybernetic”, which she defines in terms of three functions: standard-setting, information gathering and behaviour modification? (Other definitions of “cybernetic” are available, such as Stafford Beer’s Viable Systems Model.)
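Black's "cybernetic" definition can be pictured as a simple feedback loop. The sketch below is my own illustration, not anything from Black's papers: the names, the numeric target and the `gain` parameter are all assumptions, and real social systems (as the "ungovernability" challenge insists) do not respond this tamely.

```python
# Illustrative sketch of the "cybernetic" view of regulation as a
# feedback loop with three functions: standard-setting, information
# gathering and behaviour modification. All names and numbers are
# assumptions for illustration only.

def regulate(behaviour: float, standard: float, gain: float = 0.5) -> float:
    """One pass of the loop.

    standard-setting:       the target value `standard` is given
    information gathering:  observe the gap between behaviour and standard
    behaviour modification: nudge behaviour part-way towards the standard
    """
    gap = standard - behaviour      # information gathering
    return behaviour + gain * gap   # behaviour modification

# Repeated application converges on the standard -- direct, linear
# control of exactly the kind that complex, autopoietic systems resist.
behaviour = 0.0
for _ in range(10):
    behaviour = regulate(behaviour, standard=1.0)
```

The point of the toy is the contrast: where all three functions sit with one actor and behaviour responds linearly, regulation is easy; the decentring challenges describe a world where none of those conditions hold.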
Meanwhile, how does any of this apply to responsible technology? Apart from the slogan, what I’m about to say would be true of any large technology company, but I’m going to talk about Google, for no other reason than its former use of the slogan “Don’t Be Evil”. (This is sometimes quoted as “Do No Evil”, but for now I shall ignore the difference between being evil and doing evil.) What holds Google to this slogan is not primarily government regulation (mainly US and EU) but an interconnected set of other forces, including investors, customers (much of its revenue coming from advertising), public opinion and its own workforce. Clearly these stakeholders don’t all have the same view on what counts as Evil, or what would be an appropriate response to any specific ethical concern.
If we regard each of these stakeholder domains as a large-scale system, each displaying complex and sometimes apparently purposive behaviour, then the combination of all of them can be described as a system of systems. Mark Maier distinguished between three types of System of Systems (SoS), which he called Directed, Collaborative and Virtual; Philip Boxer identifies a fourth type, which he calls Acknowledged.
- Directed – under the control of a single authority
- Acknowledged – some aspects of regulation are delegated to semi-autonomous authorities, within a centrally planned regime
- Collaborative – under the control of multiple autonomous authorities, collaborating voluntarily to achieve an agreed purpose
- Virtual – multiple authorities with no common purpose
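The four types above differ on a small number of questions, which can be laid out as a simple decision procedure. The three yes/no questions below are my own simplification of the Maier/Boxer taxonomy, not a canonical test from either author:

```python
# Illustrative classifier for the four SoS types. The discriminating
# questions (single controlling authority? central planning? shared
# purpose?) are an assumed simplification of the taxonomy above.

def sos_type(single_authority: bool, central_plan: bool, common_purpose: bool) -> str:
    if single_authority:
        return "Directed"        # one authority controls the whole
    if central_plan:
        return "Acknowledged"    # semi-autonomous parts within a central regime
    if common_purpose:
        return "Collaborative"   # autonomous parts, voluntary shared aim
    return "Virtual"             # autonomous parts, no common purpose
```

On this reading, the tech-ethics landscape discussed below – many autonomous actors, no central plan, but a broadly shared direction – would come out as `sos_type(False, False, True)`, i.e. Collaborative.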
Black’s notion of “hybrid” clearly moves regulation away from the Directed type towards one of the other types of SoS. But which one? Where technology companies are required to interpret and enforce some rules, under the oversight of a government regulator, this might belong to the Acknowledged type. For example, social media platforms being required to enforce some rules about copyright and intellectual property, or content providers being required to limit access to those users who can prove they are over 18. (Small organizations sometimes complain that this kind of regime tends to favour larger organizations, which can more easily absorb the cost of building and implementing the necessary mechanisms.)
However, one consequence of globalization is that there is no single regulatory authority. In Data Protection, for example, the tech giants are faced with different regulations in different jurisdictions, and can choose whether to adopt a single approach worldwide, or to apply the stricter rules only where necessary. (So for example, Microsoft has announced it will apply GDPR rules worldwide, while other technology companies have apparently migrated personal data of non-EU citizens from Ireland to the US in order to avoid the need to apply GDPR rules to these data subjects.)
But although the detailed rules on privacy and other ethical issues vary significantly between countries and jurisdictions, there is a reasonably broad acceptance of the principle that some privacy is probably a Good Thing. Similarly, although dozens of organizations have published rival sets of ethical principles for AI or robotics or whatever, there appears to be a fair amount of common purpose between them, indicating that all these organizations are travelling (or pretending to travel) in more or less the same direction. Therefore it seems reasonable to regard this as the Collaborative type.
Decentred regulation raises important questions of agency and purpose. And if it is to maintain relevance and effectiveness in a rapidly changing technological world, there needs to be some kind of emergent / collective intelligence conferring the ability to solve not only downstream problems (making judgements on particular cases) but also upstream problems (evolving governance principles and practices).
Julia Black, Decentring Regulation: Understanding the Role of Regulation and Self-Regulation in a ‘Post-Regulatory’ World (Current Legal Problems, Volume 54, Issue 1, 2001) pp 103–146
Julia Black, Decentred Regulation (LSE Centre for Analysis of Risk and Regulation, 2002)
Martin Innes, Bethan Davies and Morag McDermont, How Co-Production Regulates (Social and Legal Studies, 2008)
Mark W. Maier, Architecting Principles for Systems-of-Systems (Systems Engineering, Vol 1 No 4, 1998)
Isabell Lorey, State of Insecurity (Verso 2015)
Gunther Teubner, Substantive and Reflexive Elements in Modern Law (Law and Society Review, Vol. 17, 1983) pp 239–285
Wikipedia: Don’t Be Evil