- Risk and safety are causally connected?
Risk and safety overlap as terms in ordinary language. Some seek to formalize the relationship, e.g., adding safety barriers reduces the risk of component or system failure. In contrast, I come from a field, policy analysis and management, that treats safety and risk as very different. Indeed, one of the founders of my profession, Aaron Wildavsky, made a special point of distinguishing the two.
There are many reasons not to assume that “reduce risks and you increase safety,” or vice versa:
However it is estimated, risk is generally about a specified harm and its likelihood of occurrence. But safety is increasingly recognized, as it was by an international group of aviation regulators, to be about “more than the absence of risk; it requires specific systemic enablers of safety to be maintained at all times to cope with the known risks, [and] to be well prepared to cope with those risks that are not yet known.” . . . In this sense, risk analysis and risk mitigation do not actually define safety, and even the best and most modern efforts at risk assessment and risk management cannot deliver safety on their own. Psychologically and politically, risk and safety are also different concepts, and this distinction is important to regulatory agencies and the publics they serve. . . . Risk is about loss while safety is about assurance. These are two different states of mind.
(Danner and Schulman, 2019)
For me, the differences come with the failure scenarios: risks specified with respect to one set of granularities, as distinct from safety specified with respect to another set.
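To make the contrast concrete, here is a minimal sketch, with invented numbers and enabler names, of how the standard risk calculus (likelihood of a specified harm times its magnitude) can look acceptable even while a safety appraisal in the regulators' sense fails:

```python
# Toy contrast between the standard risk calculus and a safety appraisal.
# All numbers and enabler names are invented for illustration.

def risk_score(likelihood: float, harm: float) -> float:
    """Classic expected-loss view: likelihood of a specified harm x its magnitude."""
    return likelihood * harm

def is_safe(enablers: dict) -> bool:
    """Safety, per the aviation-regulator quote: not low risk, but specific
    systemic enablers maintained at all times, including preparedness for
    risks not yet known."""
    return all(enablers.values())

scenario_risk = risk_score(likelihood=0.001, harm=1_000_000)  # a low expected loss...
enablers = {
    "trained operators on shift": True,
    "redundant barriers maintained": True,
    "preparedness for unknown risks": False,  # ...yet an enabler is absent
}
print(scenario_risk)      # 1000.0 -- the risk number can look acceptable
print(is_safe(enablers))  # False  -- while the system is not "safe"
```

The point of the toy example is only that the two appraisals answer different questions: a low risk score neither implies nor is implied by maintained safety enablers.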
- Interdisciplinary focus?
It’s de rigueur to call for more interdisciplinary research on risk and safety management in large critical infrastructures.
Yet such calls must surmount not only the familiar qualitative v. quantitative, reductionist v. holistic, and positivist v. post-positivist methodological divides. They must also address regulatory, political, and psychological differences (as in the quote above), along with societal, economic, historical, sociological, and cultural ones. And why stop there, case by case?
Routine as such calls are, I’ve never read one for an interdisciplinarity granular enough to say how the preceding would be answered.
- Control risk? Control safety?
In ordinary language, it is common enough to conflate “manage” and “control.” That will not do for policy and management complexity.
Control is when the system’s input variance, process variance and output variance are rendered low and stable. Think of a nuclear power plant. Guns, guards and gates are used to ensure outside inputs are controlled; processes within the nuclear station are highly regulated by government to ensure few or no mistakes are made (operations and procedures that have not been analyzed beforehand are not permissible); and the output of the plant – its electricity – is kept constant, with regulated low variance (nuclear power is often considered “baseload,” on top of which other types of electricity generation are added).
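To fix ideas, here is a minimal sketch of that trichotomy in code. The variance cutoff and the labels are my own illustration; the control/manage/cope trichotomy itself echoes the Roe (2020) chapter cited below:

```python
# Sketch of the control/manage/cope distinction in terms of variance.
# The threshold and labels are illustrative, not formal definitions.

LOW = 0.1  # arbitrary cutoff for "low and stable" variance

def system_state(input_var: float, process_var: float, output_var: float) -> str:
    if max(input_var, process_var, output_var) <= LOW:
        return "controlled"  # e.g., the tightly regulated nuclear plant
    if output_var <= LOW:
        return "managed"     # variance absorbed upstream to keep output stable
    return "coping"          # variance shows up in the output itself

print(system_state(0.05, 0.05, 0.05))  # controlled
print(system_state(0.8, 0.6, 0.05))    # managed
print(system_state(0.8, 0.6, 0.5))     # coping
```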
One defining feature of the Anthropocene is that critical systems with low input variance/low process variance/low output variance are fewer and fewer because of increasing political, economic, social and other unpredictabilities.
For example, electricity generation sources, and very important ones, now face high and rising input variability. Think of climate change, citizen and consumer unrest, regulatory failures and other external impacts on the inputs to energy production. These have posed the challenge of managing what can no longer be controlled (if it was ever controllable).
In response, operational processes inside a good number of power plants have had to become more varied (reflecting the so-called law of requisite variety), with more options and strategies to process and produce what must still be a low-variance output: namely, electricity at a regulated frequency and voltage.
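A back-of-the-envelope simulation (all numbers invented) illustrates the requisite-variety point: when input disturbances take more distinct forms than the operators have compensating strategies, the surplus variety shows up in the output; add strategies and output variance falls.

```python
import random

random.seed(42)

def output_variance(n_disturbances: int, n_strategies: int, trials: int = 10_000) -> float:
    """Toy requisite-variety model: each strategy fully compensates one
    disturbance class; uncompensated disturbances pass through to the output."""
    outputs = []
    for _ in range(trials):
        d = random.randrange(n_disturbances)  # which disturbance hits the plant
        compensated = d < n_strategies        # do operators have a matching response?
        outputs.append(0.0 if compensated else random.gauss(0, 1))
    mean = sum(outputs) / trials
    return sum((x - mean) ** 2 for x in outputs) / trials

# Same turbulent inputs (8 disturbance classes), growing repertoire of strategies:
for k in (2, 4, 8):
    print(k, round(output_variance(8, k), 3))
# Output variance falls toward zero only as the variety of strategies
# approaches the variety of the disturbances (Ashby's law of requisite variety).
```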
So what?
When it comes to underwater petroleum exploration and production, by way of another example, alarms produced by autonomous systems can and often do turn out to be false alarms occurring under already turbulent task conditions at sea. Indeed, operating at a higher level of autonomy while having to cope with indiscriminate false alarms may no longer permit the real-time operators to revert just in time to lower levels of autonomy, e.g., managing via more manual operations, as and when nothing else works. Changes in safety have risk implications, but not necessarily symmetrically the other way round.
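The asymmetry can be put in rough numbers. The sketch below is mine, not the model in Utne et al. (2019) cited below, and the rates and handling times are invented: once vetting alarms consumes the operators' real-time attention budget, the fallback to manual operation stops being available just in time.

```python
# Toy illustration of why indiscriminate false alarms can foreclose the
# fallback to manual operation. Rates and times are invented for the example.

HANDLING_MIN_PER_ALARM = 12   # operator minutes to vet one alarm and, if real,
                              # step the system down to manual control
OPERATOR_MIN_PER_HOUR = 60    # one real-time operator's attention budget

def fallback_available(alarms_per_hour: float) -> bool:
    """Reverting 'just in time' assumes spare attention; a high enough
    alarm rate (true or false) consumes it."""
    return alarms_per_hour * HANDLING_MIN_PER_ALARM <= OPERATOR_MIN_PER_HOUR

print(fallback_available(2))  # True  -- occasional alarms: manual reversion possible
print(fallback_available(9))  # False -- alarm flood: no real-time path back to manual
```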
- Building to macro systems from micro data or micro foundations?
Discussions of macro-safety and macro-risk have long been rooted in appeals to micro-foundations for both. Yet such systems approaches have been called into question across a variety of academic fields.
Consider the repeatedly disappointing efforts in building up macroeconomic models from separate subsystem models or in grounding macroeconomics in microeconomics. It’s been said that no economist in his or her right mind would ever rely on the microfoundational Dynamic Stochastic General Equilibrium (DSGE) models of the economy developed and tinkered with over decades. (See also the disappointing history of “lifecycle modeling” for endangered and at-risk species.)
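One reason the micro-to-macro build-up keeps disappointing can be shown in a few lines. The following is a schematic point about aggregation, not a claim about any particular DSGE model: when micro behavior is nonlinear and agents are heterogeneous, the behavior of the "representative" average agent is not the average behavior of the agents.

```python
import statistics

# Schematic aggregation problem, not a model of any actual economy:
# heterogeneous agents with a nonlinear (concave) response to income.

def consumption(income: float) -> float:
    return income ** 0.5  # toy concave micro behavior

incomes = [1.0, 4.0, 9.0, 100.0]  # heterogeneous agents

macro_true = statistics.mean(consumption(y) for y in incomes)
macro_representative = consumption(statistics.mean(incomes))

print(round(macro_true, 2))            # 4.0  -- actual aggregate behavior
print(round(macro_representative, 2))  # 5.34 -- the 'representative agent' prediction
```

Jensen's inequality guarantees the gap for any strictly concave micro response; heterogeneity does the rest.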
But is there an integrating mechanism at work between micro and macro? I’m not sure that even those detailed analyses revolving around the labor-augmenting rather than labor-substituting nature of AI software recognize that humans are the only “integrated comprehensive model” we have for some time to come, especially when it comes to both the safety management and the risk management of such systems.
- Are risk and safety even distinguished with sufficient granularity?
More than a year ago a joint statement was issued by the Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Famously, it was signed by more than 350 AI experts and public figures.
Now, of course, we cannot dismiss the actual and potential harms of new and morphing artificial intelligence.
But, just as clearly, these 350-plus signatories must be among the last people on Earth you’d turn to for pandemic and nuclear-war scenarios granular enough with respect to risks, safety and their differences to serve as the benchmark against which to appraise their AI crisis scenarios.
Sources.
Danner, C., and P. Schulman (2019). “Rethinking risk assessment for public utility safety regulation.” Risk Analysis 39(5): 1044-1059.
Roe, E. (2020). “Control, Manage or Cope? A Politics for Risks, Uncertainties and Unknown-Unknowns.” Chapter 5 in The Politics of Uncertainty: Challenges of Transformation (eds. Ian Scoones and Andy Stirling). Routledge, UK.
Schor, J.B. (2021). “Dependence and heterogeneity in the platform labor force.” A policy brief for the Governing Work in the Digital Age. Hertie School, Berlin.
Storm, S. (2021). “Cordon of Conformity: Why DSGE models are not the future of macroeconomics.” International Journal of Political Economy 50(2): 77-98. DOI: 10.1080/08911916.2021.1929582.
Utne, I.B., I. Schjølberg, and E. Roe (2019). “High reliability management and control operator risks in autonomous marine systems and operations.” Ocean Engineering 171(1): 399-416.