Many good things must be said about the current literature on risk and safety, but here I introduce four provocations. To telegraph ahead, assumptions in this literature must be surfaced and questioned more closely.
- Risk and safety are causally related?
Risk and safety overlap as terms in ordinary language. Some seek to formalize the purported relationships—e.g., increasing safety barriers reduces the risk of component or system failure. In contrast, I come from a field, policy analysis and management, that treats safety and risk as very different. Indeed, one of the founders of my profession, Aaron Wildavsky, made a special point of distinguishing the two in his work.
The reasons are many for not assuming that “reduce risks and you increase safety” or “increase safety and you reduce risks.” For example:
However it is estimated, risk is generally about a specified harm and its likelihood of occurrence. But safety is increasingly recognized, as it was by an international group of aviation regulators, to be about “more than the absence of risk; it requires specific systemic enablers of safety to be maintained at all times to cope with the known risks, [and] to be well prepared to cope with those risks that are not yet known. . . . In this sense, risk analysis and risk mitigation do not actually define safety, and even the best and most modern efforts at risk assessment and risk management cannot deliver safety on their own. Psychologically and politically, risk and safety are also different concepts, and this distinction is important to regulatory agencies and the publics they serve. . . . Risk is about loss while safety is about assurance. These are two different states of mind.” (Danner and Schulman, 2019)
Once again, the differences come with the failure scenarios: risks defined with respect to this set of granularities, as distinct from safety defined with respect to that set.
- Interdisciplinary focus?
It’s de rigueur to call for more interdisciplinary research on risk and safety management in large socio-technical systems, including critical infrastructures.
Yet such calls not only must surmount the standard-normal qualitative v. quantitative, reductionist v. holistic, and positivist v. post-positivist “divides.” They must also address these differences: “risk management versus safety management” denotes not only regulatory, political, and psychological differences (as in the above quote), but also societal, economic, historical, sociological, and cultural ones. And why stop there, when other differences (neurological?) matter case by case?
I’ve never read a call, routine as such calls are, for interdisciplinarity granular enough to say how to do this.
- Control risk? Control safety?
Policy analysis and management insist on Very Big Differences between “control” and “manage,” as in: We have to manage risk or safety because we cannot control them.
In ordinary language, it is common enough to conflate “manage” and “control.” That will not do for policy and management complexity. Control is when the system’s input variance, process variance and output variance are rendered low and stable. Think of the nuclear reactor plant: guns, guards and gates are used to ensure outside inputs are controlled; processes within the nuclear station are highly regulated by government to ensure few or no mistakes are made (operations and procedures that have not been analyzed beforehand are not permissible); and the output of the plant – its electricity – is kept constant, with regulated low variance (nuclear power is often considered “baseload,” on top of which are added other types of electricity generation).
One crux of the Anthropocene is that critical systems having low input variance/low process variance/low output variance are fewer and fewer because of increasing political, economic, social and other unpredictabilities.
It is the case that electricity generation sources—and very important ones—now face high and higher input variability. Think of climate change, citizen and consumer unrest, regulatory failures and other external impacts on the inputs to energy production. These have posed the challenge of managing what can no longer be controlled (if ever it was). In response, operational processes inside a good number of power plants have had to become more varied (reflecting the so-called law of requisite variety), with more options and strategies to process and produce what still must be a low-variance output: namely, electricity at a regulated frequency and voltage.
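The requisite-variety point can be put in toy-simulation form. This is a hedged sketch, not a model of any real plant: the disturbance range, the 60-unit target output, and the two repertoires of corrections are all invented for illustration. A plant with only one fixed response cannot absorb high input variance; a plant with a wider repertoire of responses keeps output variance low against the same disturbances.

```python
import random

random.seed(42)

TARGET = 60.0  # illustrative regulated output level (e.g., a grid frequency)

def output_variance(corrections, n=1000):
    """Each step, a disturbance hits the input; the plant applies whichever
    available correction best offsets it. Returns the output variance."""
    outputs = []
    for _ in range(n):
        disturbance = random.uniform(-10.0, 10.0)          # high input variance
        best = min(corrections, key=lambda c: abs(disturbance + c))
        outputs.append(TARGET + disturbance + best)
    mean = sum(outputs) / n
    return sum((x - mean) ** 2 for x in outputs) / n

rigid = output_variance([0.0])                               # one fixed response
varied = output_variance([c * 0.5 for c in range(-20, 21)])  # 41 response options

# More internal variety absorbs more external variety: output variance drops.
print(rigid, varied)
```

With the single fixed response, the output variance simply tracks that of the disturbances; with the larger repertoire it falls close to zero. That gap is the difference between coping and controlling.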
It’s hard to believe, in other words, that all the talk about, say, artificial intelligence (AI) “controlling” behavior will not need to be far more nuanced and differentiated, when it comes to really-existing policy and management implications.
One example will have to suffice, from underwater oil and gas exploration. Alarms produced by autonomous systems can and do often turn out to be false alarms, occurring under already turbulent task conditions at sea. Indeed, operating at a higher level of autonomy while having to cope with indiscriminate false alarms may no longer permit the real-time operators to revert, just in time, to lower levels of autonomy, e.g., managing via more manual operations, as and when nothing else works.
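The indiscriminate-false-alarm problem has a familiar base-rate structure, sketched below with purely hypothetical numbers (the failure prior, the alarm sensitivity, and the false-positive rate are my assumptions, not figures from the marine-operations literature): when real failures are rare, even a sensitive alarm system produces mostly false alarms.

```python
# All three probabilities are illustrative assumptions, not measured values.
p_failure = 0.001           # prior probability of a real failure per interval
p_alarm_if_failure = 0.99   # sensitivity: alarm fires given a real failure
p_alarm_if_ok = 0.02        # false-positive rate: alarm fires with no failure

# Bayes' rule: probability that a given alarm signals a real failure.
p_alarm = p_alarm_if_failure * p_failure + p_alarm_if_ok * (1 - p_failure)
p_failure_given_alarm = p_alarm_if_failure * p_failure / p_alarm

print(round(p_failure_given_alarm, 3))  # under these assumptions, ~0.047
```

Under these assumed numbers, fewer than one alarm in twenty signals a real failure, which is why operators awash in alerts cannot treat each one as actionable, let alone use each as a cue to step down to more manual operations.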
- Building to macro systems from micro data or micro foundations?
This integrated-systems approach has been called into question across a variety of academic fields, e.g., the repeatedly disappointed efforts to build up macroeconomic models from separate subsystem models or to ground macroeconomics in microeconomics. It’s been said that no economist in his or her right mind would rely on the micro-foundational Dynamic Stochastic General Equilibrium (DSGE) models of the economy developed and tinkered with over decades. (See also the disappointing history of “lifecycle modeling” for threatened species.)
Note, though, that this is not equivalent to the complaint against the reductionism of First, simplify! We all simplify; we don’t all then go about trying to integrate the simplifications, or assume a simplified discussion of system components—start with the worker!, the task!, or that AI software!—is the right place to begin discussing complex systems in actual operations.
I’m not sure that even those detailed analyses revolving around the labor-augmenting rather than labor-substituting nature of AI software recognize that humans are the only “integrated comprehensive model” we will have for some time to come.
Danner, C., and P. Schulman (2019). “Rethinking risk assessment for public utility safety regulation.” Risk Analysis.
Roe, E. (2020). “Control, Manage or Cope? A Politics for Risks, Uncertainties and Unknown-Unknowns.” Chapter 5 in The Politics of Uncertainty: Challenges of Transformation (eds. Ian Scoones and Andy Stirling). Routledge, UK.
Schor, J.B. (2021). “Dependence and heterogeneity in the platform labor force.” A policy brief for the Governing Work in the Digital Age. Hertie School, Berlin.
Storm, S. (2021). “Cordon of Conformity: Why DSGE models are not the future of macroeconomics.” International Journal of Political Economy 50(2): 77-98 (DOI: 10.1080/08911916.2021.1929582).
Utne, I.B., I. Schjølberg, and E. Roe (2019). “High reliability management and control operator risks in autonomous marine systems and operations.” Ocean Engineering.
Earlier blog entries: “Recasting the policy narrative of labor-substituting technological change” and “Lifecycle modeling of species”