Unstudied conditions are avoided as vigilantly as possible—right now, when it matters—by control room operators of large critical infrastructures mandated to operate reliably and safely systemwide. Having failed to fail because an operator was behaving ignorantly is orthogonal to high reliability management.
That said, ignorance has differentiated functions in large socio-technical systems—but in ways not captured by the happy-talk of trial-and-error learning, Experiment!, and innovation-starts-with-ignorance. Five under-recognized positives deserve highlighting:
(1) A longstanding proposition in organization theory and management has been that operators and managers cannot know everything; something like bounded rationality is required in order to decide and manage at all. More, a mandate for comprehensive decisionmaking would undermine reliability management at the complex system level, not enhance it. It is in these senses that the operations of other infrastructures with which a control room is interconnected are “unstudied conditions” for that control room. Either those connected services are there or, if not, their absence has to be worked around by that control room. Real-time management by a control room is so knowledge-intensive that its operators cannot be expected to understand just as intensively how the other interconnected infrastructures and their control centers operate.
(2) The comfort zone of control room operators includes managing nonmeasured or unmeasurable uncertainties so as to stay out of unstudied conditions—unknown unknowns—about which system operators are by definition ignorant. These uncertainties are not denominated as calculable risk; even so, operators may know more about consequences than likelihoods, or vice versa. Operators undertake uncertainty management because they differentiate among uncertainties—even as outside experts often collapse those uncertainties into ignorance per se.
(3) Large system control operators do innovate, and positively so, within their comfort zone. We see their improvisation in control room assembly of options just-in-time under conditions of high volatility (high unpredictability or uncontrollability in the outside environment). In fact, the evolutionary advantage of control rooms lies in the skills and expertise of their operators to operationally redesign in real time what is otherwise inadequate technology-push or demand-pull, so as to meet the reliability mandates of the infrastructure.
There is a kind of learning-through-error-management going on, but the learners do so by avoiding having to test the limits of system survival.
What control operators of critical infrastructures do not do—or resist doing—is classic trial-and-error learning and experimentation. Why? Because professionals will not deliberately chance the first error becoming the last trial (trial-and-error as probe-and-explode). Certainly the view—“It’s almost impossible to innovate if you’re not prepared to fail”—is orthogonal to the innovation-positive we observed in critical infrastructures.
It’s important to be very clear on this point: While an innovation introduced by consultants or upper management may inadvertently throw operators into unstudied conditions—decreasing their options, increasing their task volatility, or reducing their maneuverability—this does not mean the innovation must, a priori, always do so later on under changed conditions. It depends on what, if anything, the operators are learning in real time for the next steps ahead; learning (e.g., from operational redesigns in real time) can change the conditions for introducing an innovation, though again this is not guaranteed, nor is it a free pass for future innovation.
The wider point here is about the unique contribution of high reliability management to innovation: The former provides the ground and context for determining if and how innovation turns into innovation-positive or innovation-negative without being pessimistically dystopian with respect to technology or overly utopian with respect to economic growth that depends on the infrastructures concerned.
(Note the “overly”. Utopians are, in my view, too often criticized for what is by definition their optimism. Utopians, after all, are the ones more likely to ask: What kind of society designs its critical infrastructures in such a way that not managing them reliably is more costly than having to manage them reliably, all the time 24/7/365? How long can our reliability professionals provide just far enough and just soon enough what is a never-fully-ready just enough? No wonder utopians without answers want something better!)
(4) That said, some unknown-unknowns may be key to something like an infrastructure’s immune system for managing under risk and uncertainty. The complex and interconnected nature of large socio-technical systems suggests that “low-level” accidents, lapses or even sabotage may be underway that systemwide reliability professionals—like control room operators and their support staff—do not (cannot) observe, know about, or otherwise appreciate. This is less “ignorance is bliss” than ignorance as mithridatic (immunizing through difficulty and inexperience rather than, say, homeopathically).
(5) Last but not least: When unstudied conditions and unknown-unknowns are feared because of the awful consequences associated with behaving ignorantly, the ensuing dread promotes managing dangerous complex technologies more reliably and safely than theories of tight coupling and complex interactivity suggest. (Think of when the Germans feared inflation more than cancer.) Wide societal dread of systemwide failure takes on a positive function in these cases, without which the real-time management of dangerous technologies would not be warranted, let alone warrantable.
(It’s at this point that someone complains I’m advocating “the manufacture of dread for the purposes of social control through taken-for-granted technologies.” Which is oddly unreflexive on their part if they really believe what they say, since the very infrastructures they criticize enable them to render such judgment, here and now, and since their criticisms are presumably then a form of artificial negativity manufactured for the same social control.)
The upshot of the five features is this. There are cases where experimentation and innovation are recast in the face of unstudied conditions. The resulting differences, however, are many and vary substantially from what outsiders typically narrow down to Experiment! Adapt! Be resilient! Indeed, when you think about any valorized list of “key strategies important in the face of ignorance,” you realize just how conservative many outsider imaginaries are: If such lists are said to capture almost everything really important, then maybe nothing’s all that important after all.