I
It shouldn’t be surprising that the more comprehensive a theory of materialist determinism, the easier it is to find the exceptions.
It is understandably more common, then, that the view of bureaucratic and authoritarian governments wielding infrastructural power for their own interests is contrasted with those empirical cases indicating quite otherwise. In these comparisons, the infrastructures are demonstrably weak, in need of great repair and maintenance, perform far less effectively than designed, planned or promised, require massive repurposing in light of the Energy Transition, and have real-time operators who are barely managing or coping, precisely because they don’t have the kind of control that many discussions attribute to “infrastructural power.”
Yes, critical infrastructures—even in their variably existing, heterogeneously performing conditions—still have major bearing on all those material factors taken to be important, ranging from income and wealth inequalities and well-being to national and international versions of growth, prosperity and sustainability. But that impact is more differentiated and case-by-case than over-arching theories of materialist determinism allow. The keywords and terms of that case-by-case view include: unpredictable, inadvertent, unintended, contingent, and subject to many intervening variables.
II
For example, it’s routine to say that governments have allocative, distributive, regulatory and stabilization functions. In actual fact, infrastructures exercise a different and more variegated form of societal regulation by prioritizing systemwide reliability and safety as social values in real time. This matters for any understanding of “infrastructural power.”
Consider the commonplace that regulatory compliance is “the baseline for risk mitigation in infrastructures.” There is no reason to assume that compliance is the same baseline for, inter alios, the infrastructure’s eyes-and-ears staff on the ground; the infrastructure’s headquarters’ compliance staff responsible for monitoring industry practices for meeting government mandates; the senior officials in the infrastructure who see the need for more enterprise risk management; and, last but never least, the infrastructure’s reliability professionals—its real-time operations personnel, should they exist, and immediate support staff—in the middle of all this, especially in their role of surmounting any stickiness arising when official procedures and protocols undermine real-time system reliability and safety.
Stickiness? Noncompliance may be a regulatory error for the regulator of record; the same noncompliance may be an important option for ensuring system reliability when the task environment indicates the said regulation to be defective. Indeed, for real-time operations, noncompliance is not an error if following that regulation jeopardizes infrastructure reliability and safety now or in the next steps ahead. Moreover, the importance of time from discovery to correction of error reinforces a process of dispersed regulatory functions, where one of the regulatory functions of the infrastructure’s real-time operations is to catch and correct error by the regulator of record under conditions of mandated reliability.
III
True, governments rely on infrastructures to meet their own functions and, yes, there is an overlap and dependency between both, as the case of compliance illustrates. Few, however, think to ask, let alone study, how critical infrastructures—many of which are privately owned or managed in the US—independently and differentially affect society-wide risks, social values and societal regulation. It’s all well and good to stress there are other social values than reliability and safety. But it is also useful to remind ourselves that much, if not most, of the world is characterized by unreliable and/or unsafe critical services—notably water and electricity—even where there are infrastructures of sorts providing the services.
So yes of course, operating these infrastructures, reliably or otherwise, creates inequalities and exclusions. But wouldn’t you want to know, before changing them, the likely effects of that change, however well-intended, on systemwide reliability and safety? (The chief lesson of Policy Analysis 101 is: The opposite of good is good intentions.) Even low-cost, more sustainable socio-technical systems will be reliable only up to that unpredictable failure ahead that they can’t prevent or haven’t prevented. They too will have to manage or cope because they too can’t control the Anthropocene when it comes to an infrastructure’s inputs, processes and outputs.
More, you needn’t be clairvoyant to realize that the Energy Transition–whether in its reformist or radical versions–means a host of second chances for critical infrastructures and their mandated provision of reliable services.
With or without Stop-Oil, infrastructures will remain central to energy provision and interconnectivity; with or without Sustainability, reliability and safety will be demanded across that interconnected provision. Technologies and system configurations will change, but even the keywords of radical versions of the Energy Transition—transformative, emancipatory—are redolent with the promise of second chances along the way.
What makes the second chances so important? For one thing, the Climate Emergency portends all manner of illiquidity, not least of which is today’s infrastructures becoming tomorrow’s stranded assets. But “stranded” underscores the place-based character of the infrastructure. Stranded also implies the possibility of its other use(s), second chances in other words. One has to wonder if current Energy Transition scenarios are granular enough to take them seriously.
IV
Why is increased granularity of scenarios important? Critical infrastructures are themselves importantly differentiated. Some have centralized operation rooms or floors; others do not. Even those with an operations center vary considerably with respect to the reliability of their critical services. In particular, they may well be operating to different standards of reliability, from which it follows that they are managing for different risks and uncertainties.
For instance, it is true that nuclear explosions occur, dams are overtopped, and grids do separate and island, but these events are rare–rare because of their real-time management beyond the defects of technology and design–and when these events do happen they serve to reinforce a societal dread that they indeed are must-never-happen events. Real-time system operators seek to preclude must-never-happen events like loss of nuclear containment, cryptosporidium contamination of urban water supplies, or jumbo jets dropping from the sky because of that widespread societal dread.[1] (Which of course can change, and not just because of the Anthropocene).
In contrast, financial services have “should-never-happen events”—bank runs should be avoided and financial crises shouldn’t happen. The standard of operating reliability is not one of precluding financial crises from ever happening, but rather of treating these crises (1) as avoidable though not always, or (2) as inevitable (“busts are part of market capitalism”) or at least (3) compensable after the fact (as in the pre-2008 assurance that it’s better to clean up after a financial bubble bursts than trying to manage it beforehand).
So what? Well for one thing, not having highly reliable financial services based on must-never-happen events has major consequences for standards of economic stability and growth (also variously defined). At the macro level, there are two different standards of economic reliability: The retrospective standard holds the economy is performing reliably when there have been no major shocks or disruptions from then to now. The prospective standard holds the economy is reliable only until the next major shock.
Why does the difference matter? In practical terms, the economy is prospectively only as reliable as its critical infrastructures are reliable, right now when it matters for economic productivity (again, broadly writ). Indeed, if economy and productivity were equated only with recognizing and capitalizing on retrospective patterns and trends, economic policymakers and managers could never be reliable prospectively.
By way of example, a retrospective orientation to where we are today is to examine economic and financial patterns and trends since, say, 2008; a prospective standard would be to ensure that–at a minimum–the 2008 financial recovery could be replicated, if not bettered, for the next global financial crisis. The problem with the latter–do no worse in the financial services sector than what happened in the last (2008) crisis–is that the benchmark would have to reflect a must-never-happen event going forward. What, though, are the chances it would be the first-ever must-never-happen event among all of that sector’s should-never-happen ones?
V
Not only do these reliability standards differ, so too do the risks and uncertainties that follow from managing to the respective standards. The classic case is the one emergency within and across infrastructures that is infrequently discussed: suicide for fear of death.
What else can we do, senior executives and company boards tell themselves, when business is entirely on the line? In this emergency, we have to risk failure in order to succeed!
But what if the business is in a critical service sector? Here, when upper management seeks to implement risk-taking changes, they rely on real-time reliability professionals, who, when they take risks, only do so in order to reduce the chances of failure. To reliability-seeking professionals in critical infrastructures, the risk-taking activities of their upper management look like a form of suicide for fear of death.
This has become an all-too-common phenomenon. When professionals are compelled to reverse practices they know to be reliable, the results are deadly. Famously in the Challenger accident, engineers had been required up to the day of that flight to show why the shuttle could launch; on that day, the decision rule was reversed to one showing why launch couldn’t take place.
Once it was good bank practice to hold capital as a cushion against unexpected losses; capital security arrangements now mandate that banks hold capital against losses expected from their high-risk lending. Mortgage brokers traditionally made money on the performance and quality of the mortgages they made; in the run-up to the 2008 financial crisis, their compensation changed to one based on the volume of loans originated but passed on.
Originally, the Deepwater Horizon rig had been drilling an exploration well; that status changed when, on April 15, 2010, BP applied to the U.S. Minerals Management Service (MMS) to convert the site to a production well. The MMS approved the change. The explosion occurred five days later.
In brief, ample evidence exists that decision rule reversals that required professionals in high-stakes situations to turn inside out the way they managed for reliability have instead led to system failures: NASA was never the same; we are still trying (in 2024!) to get out of the 2008 financial mess and the Great Recession that followed; the MMS disappeared from the face of the earth.
“But, that’s a strawman,” you protest. “Of course, we wouldn’t deliberately push reliability professionals into unstudied conditions in critical support sectors, if we could avoid it.” Really? The oft-recommended approach, Be-Prepared-for-All-Hazards, looks like the counsel of wisdom. It is, however, dangerous if it flips mandates into requiring organizations to cooperate around new or far more variables, using information they will not have or cannot obtain, for all manner of interconnected scenarios which, if treated with equal seriousness, produce considerable modeling and analytic uncertainties.
VI
Just as risk and uncertainty differ in critical infrastructures (probabilities and consequences of failure are variously known or not), so too reliability and safety are not one and the same. Just because you reduce risk doesn’t mean you thereby improve safety. It is true that risk and safety overlap as terms in ordinary language. Some seek to formalize the purported relationships—e.g., increasing safety barriers reduces risk of component or system failure.
In contrast, I come from a field, policy analysis and management, that treats safety and risk as very different. Indeed, one of the founders of my profession (Aaron Wildavsky) made a special point of distinguishing the two. The reasons are many for not assuming that “reduce risks and you increase safety” or “increase safety and you reduce risks.” In particular:
However it is estimated, risk is generally about a specified harm and its likelihood of occurrence. But safety is increasingly recognized, as it was by an international group of aviation regulators, to be about “more than the absence of risk; it requires specific systemic enablers of safety to be maintained at all times to cope with the known risks, [and] to be well prepared to cope with those risks that are not yet known.” . . . In this sense, risk analysis and risk mitigation do not actually define safety, and even the best and most modern efforts at risk assessment and risk management cannot deliver safety on their own. Psychologically and politically, risk and safety are also different concepts, and this distinction is important to regulatory agencies and the publics they serve. . . . Risk is about loss while safety is about assurance. These are two different states of mind.
C. Danner and P. Schulman (2019). Rethinking risk assessment for public utility safety regulation. Risk Analysis 39(5), 1044–1059.
Once again, the differences come with the failure scenarios—risks with respect to this failure scenario’s set of granularities as distinct from safety with respect to a different set of granularities or even a different failure scenario altogether.
VII
That failure scenarios do differ is nowhere better demonstrated than in the fact that there are different fields of infrastructure studies. For a world where bureaucratic and authoritarian states exert infrastructural power to further their own interests—well, that is the failure of concern. But there are other schools of infrastructure studies. Here I focus on what a socio-cultural perspective has to say about infrastructure repair that a socio-technical perspective might wish to pursue further. Since my work is from the socio-technical perspective, it’s only fair that I not try to summarize positions from a socio-cultural perspective but quote from their work directly:
For all of their impressive heaviness, infrastructures are, at the end of the day, often remarkably light and fragile creatures—one or two missed inspections, suspect data points, or broken connectors away from disaster. That spectacular failure is not continually engulfing the systems around us is a function of repair: the ongoing work by which “order and meaning in complex sociotechnical systems are maintained and transformed, human value is preserved and extended, and the complicated work of fitting to the varied circumstances of organizations, systems, and lives is accomplished” . . . .
It reminds us of the extent to which infrastructures are earned and re-earned on an ongoing, often daily, basis. It also reminds us (modernist obsessions notwithstanding) that staying power, and not just change, demands explanation. Even if we ignore this fact and the work that it indexes when we talk about infrastructure, the work nonetheless goes on. Where it does not, the ineluctable pull of decay and decline sets in and infrastructures enter the long or short spiral into entropy that—if untended—is their natural fate.
S. Jackson (2015) Repair. Theorizing the contemporary: The infrastructure toolbox. Cultural Anthropology website. Available at: https://culanth.org/fieldsights/repair (accessed 24 September 2015)
The nod to “sociotechnical systems” is welcome as is the recognition that these systems have to be managed–a great part of which is repair and maintenance–in order to operate. Added to routine and non-routine maintenance and repair are the just-in-time or just-for-now workarounds (software and hardware) that are necessitated by inevitable technology, design and regulatory glitches–inevitable because comprehensiveness in analysis and operations is impossible to achieve in complex large-scale systems.
For its part, socio-technical research on infrastructures calls into question any assumption that macro-designs control every important micro-operation, an assumption also very much questioned in this socio-cultural perspective, e.g., “approaching infrastructure from the standpoint of repair highlights actors, sites, and moments that have been absented or silenced by stories of design and origination, whether critical or heroic.” Here the test of efficacy isn’t “Have we designed a system that can be controlled?” but rather “Is this a system we can manage to redesign as needed?”
Also from the socio-technical perspective, the “end of infrastructure operations” isn’t the decay, decline or entropy stressed by the socio-cultural perspective as much as it is system failure and immediate emergency response, including seeking to restore, as quickly as possible, even if temporarily, water, electricity and telecoms to survivors. What to my knowledge has not been pursued in the socio-technical literature is the following from a socio-cultural focus on repair:
Attending to repair can also change how we approach questions of value and valuation as it pertains to the infrastructures around us. Repair reminds us that the loop between infrastructure, value, and meaning is never fully closed at points of design, but represents an ongoing and sometimes fragile accomplishment. While artifacts surely have politics (or can), those politics are rarely frozen at the moment of design, instead unfolding across the lifespan of the infrastructure in question: completed, tweaked, and sometimes transformed through repair. Thus, if there are values in design there are also values in repair—and good ethical and political reasons to attend not only to the birth of infrastructures, but also to their care and feeding over time.
That the values expressed through repair (we would say, expressed as the practices of actual repair) need to be understood as thoroughly as actual design reflects, I believe, a major research gap in the socio-technical literature with which I am familiar (the latter being much more concerned with the gap between designs-to-control and practices-for-managing/coping). Finally, I cannot over-stress the importance of infrastructure fragility, contrary to any sturdy-monolith imaginary of infrastructural power one might have gotten from elsewhere.
[1] Not only is societal dread important, but so is operator distrust. One reason infrastructure operators manage reliably is that they actively distrust that the future will be stable or predictable in the absence of the system’s vigilant real-time management. We of course must wonder at the perversity of this. But that is the function of this dread and distrust. Namely: to push all of us to probe further what it means to privilege reliability and safety over other societal values. We are meant to ask: What would it look like in a world where such reliability and safety are not so privileged? For the answer to that question is again obvious: Most of the planet already lives in that world of unreliability and little safety. We’re meant to ask, precisely because that answer is that clear.