Infrastructure and reliability: 4 examples where conventional wisdom falls short

1. Second chances in the energy transition

2. The positive functions of social dread, blind spots and organizational setbacks for infrastructures and their reliability

3. Retrospective versus prospective standards of reliability in the financial sector

4. “Once nuclear power plants have been in operation long enough, we’ll see more major accidents more of the time”: Yes or No?


Second chances in the energy transition

Attempting to enact political and economic transformation without infrastructure support – without a way of pumping water, growing food, or delivering healthcare – is like doing origami with smoke. No matter how ambitious your scheme, how virtuosic your technique, the folds vanish as soon as you make them. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8414030/

You needn’t be clairvoyant to realize that the energy transition–whether in its reform or radical versions–means a host of second chances for critical infrastructures and their provision of reliable services.

With or without Stop-Oil, infrastructures will remain central to energy provision and interconnectivity; with or without Sustainability, reliability will be demanded across that interconnected provision. Yes, of course, technologies and system configurations will change, but even the keywords of radical versions–transformative, emancipatory–are redolent with the promise of second chances along the way. So too for the concepts and practices of infrastructure and reliability.

So what? For one thing, the Climate Emergency portends all manner of illiquidity, not least the prospect of today’s critical infrastructures becoming tomorrow’s stranded assets. But stranded underscores the place-based character of the assets. Stranded also implies the possibility of other uses for the infrastructure. Stranded, in other words, means taking the places for second chances very seriously. Will the energy transition(s) be granular enough to do so?


The positive functions of social dread, blind spots and organizational setbacks for infrastructures and their reliability

Proposition: Under conditions of social complexity (more elements, more interconnections, more differentiated functions), what is negative in effect can also be positive when the mandates are for high reliability in service provision.

Social dread

Every day, nuclear plant disasters, airplane crashes, water-supply collapses and more are avoided, events that would have happened had operators and managers in these large systems not prevented them.

Why? Societal dread is so intense that these events must be precluded from happening on an active, continuous basis. (It might be better to say that we don’t altogether know the degree of “societal dread” unless we observe how knowledgeable professionals operate and manage hazardous critical infrastructures.)

There is such fear of what would happen if large interconnected electricity, telecommunications, water, transportation, financial services and like systems did fail that it is better to manage them than not to have them. Here, ironically, distrust is as core as trust. One reason infrastructure operators manage reliably is that they actively distrust that the future will be stable or predictable in the absence of the system’s vigilant real-time management.

We of course must wonder at the perversity of this. But that is the function of this dread and distrust, namely: to push all of us to probe further what it means to privilege social and individual reliability and safety over other values. We are meant to ask: What would it look like in a world where such reliability and safety are not so privileged?

For the answer to that question is obvious: most of the planet already lives in that world of unreliability and little safety. We are meant to ask precisely because the answer is that clear.

Blind Spots

Another way to describe hazardous sociotechnical systems is that they have significant blind spots when it comes to their management, some visible, others not.

For example, my state’s department of motor vehicles handbook states:

Blind Spots
Every vehicle has blind spots. These are areas around the vehicle that a driver cannot see when looking straight ahead or using the mirrors. For most vehicles, the blind spots are at the sides, slightly behind the driver. To check your blind spots, look over your right and left shoulders out of your side windows. Only turn your head when you look. Do not turn your whole body or steering wheel.
https://www.dmv.ca.gov/portal/handbook/california-driver-handbook/

“Driving a vehicle” is, in other words, managing the way drivers do in large part because of the blind spots the vehicle poses for its driver.

The broader point is that blind spots in sociotechnical systems represent a mix of both weaknesses and strengths for their managers. You get all the advantages and disadvantages of driving a car in comparison to driving a tractor-trailer or a horse-and-buggy instead. Or, to make the same point from the opposite direction, you get all the advantages and disadvantages of managing micro-grids in comparison to having to manage the current electric transmission and distribution grids instead.

Organizational Setbacks

Setbacks—unanticipated, unwanted, and often sudden interruptions and checks on moving forward—are fairly common and typically treated as negative in complex systems and organizations.

Less discussed are the conditions under which such setbacks are positive. Arguably the best known is when a complex organization transitions from one stage of its life cycle to another by overcoming obstacles characteristic of the stage in which it finds itself. An example is CAISO (the California Independent System Operator, which manages the state’s main electric transmission grid) moving from its startup phase in the late 1990s to its full ongoing operations today in 2024.

Other positive setbacks serve as a test bed for developing better practices, whatever stage the organization finds itself in. Some setbacks are better thought of as design probes for whether that organization is on the “right track,” or if not, what track it could or should be on. In yet other circumstances, setbacks serve to point managers in the direction of things about which they had been unaware but which matter.

For example, among all its negative features, did the 2008 financial crisis also serve as a timely interruption to remind us how central regulators are to the continuity of complex financial and credit systems? Did the crisis end up as a much-needed probe of how well the financial and credit institutions are keeping their sectors on track and under mandate? Was the 2008 crisis a test bed for more anticipatory strategies in credit lending and investing? Did the crisis in effect serve as an obstacle, whose surmounting has been necessary to promote the operational redesign of the financial and credit sectors in more reliable ways?

Note that the obviously mixed answers to these questions mean such setbacks cannot be considered a priori negative.

Upshot: Complexity is as positive as it gets in critical infrastructures mandated to be highly reliable.


Retrospective versus prospective standards of reliability in the financial sector

I

Nuclear explosions occur, dams are overtopped, and grids do separate and island, but these events are rare–rare because of their management beyond technology and design–and when they do happen they serve to reinforce their must-never-happen dread.

In contrast, financial services have “should-never-happen events”: bank runs should be avoided and financial crises shouldn’t happen. The standard of operating reliability is not one of precluding financial crises from ever happening, but rather of treating these crises (1) as avoidable, though not always avoided; (2) as inevitable (“busts are part of market capitalism”); or at least (3) as compensable after the fact (as in the pre-2008 assurance that it is better to clean up after a financial bubble bursts than to try to manage it beforehand).

That the reliability of financial services is not based on must-never-happen events has major consequences for standards of economic stability and growth.

II

At the macro level, there are two different standards of economic reliability: The retrospective standard holds that the economy is performing reliably when there have been no major shocks or disruptions between some point in the past and now. The prospective standard holds that the economy is reliable only until the next major shock or downturn.

Why does the difference matter? In practical terms, the economy is prospectively only as reliable as its critical infrastructures are, right now, when it matters for economic productivity. Indeed, if the economy and its productivity were equated only with recognizing and capitalizing on retrospective patterns and trends, economic policymakers and managers could never be reliable prospectively.

III

For example, a retrospective orientation to where we are today is to examine economic and financial patterns and trends since, say, 2008; a prospective standard would be to ensure that–at a minimum–the 2008 financial recovery could be replicated, if not bettered, for the next global financial crisis.

The problem with the latter–do no worse in the financial services sector than what happened in the last (2008) crisis–is that the benchmark would have to reflect a must-never-happen event for the sector going forward.

What, though, are the chances it would be the first-ever must-never-happen event among all of that sector’s should-never-happen ones?


“Once nuclear power plants have been in operation long enough, we’ll see more major accidents more of the time”: Yes or No?

By way of answering, yes or no, let’s start with an anecdote:

There is an apocryphal story about Frederick Mosteller, a famous professor of statistics at Harvard University. Sometime in the 1950s, a student of Mosteller’s was unconvinced that a six-sided die had a precise 1/6 chance of landing on any of its six sides, so he collected a bunch of (cheap) dice and tossed them a few thousand times to test his professor’s theory… Evidently, according to said (bored) student the numbers five and six appeared more frequently than the numbers one through four. Professor Mosteller’s unsurprising response was that the student had not tossed the dice enough times. ‘Rest assumed’, the student was told, the law of large numbers would ‘kick in’ and everything would (eventually) converge to 1/6. Undeterred, the student continued rolling a few thousand more times, but the fives and sixes were still showing up way too frequently. Something fishy was afoot. It turns out that the observed frequencies were not quite 1/6 because the holes bored into dice – to represent the numbers themselves – shift the centers of gravity toward the smaller numbers, which are opposite the numbers five and six. Ergo, the two highest numbers were observed with greater frequency.

https://cpes.org.uk/wp-content/uploads/2016/06/stefan_voss_paper.pdf

In other words, it takes a very great deal of work to undertake a randomized controlled experiment, because “control” is such a misleading term in the real world: something uncontrolled or uncontrollable intervenes significantly between treatment and measurement.
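As a rough sketch of the dice point (my own illustration, not from the paper linked above), the short Python simulation below rolls a die whose faces are weighted slightly toward five and six, the way bored-out pips would weight a cheap die; the exact probabilities are invented for illustration. Adding more rolls never rescues the 1/6 assumption, because the law of large numbers converges on the die’s actual, biased probabilities.

```python
import random

# Hypothetical face probabilities for a cheap die whose bored-out pips
# shift its center of gravity toward the low-numbered faces, so five and
# six land face-up slightly more often. The exact numbers are invented
# purely for illustration.
FACES = [1, 2, 3, 4, 5, 6]
WEIGHTS = [0.155, 0.155, 0.160, 0.160, 0.185, 0.185]

def observed_frequencies(n_rolls, seed=0):
    """Roll the biased die n_rolls times and return each face's observed frequency."""
    rng = random.Random(seed)
    counts = {face: 0 for face in FACES}
    for _ in range(n_rolls):
        counts[rng.choices(FACES, weights=WEIGHTS)[0]] += 1
    return {face: count / n_rolls for face, count in counts.items()}

# More rolls do not push the frequencies toward 1/6 (about 0.167); they
# settle ever more firmly on the biased probabilities instead.
for n in (1_000, 10_000, 100_000):
    freqs = observed_frequencies(n)
    print(n, {face: round(freq, 3) for face, freq in freqs.items()})
```

The intervening factor here is the die’s geometry; rolling longer only makes its effect clearer.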

This means that one reason there haven’t been more nuclear accidents (given their complex and unpredictably interactive technologies) is not that “we haven’t waited long enough.” More likely, other intervening factors have been and are at work.

One such factor is that the plants have been managed beyond their technologies. They are managed more reliably than theory predicts precisely because of the next failure ahead–that is, there are no guarantees, and so the plants must be managed reliably instead.
