“What’s missing?” in this catastrophic earthquake scenario

I

–Start again: A system is more or less complex in terms of the number of components it has, the different functions each component serves, and the degree of interconnectivity (latent and manifest) between and among those components and their differing functions.
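As a purely illustrative aside (the component names, functions and link counts below are hypothetical, not taken from the text), the definition can be kept concrete by reporting the three dimensions side by side rather than collapsing them into a single score:

```python
# Illustrative sketch only: a toy rendering of the three dimensions of complexity
# named above (components, functions per component, interconnections).
# All names and counts are hypothetical.

from itertools import combinations

# Components mapped to the functions each serves
functions = {
    "pump_station": {"supply", "pressure_regulation"},
    "treatment_plant": {"supply", "quality_control"},
    "control_room": {"monitoring", "dispatch"},
    "backup_generator": {"power"},
}

# Manifest interconnections observed so far (latent ones may exist but go unlisted)
links = {
    ("pump_station", "treatment_plant"),
    ("control_room", "pump_station"),
    ("control_room", "treatment_plant"),
}

n_components = len(functions)
n_functions = len(set().union(*functions.values()))
n_possible_links = len(list(combinations(functions, 2)))
interconnectivity = len(links) / n_possible_links  # manifest links as a share of possible ones

print(f"components: {n_components}, distinct functions: {n_functions}, "
      f"manifest interconnectivity: {interconnectivity:.2f}")
```

Keeping the three counts separate, rather than averaging them into one “complexity index,” is in the spirit of “First, differentiate!”: the latent interconnections left uncounted are precisely what the next paragraphs are about.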

This social science definition isn’t the only one (cf. complexity is the ineffable beyond words), but the problem isn’t too many competing definitions. Rather, pundits talk about complexity as if everyone knew what it was, with some only later concluding “it has to be more complex than that!”

Two conclusions follow from this definition, it seems to me. Right off, the methodological imperative is not, “First, simplify!,” but, “First, differentiate!” What components? What functions? What interconnections? Equally important: Complex with respect to what? Complex under what conditions? Only in probing this way, I believe, can you come to understand what you have simplified in taking for granted.

If the point of departure is, “First, differentiate!,” then the chief methodological question to ask upfront is: “What am I missing that’s right in front of me to help with that differentiation?” If you can’t see what you would be seeing were it not for your blind-spots (cognitive, professional, other), how can you expect to find the complications that matter but aren’t visible, as if hidden out of sight behind your head?

–To be clear, I mean “what’s missing right in front of me” literally. First, two examples from outside policy and management. What I’m missing right in front of me is coming to see in the lines from a George Meredith poem,

In tragic hints here see what evermore
Moves dark as yonder midnight ocean's force,
Thundering like ramping hosts of warrior horse,
To throw that faint thin line upon the shore!

that “horse” and “shore” function as anagrams, and then to ask: What effect or difference does this make for my reading? (E.g., as if “ramping hosts of warrior” reversed into a “faint thin line”.) It’s also coming to see in Hiroshige’s print,

that the waves of water and night-light are produced by the underlying grain of the woodblock, and then to ask: What effect or difference does this make for my viewing? (E.g., as if the female figure is walking out from the grain-waves behind her.)

–The vast majority of us, of course, are inexperienced and untrained in reading for anagrams or in seeing the technique of kimetsubushi at work. We must instead be distracted into taking a second or further look. For the inexperienced, the way to be sidetracked or distracted is by surprise—in this case, the surprise of finding the grain-wave pattern on your own, or an oddity in the “ocean’s force” being contraposed by “horse” to “shore.” Even if afterwards Meredith’s lines remain mediocre and Hiroshige’s print still astounding, overlooking the complexity is that simplification taken for granted which robs us of the surprises that inform.

Note that the most plausible reason for not seeing what goes unseen—“Well, the reality is that it’s just not there at all!”—turns out to be the least plausible when living in a complex world of many components, functions and interconnections. In that world, new connections can be and are uncovered all the time, where not-knowing, inexperience and difficulty are ever present.

II

Now an example of how what’s missing and the rest work in policy and management.

So as not to make this easy, consider what many call the most catastrophic natural disaster in the United States, were it to happen: a magnitude 9.0 earthquake in the 800-mile-long offshore Cascadia subduction zone in the Pacific Northwest. To make the impacts real, I focus on research and work at one specific site and its affected infrastructures: Portland, Oregon.

–A great deal of seismic attention and concern has been directed at the state’s Critical Energy Infrastructure (CEI) hub in Portland. (The CEI hub is core to Oregon and not just the city.) It is “a six-mile stretch of the lower Willamette River where key liquid fuel and natural gas storage and transmission facilities and electricity transmission facilities are concentrated.” The area, however, is subject to lateral spreading, ground shaking, and liquefaction, among other physical vulnerabilities (tsunamis, hazardous-liquid explosions and fires, high-voltage line collapse). Much of the infrastructure has not been brought up to seismic standards; it was built with what are today major deficiencies.

The CEI hub, in other words, is highly vulnerable were earthquakes to occur, and it is recognized that “to minimize extensive direct earthquake damage, indirect losses, and possible ripple effects, substantial improvements to the critical energy infrastructure are necessary.” “We know the earthquake is coming. We know we have to take steps to address this,” policymakers and legislators admit and studies confirm.

Unsurprisingly, much of the attention has been directed to mitigating the severity of the vulnerabilities. New seismic standards have been brought into effect as have prohibitions on expanding CEI hub tank farms, better containment barriers have been studied, retrofitting is underway, automatic shut-off valves are being adopted, alternative supply chains and better emergency responses are actively modeled or prototyped, and proposals have been offered for increasing/relocating the storage capacities elsewhere and closer to communities affected. Not enough has been done, but it is important to recognize that a magnitude 9.0 earthquake would test any “built-to-last” scenario.

–At which point my thinking is distracted by a familiar quip that comes to mind: Today, it’s easier to imagine an end to the world than an end to capitalism.

It’s easier to imagine a M9 earthquake scenario both obliterating an even better-hardened CEI hub and unleashing catastrophic fuel spills, fires, landslides, death and destruction than it is to get rid of these structures before it’s too late.

It’s easier to imagine that a Presidential Disaster Declaration would be immediately issued, that competent personnel would be identified and transferred into the states to take over from infrastructure staff who don’t show up because they are trying to save their families, that local people will only figure out what to do after they see what’s left to work with, and that interconnected infrastructures, just like the communities, would be islanded off from each other indefinitely—it’s easier to imagine that and imagine far worse than it is to get rid of the CEI hub and imagine ramifications of the alternatives.

–Which returns us to the earlier “Unsurprisingly”. Say again? “We know the earthquake is coming. We know we have to take steps to address this.” Huh?

In fact, think a bit more about what they don’t–can’t?–see right in front of them. What better way, save war and the plague, to bring the governments of the Pacific Northwest to their collective knees than “solutions” like those pre-disaster mitigations, offered because the existing economies are so taken for granted that their believers see no choice—no alternative—but to be catastrophic now on unprecedented scales? This “easier to imagine,” we are to believe, is taking complexity seriously.

What next on the earthquake mitigation agenda? Deep machine learning and better algorithmic management for preparedness? Autonomous robots to direct evacuations once the disaster has hit? But what about the trade-offs? And who’s to say that the uncontrolled releases of wastewater and stormwater flows into the Willamette won’t mitigate the toxic fuel fires racing across it? Where is the governance structure to decide all this!

Principal sources

Oregon Seismic Safety Policy Advisory Commission (2013). The Oregon Resilience Plan: Reducing Risk and Improving Recovery for the Next Cascadia Earthquake and Tsunami. Report to the 77th Legislative Assembly. Salem, Oregon.

Ricks, C. (2021). “The anagram.” In his: Along Heroic Lines, Oxford University Press, Oxford UK: 19 – 55.

This blog entry updates the earlier “What am I missing?”

An ecosystem at the intersection of two schools of infrastructure studies

–“Infrastructures,” like “risk” and “trade-offs,” has become so ubiquitous a term that we may wonder what purchase it still brings. In answer, I illustrate one difference, and its implications, between two of the many schools of infrastructure studies out there: The first focuses on the real-time physical and management side of infrastructure operations, the other on the socio-political-cultural consequences–short and longer term–of infrastructure placement and operations for their (non-)users.

I focus on one big difference, though the two fields admittedly overlap: Both look at equity consequences for users, albeit in different ways. There have been studies on the differing impacts of infrastructure failure on under-represented minorities (the electricity grid fails and communities of color are disproportionately affected). Those studies are, however, few in comparison to the second field’s far greater attention to the equity consequences of actual infrastructure operations on such communities.

–Here I focus on one insight of the socio-political-cultural studies: Large infrastructures are defined by their fragility. That is to say, they decay and are in need of constant, if not major, maintenance and repair. This central notion of infrastructure fragility, i.e., infrastructures are material forms of duration predicated on maintenance and repair, contrasts with the high reliability management of critical infrastructures about which my colleagues and I write.

In our field, critical infrastructures are defined by providing what society considers vital services safely and continuously in real time, even during (especially during) turbulent periods including those of physical and material neglect. While the need for infrastructure maintenance and repair can be read into the preceding, infrastructure decay and fragility are not as central.

–Here’s an example of how the difference matters by way of implications for California Delta ecosystems.

Start from the perspective of the infrastructure studies on the real-time large system management. Consensus has grown that the Delta cannot go on with “business as usual” in light of catastrophic risks associated with earthquakes, rising sea levels, and encroaching urbanization, to name but three threats.

Also, problems have long existed over how to price ecosystem services and the existence value of having one and only one CA Delta. Nor has there been substantive attention given to debt management, especially the indebtedness to be incurred via proposed investments in Delta infrastructure (water supplies, levee protection). To take a recent example, the proposal for massive water tunnel infrastructure (the “California WaterFix”) had capital cost estimates shorn of annual recurrent and operating expenses.

–Nowhere in the immediately preceding is the centrality of infrastructure decay, maintenance and repair. Were the latter central, different Delta problems would surface, not least of which is: If Delta infrastructures are by definition decaying even when succeeding in mandated real-time service provision, what can we expect to be happening to the water ecosystems from which they derive their services?

If fragile ecosystems (a term defined and popularized by ecologists) are being destroyed by otherwise fragile infrastructures, does this imply these ecosystems need to be maintained and repaired as part of infrastructure remediation? Would they not then be ecoinfrastructures?

–On the other hand, if ecosystems are not to be destroyed and in no need of repair and maintenance (though of course still fragile), what does it then mean to insist, as one water supply official put it, “we have to design infrastructure systems with full ecosystems planned in”? Doesn’t “planning in,” like “designing for,” center ecosystems also around “decay”?

If, though, your view is that ecosystems do decay over time, you may be closer to the second school of infrastructure studies than you think.

Principal sources

Chakalian, P., L. Kurtz and D. Hondula (2019). “After the lights go out: Household resilience to electrical grid failure following Hurricane Irma.” Natural Hazards Review 20(4).

Gupta, A. (2020). “Infrastructure as decay and the decay of infrastructure: Akhil Gupta reflects on maintenance, temporality and change.” Presentation delivered for the Danish Institute for International Studies, October 7, 2020 (video accessed online on November 21, 2021 at https://www.diis.dk/en/event/infrastructure-as-decay-and-the-decay-of-infrastructure).

Nilsson, C. and G. Grelsson (1995). “The fragility of ecosystems: A review.” Journal of Applied Ecology 32(4): 677 – 692.

Ramakrishnan, K., K. O’Reilly, and J. Budds (2021). “The temporal fragility of infrastructure: Theorizing decay, maintenance, and repair.” EPE: Nature and Space 4(3): 674 – 695.

COP26 and intermittence

–One way major counternarratives come to us is by reversing conventional wisdom. More than a few Americans take pride that the US has not taken a direct hit during wartime since the early 1800s. Yet our nation’s inability to safely store nuclear waste from its weapons arsenal reveals the Cold War to be the first war in modern times where the US has indeed taken direct hits. This recasts one prevailing view of the Cold War, and I think to the better.

Reversals in conventional wisdom are frequent and—let’s admit it—easy to undertake. We can think of any number of policies whose actual implementation produced the opposite of what was initially planned. This underscores a key feature of policy narratives: They import their counternarratives. In semiotic terms, a story is marked by what it is not. (Which is louder: The seconds of astonished silence after the last note has sounded or the thunderous applause that follows?)

–But it’s more than “You say ‘x’” and “I counter with ‘not-x’.” Candidate counternarratives thrive between the extremes of asserting x and not-x, when the issues are uncertain, complex, incomplete and disputed at the same time.

One group speaks of the preconditions for good governance as a free press, a secure judiciary, and respect for human rights. Complexity ensures you can rearrange the adjectives and replace some of the nouns: Another group tells us good governance lies in having secure property rights, respect for the law, and free markets. Thinking-in-threes is binary logic’s way of admitting things are more complex.

–The real value of complexity is that you can keep differentiating any three, even before adding a fourth and so on. Or from other directions: It’s always policy-relevant to say, “I’m sure I’ve left a very good deal out.”

How so? This admission helps when it shifts the question from the easier, “Why does this issue matter now?” (universal answer: politics, money and egos), to the more usefully difficult, “Matter with respect to what and under what conditions?”

An example from the past: Racing horses were once painted with both pairs of legs out-stretched; photography later showed the opposite. Better to say, I think, less that photos were correct and more that they pushed what had been visible to the human eye further. That way, we admit what we see and seeing itself are complex enough for further recasting. That way, we intermittently see horses gallop still as before.

–A recent example: COP26, the 26th United Nations Climate Change Conference, was for many (myself included) a clear failure to do the needful in limiting temperature rise. Let’s say that is true (at least up to that point).

Even then, the crux is not: “Thus,” alternative voices were left out and alternative politics side-lined. You can no more essentialize those voices and politics than you can essentialize the conference. For it first has to be asked: Which COP26 failed?

Such a conference is never altogether there, if only because those attending in Glasgow are being themselves in one venue while being other selves in other venues there. COP26 is and was riddled with this intermittence and who’s to say the earlier or later versions between October 31 and November 13 2021 are not its upside?

This intermittence (like surprise) carries with it a great deal of information. (These shifts have, in the parlance, “high-level informativity.”) Which is to say: I’m sure I’ve left far too much out in stopping short at COP26 as an overall failure. To say anything less is hallucinatory precision.

Principal source

On this wording for intermittence from the work of Marcel Proust, see P. Weinstein (2019). “Soul-Error.” Raritan 38(4): 1-11.

Regulation, deconstructed and reconstructed

This entry consolidates points from earlier blogs on regulation.

I

It’s fairly easy to deconstruct a dominant discourse about government regulation in some quarters and identify its discrepancies. Reading the Financial Times in the run-up to and during the aftermath of the 2008 financial crisis, I learned that:

Post-crisis banking re-regulation adds significant costs to the economy and thus reduces growth, while the pre-crisis light-touch regulation undermined the very financial infrastructure necessary for economic growth.

What were indicators of positive economic growth under light-touch regulation—rapid uptake in home mortgages—were later indicators of regulatory failure. Mortgages were a relatively safe asset for banks to own, until they were the source of unimaginable losses.

Overregulation is nowhere better illustrated than in comparing the more recent nearly 2000 pages of the Dodd-Frank legislation to the less than 20 pages of the Depression’s Glass-Steagall Act—but under no circumstances are our regulators to repeat the 1930s! Anyway, those who lobby for simplifying regulation end up making it more complex.

It’s a bad thing for regulation to try to squeeze too much risk and complexity out of banking, especially when fresh risk reduction—less leverage, greater capital reserves—is itself too risky. Regulation discourages risk taking and only with risk taking do we have innovation, except when too much innovation and risk taking are encouraged as in the deregulated finance sector up to the 2008 crisis.

Regulators must always have the best information, even when those regulated—the banks and investment firms—haven’t a clue as to their current real-time positions. Regulators will never have the capacity to know the real-time positions of actual firms, except in those cases where firms, like Lehman, insisted regulators did have the real-time information.

Global business and supply chains are great, except when the firms are too big to fail or regulate. Country defaults are horrible, except where they weren’t, as in Argentina or Mexico.

Global markets are a permanent fact of life, but we must never suppose that the drive to regulate them for the better is just as permanent. Markets are best at price discovery, except where market efficiencies are realized because of lack of transparent discovery, as in unregulated dark pools.

II

In short, I learned from the Financial Times that government regulation is in crisis because of the shambolic expectations about government regulation. (A friend suggests I should have been reading the Guardian.)

However, look closer at the discrepancies.

Note how the Financial Times discourse disaggregates into specific example and general proposition. At the micro-level, indicators, defaults and legislation are to be contrasted case by case; at the macro-level, “Do X” entails somebody else’s “Do not X” (as if semiotically a proposition is defined against the backdrop of what it is not, including counter-propositions).

The X/not-X pairings are reminiscent of the problem adduced long ago by Herbert Simon in his The Proverbs of Administration: “they almost always occur in mutually contradictory pairs. ‘Look before you leap!’—but ‘He who hesitates is lost.’” “For almost every principle one can find an equally plausible and acceptable contradictory principle. Although the two principles of the pair will lead to exactly opposite organizational recommendations, there is nothing in the theory to indicate which is the proper one to apply” (my italics).

Why? Not surprisingly, Simon comes back to the fact that administration and organization—and, here we add, government regulation—are so complex as to ensure no one categorical proposition or single exemplar dominates important cases at all the important times. The entire point of complexity is that there is no one theory for deciding.

What to do? How can we more usefully describe government regulation and its various scenarios and specifics, without dead-ending in contradictions?

Here too the methodological imperative is: First, differentiate! “Government regulation” is too abstract a category for the preceding questions. In what follows, I focus specifically on one subcategory, that of regulation of critical infrastructures by their regulators of record.

III

Shifting to infrastructure regulation helps identify and describe a meso-level between macro-design principles for such regulation and the micro-cases of actual regulation. This is to say the subcategory, “infrastructure regulation,” is only partly described by macro-propositions and micro-examples.

This meso-level is the space of decisionmaking competence bounded by better systemwide practices that emerge across a variety of site-specific examples and by the what-if scenarios that adapt macro-principles to local conditions that vary case by case. The knowledge embodied in systemwide better practices and in locally-specific design applications adds to the knowledge represented by macro-principles on their own and by micro-cases tacitly understood.

These different knowledge bases differentiate infrastructure regulation further, and in so doing we begin to reconstruct what regulation—at least infrastructure regulation—is all about. For our purposes here, this reconstruction has three features that recast infrastructure regulation significantly:

  • To the extent that infrastructure regulation is equated with what their regulators of record do, we will have a very fragmented understanding of how regulation functions for critical infrastructures. Infrastructure regulation is not just what the regulators do; it is also what the infrastructures do in ways that their regulator of record could never do on its own. This includes infrastructure control rooms correcting for regulatory deficiencies and gaps in real time by the application of these better systemwide practices and what-if contingency scenarios, none of which need have been formalized or made official.
  • In related fashion, there is not just the risk of regulatory non-compliance by the infrastructure; there is also the infrastructure’s risk of compliance with incomplete or otherwise defective regulations promulgated by the regulator of record. Note that the criticality of time from discovery to correction of a defect reinforces a process of dispersed regulatory functions: A shorter time to discovery has the advantage of catching errors that would otherwise have propagated into much larger ones if left uncorrected (a stylized sketch of this point follows the list below).
  • The upshot for the regulator of record?

• If, notwithstanding the recognition that regulation is understandably dispersed, government still insists that the regulator’s task is one of regulating the whole cycle of the infrastructure throughout its operational stages of normal, disrupted, failed, and recovered onwards, then it is better to say that at best the regulator of record is in permanent setback management. At worst, its own activities require the coping behavior we associate with emergency management during crises, i.e., its setbacks are primarily negative.

• This is to say that the regulator, in terms of its own cycle of reliable operations, never recovers fully; or, if you’re on the optimistic side, recovery is the new normal for the regulator of record. We know from research, however, that the probability of failing to be reliable is even higher in recovery. To assign more and more duties and responsibilities to the regulator of record, under the insistence that it regulate the whole cycle of the infrastructure, is to walk hand-in-hand with ignorance on one side and magic on the other.
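To illustrate the time-to-discovery point in the second bullet above, here is a stylized sketch. The growth rate and time horizons are assumptions invented for illustration only and carry no empirical weight; the point is simply that a defect which compounds while undetected is far smaller, and cheaper to correct, the sooner it is discovered.

```python
# Stylized illustration (all numbers are assumed): a defect that compounds each
# period while undetected grows non-linearly with the time it takes to discover it.

def propagated_size(initial_defect: float, growth_per_period: float, periods_undetected: int) -> float:
    """Size of the defect at discovery, assuming simple compounding while undetected."""
    return initial_defect * (1 + growth_per_period) ** periods_undetected

initial = 1.0   # defect size when introduced (arbitrary units)
growth = 0.5    # hypothetical 50% compounding per period while unnoticed

for periods in (1, 3, 6, 12):
    size = propagated_size(initial, growth, periods)
    print(f"discovered after {periods:>2} periods -> defect size {size:8.1f}")
```

On these assumed numbers, a defect caught after one period is more than eighty times smaller than the same defect caught after twelve, which is why dispersed, real-time discovery by control rooms matters alongside the regulator of record.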

IV

So much for downsides. How in contrast is the insistence on a dispersed nature of infrastructure regulation useful for rethinking the wider issue of “infrastructure regulation”?

Textbooks are fond of repeating that government has allocative, distributive, regulatory and stabilizing functions. But society’s critical infrastructures have their own allocative, distributive, regulatory, and stabilizing functions. Of course, government can be seen as prioritizing a number of social values (healthcare, education, ...) and relying on critical infrastructures to realize them. Yet these infrastructures–many of them privately owned or managed–prioritize the societal values of systemwide reliability and safety, in particular, of their respective critical services.

The challenge is to better understand the institutional niche of critical infrastructures, that is, how infrastructures themselves function in allocating, distributing, regulating and stabilizing that reliability and safety apart from, if not independently of, the respective government regulators of record. That this knowledge will always be for regulators partial and punctured by gaps and ignorance should go without saying.

Mercator’s projection

For who makes rainbows by invention?
And many standing round a waterfall 
See one bow each, yet not the same to all, 
But each a hand’s breadth further than the next.   
                             Gerard Manley Hopkins

What good is trial and error learning when a system’s massive error means no trials possible thereafter? You do not want to push an infrastructure’s control operators into prolonged unstudied conditions and then wonder why they aren’t reliable.

Some think otherwise. “First off,” the project designer tells us, “I’m always working in unstudied conditions. Every major project, I’ve got to make all manner of assumptions.” I counter: The challenge for project designers is to find out what the better practices are for starting off complex project designs. Here I mean the really-existing practices that have emerged and been modified over a run of different cases and shown to be more effective for design implementation.

“But how can a field or discipline grow if it doesn’t move into unstudied conditions by doing something for the first time…” This response is often stated as if it were established fact. Here too, better practices are to be searched for first. Where they aren’t found, then, yes, systemwide innovation should not be undertaken if it reduces options, increases task volatility, and diminishes maneuverability in real-time complex system operations.

“But, there always has to be someone who does something for the very first first-time, right?”

At this point, Dutch bluntness is called for: “The burden of proof is on you to demonstrate this is indeed the very first time.” This is a planet of 7 plus billion, after all.

“But still,” our friends, the economists, press: “What about the pivotal role of innovation in the economy!” Well, yes, but so too is the role of the infrastructures upon which the innovation economy depends. To treat innovation as more important than the infrastructures (without whose reliability there wouldn’t be most innovations) risks Mercator’s projection: It distorts by over-enlarging the already large.

Innovationists don’t see it that way. The risks they take end up being the price few of the rest of us ever thought we’d have to pay.

Recasting Roosevelt’s New Deal

Then he gets up, firmly
shuts the door, and--quietly, 
into the ear:
‘In 1921 or 22, 
 Gorky admitted to me sadly
 what Lenin had told him in strict confidence:
 “The experiment has failed.”’
 Grateful to Shklovsky
 for placing such trust in me,
 I bowed to him in silence.
 We both remained silent.
 Without a word, he raised 
 one finger to his mouth
 and looked at me sternly. 
                  “Victor Borisovich Shklovsky” by Lev Ozerov, translated by Boris Dralyuk

There is the view that the unknown-unknowns of the 1930s Depression in the US caused such widespread dread and fear that large-scale social experimentation, like the Social Security program, became possible (the Roosevelt administration’s “New Deal”).

I suggest what looks like large-scale experimentation in the midst of unknown unknowns was, in part, policymakers probing a set of known unknowns (i.e., known uncertainties).

–Why does this matter?

Because the primary fears said to have prompted the New Deal would have produced, in the control room operators of society’s core infrastructures, (1) the avoidance of systemwide experimentation in the midst of unknown unknowns by means of (2) choosing to operate the system in the midst of uncertainties about probabilities or consequences they knew something about and could live with.

–It’s been argued that one great fear giving rise to the New Deal revolved around deep worries about whether the leading liberal democracies could compete successfully with totalitarian dictatorships. Bluntly: Does resorting to presidential emergency declarations in dire times still leave us a liberal democracy or tip us well into rule by dictatorship?

Yet emergencies were far from uncharted territory in the US, and not just in terms of Abraham Lincoln’s executive actions during the US Civil War. The earlier Federalists were also worried about emergencies, and the accommodation they made was that, yes, presidential emergency powers may be needed in extraordinary times, but these would not serve as precedent for governance thereafter.

–From this vantage point, the New Deal looks like managing against having to experiment in unknown unknowns by choosing among known uncertainties to be put up with though disliked. Choosing among known uncertainties buffered against having to cope with the more radical unknowability experienced in other countries.


Principal sources

C. Fatovic (2009). Outside the Law: Emergency and Executive Power. Baltimore: Johns Hopkins University Press.

I. Katznelson (2013). Fear Itself: The New Deal and the Origins of Our Time. Liveright Publishing Corporation. W.W. Norton & Company: New York, NY.

Reliably good enough?

Under what conditions is good enough reliably good enough?

I

–The best-known gloss on good enough must be that of D.W. Winnicott, the psychotherapist, when describing the good-enough mother. The good-enough mother is not perfect, and that is a Very Good Thing. At baby’s birth and for a period thereafter, the good-enough mother is one who manages to be there when child needs mother. So available and in sync with the child’s needs is mother that the child at some point feels it created mother–indeed, created the perfect mother. Over time, the real mother—and this is where her “good enough” comes in—disillusions the child that “mother” is its very own creation.

Winnicott describes what the good-enough mother does as “management,” “provision” and “reliability.” One of his descriptions illustrates the point for a specialist audience of his:

One cannot help becoming a parent-figure whenever one is doing anything professionally reliable. You are nearly all, I expect engaged in some sort of professionally reliable thing, and in that limited area you behave much better than you do at home, and your clients depend on you and get to lean on you.

–The reliability professionals in large critical infrastructures face the dilemma of good-enough parents: How do they disillusion us, the consumers of water, electricity and other vital services, into recognizing that our being better off is now more up to us than before, and in ways we really haven’t yet appreciated?

How to reinforce in us that the declines in services underway aren’t “declines” any more than is the reality-check that we did not create mother on our own? If the control room operators we interviewed are representative, reliability professionals are the last people to persuade us out of our fever dreams. They think we’re adults.

II

. . .keeping a sharp outlook—
here's where free choice vanished, here rights, here the
real meaning of the word—(you choose)—consequence, capital, commodity, con-
sumption. Community? Come here says time. Just try to
find it, the here. Such a good game to keep you
occupied for now. The rest of now.
                              Jorie Graham ("Exchange")

–What, though, to say to those who argue good enough is not possible, here?

What if there are fewer and fewer routine responses to more and more shocks, less and less capacity to think ahead or be resilient afterwards? All this moreover happening in a world where problems are unprecedented, where competent leaders are not to be found, and where regulators are even more ill-equipped?

If so, the answers are crystal-clear: Our infrastructures are not merely headed for catastrophe. They—and we—are already in the middle of that disaster, unfolding right before our eyes.

–But is that actually happening?

Are they physically crashing right before your eyes?

Yes, the pressures on critical systems are real and threatening; but, there is a world of difference between having no guarantees for future reliability and insisting, like Jorie Graham, that the failed future is here for the rest of now.

III

But there’s something very odd about that failure assigned to capital, commodity and consumption. We’re to suppose that their systems of production fail notwithstanding that, in reality, their technologies are constantly replaced before they have had time to physically collapse and permanently fail. No guarantees of course, but still: more often than not, at least in digital-reliant systems.

Better not to dwell on that, it seems, and write articles with titles like, “Getting the social cost of carbon right.” As if we have that kind of control where to fail big-time is defined as pre-empting failures nearly as big. As if this were even good enough!

Changing risk and changing safety are different!

Many good things must be said about the current literature on risk and safety, but here I introduce four provocations. To telegraph ahead, assumptions in this literature must be surfaced and questioned more closely.
  1. Risk and safety are causally related?
Risk and safety overlap as terms in ordinary language. Some seek to formalize the purported relationships—e.g., increasing safety barriers reduces the risk of component or system failure. In contrast, I come from a field, policy analysis and management, that treats safety and risk as very different. Indeed, one of the founders of my profession (Aaron Wildavsky) made a special point of distinguishing the two in his work. The reasons are many for not assuming that “reduce risks and you increase safety” or “increase safety and you reduce risks.” For example:

However it is estimated, risk is generally about a specified harm and its likelihood of occurrence. But safety is increasingly recognized, as it was by an international group of aviation regulators, to be about “more than the absence of risk; it requires specific systemic enablers of safety to be maintained at all times to cope with the known risks, [and] to be well prepared to cope with those risks that are not yet known.” . . . In this sense, risk analysis and risk mitigation do not actually define safety, and even the best and most modern efforts at risk assessment and risk management cannot deliver safety on their own. Psychologically and politically, risk and safety are also different concepts, and this distinction is important to regulatory agencies and the publics they serve. . . . Risk is about loss while safety is about assurance. These are two different states of mind. (Danner and Schulman, 2019)

Once again, the differences come with the failure scenarios—risks with respect to this set of granularities as distinct from safety with respect to that set.
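As a purely illustrative aside (the scenario names, probabilities, losses and enabler list below are assumptions invented for this sketch, not drawn from Danner and Schulman), the distinction can be made concrete: risk shows up as an expected loss over specified failure scenarios, while safety shows up as whether systemic enablers are being maintained, including against risks not yet specified.

```python
# Toy contrast (assumed names and numbers): risk as a specified harm weighted by
# its likelihood, versus safety as whether systemic enablers are maintained.

from dataclasses import dataclass

@dataclass
class FailureScenario:
    harm: float        # consequence in arbitrary loss units
    likelihood: float  # probability over some stated period

    @property
    def risk(self) -> float:
        """Conventional expected-loss reading of risk for this specified scenario."""
        return self.harm * self.likelihood

# Hypothetical scenarios a utility might specify
scenarios = {
    "transformer_fire": FailureScenario(harm=5_000_000, likelihood=0.002),
    "line_contact_ignition": FailureScenario(harm=50_000_000, likelihood=0.0005),
}

# Safety, on the reading quoted above, is not the inverse of these numbers; it asks
# whether enablers are maintained, including for risks not yet specified.
enablers_maintained = {
    "trained_crews_on_shift": True,
    "situational_awareness_tools": True,
    "procedures_for_novel_conditions": False,  # hypothetical gap
}

total_specified_risk = sum(s.risk for s in scenarios.values())
safety_assured = all(enablers_maintained.values())

print(f"expected loss across specified scenarios: {total_specified_risk:,.0f}")
print(f"systemic enablers all maintained: {safety_assured}")
```

The two answers can move independently of one another, which is the point: driving the expected-loss number down says nothing, by itself, about whether the enablers of safety are in place.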
  2. Interdisciplinary focus?
It’s de rigueur to call for more interdisciplinary research on risk and safety management in large socio-technical systems, including critical infrastructures. Yet such calls not only must surmount the standard-normal qualitative v. quantitative, reductionist v. holistic, and positivist v. post-positivist “divides.” They must also address these differences: “risk management versus safety management” denotes not only regulatory, political, and psychological differences (as in the above quote), but also societal, economic, historical, sociological, and cultural differences. And why stop there, when other differences (neurological?) matter case by case? I’ve never read a call, routine as they are, for interdisciplinarity granular enough to tell how to do this.
  3. Control risk? Control safety?
Policy analysis and management insist on Very Big Differences between “control” and “manage,” as in: We have to manage risk or safety because we cannot control them. In ordinary language it is common enough to conflate “manage” and “control.” That will not do for policy and management complexity. Control is when the system’s input variance, process variance and output variance are rendered low and stable. Think of the nuclear reactor plant: guns, guards and gates are used to ensure outside inputs are controlled; processes within the nuclear station are highly regulated by government to ensure few or no mistakes are made (operations and procedures that have not been analyzed beforehand are not permissible); and the output of the plant – its electricity – is kept constant, with regulated low variance (nuclear power is often considered “baseload,” on top of which other types of electricity generation are added).

One crux of the Anthropocene is that critical systems having low input variance/low process variance/low output variance are fewer and fewer because of increasing political, economic, social and other unpredictabilities. Electricity generation sources—and very important ones—now face high and higher input variability. Think of climate change, citizen and consumer unrest, regulatory failures and other external impacts on the inputs to energy production. These have posed the challenge of managing what can no longer be controlled (if ever it was). In response, operational processes inside a good number of power plants have had to become more varied (reflecting the so-called law of requisite variety), with more options and strategies to process and produce what still must be a low-variance output: namely, electricity at a regulated frequency and voltage.

It is hard to believe, in other words, that all the talk about, say, artificial intelligence (AI) “controlling” behavior will not need to be far more nuanced and differentiated when it comes to really-existing policy and management implications. One example will have to suffice, from underwater oil and gas exploration: Alarms produced by autonomous systems can and do often turn out to be false alarms occurring under already turbulent task conditions at sea. Indeed, operating at a higher level of autonomy while having to cope with indiscriminate false alarms may no longer permit the real-time operators to revert, just in time, to lower levels of autonomy, e.g., managing via more manual operations, as and when nothing else works.
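A minimal sketch of this variance framing follows, assuming a crude “low”/“high” coding of each variance; the cases and labels are illustrative only, not an operational classification.

```python
# Illustrative sketch (thresholds and labels are assumptions, not from the text):
# classify a system by the variance of its inputs, processes and outputs, in the
# spirit of the control/manage distinction above.

def classify(input_var: str, process_var: str, output_var: str) -> str:
    """Each argument is 'low' or 'high' variance."""
    if (input_var, process_var, output_var) == ("low", "low", "low"):
        return "control: all three variances kept low and stable"
    if output_var == "low":
        return "manage: output still held to low variance despite variable inputs/processes"
    return "cope: not even the output variance can be held low"

# Hypothetical cases echoing the examples in the text
print("nuclear baseload plant       :", classify("low", "low", "low"))
print("grid under climate stress    :", classify("high", "high", "low"))
print("system in cascading failure  :", classify("high", "high", "high"))
```

The design choice is deliberate: “manage” is distinguished from “control” solely by whether a low-variance output is still being produced out of variable inputs and processes, which echoes the electricity example above.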
  4. Building to macro systems from micro data or micro foundations?
This integrated-systems approach has been called into question across a variety of academic fields, e.g., the repeatedly disappointed efforts to build up macroeconomic models from separate subsystem models or to ground macroeconomics in microeconomics. It’s been said that no economist in his or her right mind would rely on the micro-foundational Dynamic Stochastic General Equilibrium (DSGE) models of the economy developed and tinkered with over decades. (See also the disappointing history of “lifecycle modeling” for threatened species.) Note, though, this is not equivalent to the complaint against the reductionism of First, simplify! We all simplify; we don’t all then go about trying to integrate the simplifications or assume that a simplified discussion of system components—start with the worker!, the task!, or that AI software!—is the right place to begin discussion of complex systems in actual operations. I’m not sure that even those detailed analyses revolving around the labor-augmenting rather than labor-substituting nature of AI software recognize that humans are the only “integrated comprehensive model” we will have for some time to come.

Principal sources

Danner, C., and P. Schulman (2019). “Rethinking risk assessment for public utility safety regulation.” Risk Analysis 39(5): 1044-1059.

Roe, E. (2020). “Control, Manage or Cope? A Politics for Risks, Uncertainties and Unknown-Unknowns.” Chapter 5 in The Politics of Uncertainty: Challenges of Transformation (eds. Ian Scoones and Andy Stirling). Routledge, UK.

Schor, J.B. (2021). “Dependence and heterogeneity in the platform labor force.” A policy brief for the Governing Work in the Digital Age. Hertie School, Berlin.

Storm, S. (2021). “Cordon of Conformity: Why DSGE models are not the future of macroeconomics.” International Journal of Political Economy 50(2): 77-98 (DOI: 10.1080/08911916.2021.1929582).

Utne, I.B., I. Schjølberg, and E. Roe (2019). “High reliability management and control operator risks in autonomous marine systems and operations.” Ocean Engineering 171(1): 399-416.

Earlier blog entries: “Recasting the policy narrative of labor-substituting technological change” and “Lifecycle modeling of species”

Siding with the wall

On one side. I read a lot because I’d like to think the answer is out there, ready to be stumbled over, because someone smarter has seen it already. More than that, when found, I’d realize that Piece-of-Truth had been right in front of me–the writing on the wall.

On the other. Why does sustained analysis often deepen, rather than dispel, complexity? Answer: It’s less the “analysis” than the “sustained” we call explication. This drive to explicate—to explain so as to explain more and then even more—has been criticized by the wildly different Peter Sloterdijk, philosopher, and Shirley Hazzard, novelist. Further, the more we explicate, the more we feel compelled to name the now more complicated. Surely, the brain must be hard-wired for all this.

Which side? A while back I culled old journal issues that I’d been saving. Partly to see what I had commented on then by way of marginalia, but also to see if what I had read pointed to what I think now. My scribbles were unreadable.

The policy analysis toolkit and complexity

The graduate programs in policy analysis with which I was familiar had their master’s degree grounded in a core curriculum, with courses in: the politics of public policy; use and role of microeconomics in policy analysis; research methods including statistics; and course work on implementation, public management, or the law, among others. Call this, the toolkit (I know, I know, rebarbative to some). In practice, the core curricula varied and could also cover public finance, ethics, program evaluation, qualitative methods, and GIS, to name a few.

–At no point in my graduate training or career do I remember being told that a policy problem not amenable to the toolkit was intractable. The toolkit had space for new methods and approaches. Narrative analyses and triangulation via different methods and analytics were there as well in my practice.

The toolkit got smaller, however. Perhaps public policy analysis was not as interdisciplinary as professed at the get-go. Certainly, econo-speak and p-values took over the pages of house journals like the Journal of Policy Analysis and Management (with an impact factor rounding off to, what is it now, 4.714).

–In another sense, the toolkit was never really interdisciplinary enough to attract decisionmaker attention. Actual policy analyses might as well be the proverbial message in a bottle tossed out onto turbulent seas in the hope that someone, someday, sooner preferably than later but with no guarantees ever, grabs the bottle and treats its message seriously. This, however, is not for want of having tried to get their attention.

Over my policy analysis and management career, I witnessed the 20-page policy brief reduced to the five-page memo, then to the fifteen-minute PowerPoint presentation, then to the three-minute elevator speech, then to the tweet. Along the way came more and more graphics, and then memes, as in a picture is worth a thousand words. This, they said, was keeping it simple. What next on the syllabus: Telepathy? “The knowing look” in 10 seconds or less?