The figure shows that the optimal point corresponds to the intersection where the marginal cost of environmental adaptation equals the marginal cost of environmental damages:
MC_adaptation = MC_damages
meaning that further investment would no longer provide a net benefit.
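A minimal numeric sketch of that optimum, assuming purely illustrative linear marginal cost curves (nothing below comes from the figure itself):

```python
# Hedged sketch: find the adaptation effort x* at which MC_adaptation(x) =
# MC_damages(x). The two linear curves are assumptions for illustration,
# not the curves in the figure.

def mc_adaptation(x: float) -> float:
    """Marginal cost of adaptation; assumed to rise with effort x."""
    return 2.0 * x

def mc_damages(x: float) -> float:
    """Marginal cost of environmental damages; assumed to fall as adaptation rises."""
    return 10.0 - 3.0 * x

# Bisection on the monotone gap mc_adaptation(x) - mc_damages(x).
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if mc_adaptation(mid) < mc_damages(mid):
        lo = mid  # adaptation still pays: marginal damages exceed its marginal cost
    else:
        hi = mid  # past the optimum: further investment adds no net benefit
print(f"optimal adaptation effort x* = {lo:.3f}")  # 2.0 here, since 2x = 10 - 3x
```

Past x*, each extra unit of adaptation costs more than the damages it avoids, which is the figure's point.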
The apparent meaning [of an event] conceals another [event], as in that famous French warning on level crossings, “Un train peut en cacher un autre” [One train can hide another]. Isaac, about to be sacrificed by his father, comes to stand for Jesus, who died to save humankind. . . .The figure forms in the present moment of the text under one’s eyes, but the latent meanings emerge both in the past and look forward to the future: the narrowly averted sacrifice of Isaac prefigures the death of Jesus, for example. “How much more fulfilling is the new idea that pre-Christian times can be read as a shadowy figure of what actually was to come?” writes [the literary critic, Edward] Said. . .
At the actual train crossing, the implications of Warner’s analogy complicate the point she is making. What is invisible but then emerges, its own other rushing train, is disaster. Whatever else taking Isaac to prefigure Jesus is, it reveals real and present danger.
To see how and with what effect, turn to the commonplace: if manifest risk is where the probability of disaster (Pf) and the consequences of failure (Cf) are known or estimated, “latent risk” is where uncertainty over Pf or over Cf remains. Strictly speaking, then, the challenge of trying to manage latency ahead of time is one of uncertainty management, not risk management as formally understood.
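A small sketch of that distinction, with names and numbers of my own assumption (the text supplies only Pf and Cf):

```python
# Hedged sketch of manifest vs. latent risk as defined above.
# Manifest risk: Pf (probability of disaster) and Cf (consequences of failure)
# are known or estimated, so an expected loss Pf * Cf is computable.
# Latent "risk": uncertainty over Pf or Cf, making it a problem of uncertainty
# management rather than risk management formally understood.
from typing import Optional

def classify(pf: Optional[float], cf: Optional[float]) -> str:
    if pf is not None and cf is not None:
        return f"manifest risk: expected loss = {pf * cf:,.0f}"
    return "latent: uncertainty over Pf and/or Cf -> manage uncertainty, not risk"

print(classify(0.01, 5_000_000))  # both estimated: ordinary risk management
print(classify(None, 5_000_000))  # Pf unknown: latency, in the section's terms
```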
So what? For one thing, when it comes to a major disaster, pre-existing latencies are joined at the hip with subsequent improvisations. Latent uncertainties unmanaged beforehand, particularly those that are invisible, necessitate improvisations in immediate emergency response afterwards.
You jump the gun by approaching the train tracks too early as the carriages disappear rapidly in front of you; how you survive, if at all, depends on pulling whatever good mess you can from the bad mess in which you find yourself. Surviving means improvising without any guarantees in riding out that uncertainty, whatever the lessons to be learned afterwards for “better risk management” at rail crossings.
Again, so what? “How much more fulfilling is the new idea that those old dangers and iffy responses can be read as a shadowy figure of the coming lessons learned for risk management.” Or to bring the point back and thereby complicate Warner’s point: Yes, it is easy in hindsight to see how Isaac could be said to prefigure Jesus, though the connection being made between Isaac and Jesus is more improvisational given the high stakes involved than one might first want to admit.
Not false (apophenic) or even contingent as much as readymade for the immediate moment at hand.
–I attended a presentation on an ecosystem restoration project in Montana. One project leader described what was for him the key contribution: The approach gave him a way to integrate the small and large scale.
Not only could the project leader stand in his woodlot and see how it fit into the larger scale of the ecosystem and landscape; he was also able to plan at the smaller scale for the longer term. He was able to aspire to a future for his forested lot.
–So too for the ecosystem manager standing at the ridge overlooking the valley. She looked down and was able to plan at the larger scale for the shorter term. She could now see what the next steps ahead were when it came to managing the larger ecosystem.
“Think globally, act locally” had become “Think long term from the small scale and in real time for the large scale.”
The first part of Frederic Rzewski’s (pronounced JEV-skee) Coming Together is based on text from a letter of Sam Melville, anti-war protester and convicted bomber, who was incarcerated at Attica. He was shot and killed in the 1971 Attica prison uprising.
Part Two, ‘Attica,’ uses the reply of another uprising leader, Richard X. Clark, just after being released. Asked how it felt leaving Attica behind, he said: “Attica is in front of me.” For me, Rzewski draws hope to and from those words.
Principal source
Grégory Salle (2018). “‘Escaping from Hell Is a Right!’: The Case of France’s ‘Q.H.S.’ (1975–1982).” Chapter 7 in: Prison Breaks: Toward a Sociology of Escape (eds. T.M. Martin and G. Chantraine), Springer eBook (https://link.springer.com/book/10.1007/978-3-319-64358-8).
First, there is the predictably unimaginable that comes with the new. Think here of “violent crime” as a legal category in the US that didn’t exist prior to the 1970s. “Speaking of ‘political prisoners’ had become such a major political criticism that it was no longer possible to imagine it as a legal category,” concludes another writer. That new categories and conventions lie ahead that we don’t now imagine is quite predictable.
Second, that there are analogies to redescribe current policy problems is also predictable. The Green New Deal has most often been likened to Roosevelt’s New Deal. It’s also been likened to the Civil Rights Movement, 19th century abolitionism, and the war economy of the Bolshevik Revolution. There should be no doubt that the climate emergency has been or will be compared to many other events you and I won’t imagine until that comparison is made.
Third, earthquakes with unimaginable impacts are predicted all the time. That in fact is the genre convention. It’s no different than predicting that experience after my death will be the same as experience before my being conceived.
Thinking differently about implementation scenarios
The authors of a fine report concluded that significant gaps exist between what was proposed in the EU AI Act (concerning artificial intelligence) and the then existing EU digital legislation (formally “the digital acquis”):
We identify eight key areas where challenges may emerge, and make the following policy recommendations: . . .
In my view, the first question we ask is not, “Who’s going to adopt the recommendations and, if so, with what modifications?” but rather: “Who would implement the finalized recommendations and what are implementors’ scenarios for failing to do so?” This acknowledges the longstanding role of implementation as de facto policymaking.
Thinking differently about pre-disaster mitigations
Retrofitting a bridge pre-disaster isn’t a chancy wager on what might or might not happen to the bridge. Retrofitting is managing latent interconnectivities between bridges and other infrastructures that become manifest during and immediately after the disaster. That inter-infrastructural connections will shift and these shifts will involve bridges is far more predictable than this or that bridge will fail, unless retrofitted.
This means attention to the track record in retrofitting bridges, before and after disasters, here and elsewhere, is crucial. Note the implication: retrofitting has to occur in order to have a track record to monitor and learn from.
Since there are real material and cognitive limits on controlling inter-infrastructural connectivity at any point in time, doing more by way of managing the pre-disaster latency of interconnectivities is elemental. Bringing water systems up to power’s better practices is a way of managing latent interconnectivity in advance of disaster, one interviewee told us.
Thinking differently about risk management for crises
I
What to make of the debacle of Sam Bankman-Fried’s cryptocurrency firm?
“I wasn’t even trying, like, I wasn’t spending any time or effort trying to manage risk on FTX,” Mr. Bankman-Fried said. A co-head of digital asset trading at Citigroup echoed this about FTX: “The thing that I picked up on immediately that was causing us heartburn was the complete lack of a risk-management framework that they could articulate in any meaningful way.”
Before, it was the wrong framework for managing risks; now the problem is having no framework at all. But how could FTX not have had risk managers, albeit informal ones of sorts?
II
In answer, let’s recast the issue. Risk and risk managers were around long before risk management frameworks and registries were formalized. Think of Christians being around from the time of Jesus to the formalizing of the Scriptures in the 4th century AD at the Council of Nicaea. How did Christians operate in the 300 years between? Can we think of Bankman-Fried and his FTX colleagues (and other cult-entrepreneurs) in the same way as these early Christians?
No wonder the guardians of current frameworks might want to convince us the FTX debacle has nothing to offer by way of lessons learned.
Thinking differently about crisis leadership
The literature on crisis leadership is largely top down (leaders direct) or bottom up (self-organizing crisis response), where networks are said to be vertical (hierarchical) or horizontal (laterally interacting).
We add a third category: control rooms, and not just in terms of Incident Command Centers during the emergency but already-existing infrastructure control rooms whose staff continue to operate during the emergency.
Paul Schulman and I argue that control rooms are a unique organizational formation meriting society’s protection, even during (especially during) continued turbulence. They have evolved to take hard systemwide decisions under difficult conditions that require a decision, now. Adding this third category is to insist on real-time large-system management as the prevention of major failures, and thus of crises that would have happened had control room managers, operators and support staff not prevented them.
Thinking differently about predictions
As I remember the to-ing and fro-ing over the introduction of Bt cotton in India, saving on insecticides was the putative plus and runaway GM crops the putative negative. I know nothing about the subsequent record but suspect that actual findings must have been differentiated, as any such findings are, by region and other demographics.
All this came back to me when I read the following passage describing a conference paper on Bt cotton:
Ambarish Karamchedu presented on Dried up Bt cotton narratives: climate, debt and distressed livelihoods in semi-arid smallholder India. Proponents of this ‘technical fix’ position GMO crops as a triple win. India has semi-arid and arid areas where rural poverty is concentrated, with an intense monsoon season (3-4 months), making farming a challenge. BT cotton introduced around 1995, thrives here. India is the biggest cotton cultivator and Bt cotton is grown by 7 million smallholder farmers, 66 percent in semi-arid areas with poor soils and low rainfall prone to monsoon. In Telangana, 65% of farmers across all classes produce BT cotton, with good harvests for 5 years, after which they decline. Failure of farmers who face increased input prices have to resort to non-farm incomes. The triple win technological fix narrative perpetuates and exacerbates the problems it seeks to solve, and benefits farmer institutions rather than enriching farmer knowledge and practice.
It’s that “with good harvests for 5 years, after which they decline” that grabbed my attention. Did anyone predict that for Bt cotton?
This matters, because in the absence of any such prediction, why not also conclude: “Well, five years is five years more than the critics expected?”
Thinking differently about luck in infrastructure crises
Ensuring systemwide service reliability has always involved a measure of luck in major critical infrastructures. Control room operators will tell you this. At its most abstract, good luck can be defined as the non-occurrence of system failure in the absence of exercising failure avoidance options, while bad luck is the occurrence of failure in the presence of exercising those options.
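That abstract definition can be restated as a toy truth table; the framing below is my own sketch, not the operators’ or anyone’s formalism:

```python
# Hedged sketch of the luck definition above: good luck is non-failure without
# exercising failure-avoidance options; bad luck is failure despite exercising them.

def luck(failure_occurred: bool, options_exercised: bool) -> str:
    if not failure_occurred and not options_exercised:
        return "good luck: no failure despite no avoidance options exercised"
    if failure_occurred and options_exercised:
        return "bad luck: failure despite exercising avoidance options"
    return "outcome tracks actions: not luck in this abstract sense"

for failure in (False, True):
    for exercised in (False, True):
        print(f"failure={failure!s:<5} exercised={exercised!s:<5} -> {luck(failure, exercised)}")
```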
But luck also favors the well-prepared, and well-prepared operators make a difference. Consider how a senior operations engineer for a transmission grid described a close call to us:
. . . We nearly caused a voltage collapse all over the western grid. Everything was going up and down, we were trying to get power from all the nuclear units in the western grid. Life flashed before our eyes. And then the gen dispatcher did intuitively the right thing. He said, Shut one pump down. How he figured that, I still don’t understand. It was something we had never seen before. We had no procedures. . .We went back and looked at it, and the planner said, Oh yeah, you should never have been running three pumps, and we said, Where did you get that from? So we started writing new procedures.
When talent meets opportunity, the value added by professionals is stopping close calls and near misses from tripping into system failures. That there can be no guarantees makes it luck.
1. From a high reliability management perspective, regulation for safety in large socio-technical systems is dispersed. The regulation of critical infrastructures for system safety is not just what the regulators do; it is also what the infrastructures do in ways that their regulator of record can’t do on its own. Those who have the real-time information must fulfill regulatory functions with respect to system safety that the official regulator is not able to fulfill.
2. The dispersed functions of regulations for system safety put a premium on understanding the real-time practices of control room operators and field staff in these large systems. Safety, if it is anything, is found in practices-as-undertaken, i.e., “it’s operating safely.” This means safety is best understood more as an adverb than as a noun. You can no more make safety than you can make fish from fish soup.
3. It makes little sense then for critics to conclude that regulators are failing because formal regulations are not being complied with, if the infrastructures are managing in a highly reliable fashion and would not be doing so if they followed those regulations to the letter. In practical terms, this means there is not just the risk of regulatory non-compliance by the infrastructure, there is also the infrastructure’s risk of compliance with incomplete regulations.
4. Another way to put such examples is that, when it comes to managing safely, there is a major difference between error avoidance and risk management. Not taking advantage of opportunities to improvise and communicate laterally is a known error to avoid in immediate emergency response. Unlike risks to be managed more or less, these errors are to be avoided categorically, yes or no. What is most important about error avoidance is not missing those real opportunities that shouldn’t or can’t be missed, where the logic, clarity and urgency of “this is or is not responding safely” are evident.
5. If points 1 – 4 hold, the challenge then is to better understand the institutional niche of critical infrastructures, that is, how infrastructures themselves function in allocating, distributing, regulating and stabilizing system safety (and reliability) apart from the respective government regulators of record.
6. With that in mind, turn now to the relationship between system risk and system safety, specifically: regulating risk in order to ensure system safety. For some, the relationship is explicit, e.g., increasing safety barriers reduces risk of component or system failure.
In contrast, I come from a field, policy analysis and management, that assumes safety and risk are to be treated differently, unless otherwise shown in the case at hand. Indeed, one of the founders of my profession (Aaron Wildavsky) made a special point to distinguish the two. The reasons are many for not assuming that “reduce risks and you increase safety” or “increase safety and you reduce risks.” In particular:
However it is estimated, risk is generally about a specified harm and its likelihood of occurrence. But safety is increasingly recognized, as it was by an international group of aviation regulators, to be about “more than the absence of risk; it requires specific systemic enablers of safety to be maintained at all times to cope with the known risks, [and] to be well prepared to cope with those risks that are not yet known.” . . . In this sense, risk analysis and risk mitigation do not actually define safety, and even the best and most modern efforts at risk assessment and risk management cannot deliver safety on their own. Psychologically and politically, risk and safety are also different concepts, and this distinction is important to regulatory agencies and the publics they serve. . . . Risk is about loss while safety is about assurance. These are two different states of mind.
Danner and Schulman, 2019
7. So what?
That informed people continue to stay in earthquake zones and sail in stormy seas even when they could move away from both tells you something about their preferences for system safety, let alone personal safety. For it is often a choice between safety with respect to the known unknowns of where they live and work and safety with respect to the unknown unknowns of “getting away.” Unknowns, not risks.
Let’s shift gears to a different example and extension.
On March 26, 2024, about 0129 eastern daylight time, the 984-foot-long Singapore-flagged cargo vessel (containership) Dali was transiting out of Baltimore Harbor in Baltimore, Maryland, when it experienced a loss of electrical power and propulsion and struck Pier 17, the southern pier that supported the central span of the continuous through-truss of the Francis Scott Key Bridge. A portion of the bridge subsequently collapsed into the river, and portions of the pier, deck, and truss spans collapsed onto the vessel’s forward deck (see figure 1). . .
. . . The Key Bridge’s pier protection was struck in 1980 when the 390-foot-long Japan-flagged containership Blue Nagoya, which had a displacement or weight about one-tenth that of the Dali, collided with Pier 17 following a loss of steering about 600 yards from the bridge; see figure 4 for a size comparison of the Blue Nagoya to the Dali. . .
Even I was taken aback by the two figures, and I study this stuff! Just look at the differences in containership sizes and you’d think even more disasters must lie in wait wherever such infrastructures have not grown in size and scope relative to the demands placed on them.
Now, of course, there are those who would blame my perceptions on all those distorting cognitive biases–anchoring, salience, selection–as if they were trained incapacities on my part. But people, we’ve learned to worry about problems where the physical capacity of infrastructures does not grow with their physical demand!
Even though true, that point doesn’t go far enough. The more important point is the empirical insight from the high reliability literature: A complex sociotechnical system is reliable only until its next failure. That is, we need to know more about how the current system is managed in real time beyond its technology and design in order to avoid failures. How managing safely means more than regulating for safety.
This difference may well be more than “What are to me very, very narrow safety buffers are to them manageable ones.” I haven’t studied harbor pilots, but the infrastructure operators we studied distinguish between uncertainties to be managed and unstudied conditions in which not to operate. Operators we’ve talked with call this their “comfort zone,” though as one control room supervisor hastened to add, “I’m always uncomfortable.” How so is illustrated in the following table:
High reliability management professionals we study practice vigilance to stay out of the red area below the diagonal and stay within the area above it—a stylized version of their comfort zone. To maintain this level of comfort they tolerate some uncertainty about outcomes (Cf) matched by having high confidence in some probabilities (Pf). They also tolerate some uncertainty about probabilities by having higher confidence that consequences are limited. Management within these uncertainties is in either case supported by team situation awareness in the control room. In other words, the professionals seek to avoid unknown unknowns by extending but limiting their management to known unknowns—uncertainties with respect to outcomes and probabilities they can tolerate as part of their comfort zone as managers of operational risks and safety (Roe and Schulman 2018).
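For concreteness only, a sketch of that comfort zone under my own assumed 0-to-1 uncertainty scales, with an assumed “uncertainty budget” standing in for the diagonal (this does not reproduce the actual table):

```python
# Hedged sketch: operators tolerate some uncertainty about consequences (Cf)
# when confidence in probabilities (Pf) is high, and vice versa. The budget
# below is an assumption standing in for the table's diagonal.

def in_comfort_zone(u_pf: float, u_cf: float, budget: float = 1.0) -> bool:
    """u_pf, u_cf: uncertainty from 0 (well estimated) to 1 (unknown unknown).
    Inside the zone while combined uncertainty stays within the assumed budget."""
    return u_pf + u_cf <= budget

print(in_comfort_zone(0.2, 0.6))  # True: tolerable known unknowns
print(in_comfort_zone(0.7, 0.7))  # False: drifting toward unknown unknowns
```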
This is a very important distinction for managing safely in other critical infrastructures. Are there such reliability professionals when it comes, say, to “AI safety” (more formally, when it comes to the adverbial properties of performing safely or not)?
Complex sociotechnical systems have by definition complex technical cores, about which real-time operators do not have full and complete causal knowledge. So too by extension opaque AI algorithms are a concern, but not a new concern. Unstudied and unstudiable conditions have always been an issue under mandates for the safe and continuous provision of a critical infrastructure’s service in the face of various and variable task environments. The key issue then is: What uncertainties with respect to probabilities and consequences are they managing for when it comes to “AI safety” so as to avoid operating (acting, performing) ignorantly? Or more formally, when does avoiding error in real time require more than regulating for the management of risks?
Sources
Danner, C., and P. Schulman (2019). “Rethinking risk assessment for public utility safety regulation.” Risk Analysis 39(5): 1044–1059.
Roe, E., and P.R. Schulman (2008). High Reliability Management. Stanford, CA: Stanford University Press.
Roe, E., and P.R. Schulman (2018). “A reliability & risk framework for the assessment and management of system risks in critical infrastructures with central control rooms.” Safety Science 110 (Part C): 80–88.
To my knowledge, the philosophers Gilbert Ryle and Michael Oakeshott are among those who first discussed the importance of recasting “thinking” and “behavior” in terms of adverbs rather than as common nouns.
The paragraph I’ve just read is immediately bookended by two quotes:
Just before: “Therefore, rather than being schools of democracy, ACs [associative councils] may be spaces where associative and political elites interact and, therefore, just reproduce existing political inequities (Navarro, 2000). Furthermore, these institutions may have limited impact in growing and diversifying the body of citizens making contributions to public debate (Fraser, 1990).”
Just after: “The professionalised model results from a complex combination of inequalities in associationism and a specific type of participation labour. Analysing the qualitative interviews, regulations and documents was fundamental to understanding the underlying logic of selecting professionals as the main components.”
Now try to guess the gist of the paragraph in between. More of the same? Well, no. Six paragraphs from the article’s end, an “and-yet” emerges that had been there from the beginning:
Nevertheless, an alternative interpretation of professionalisation should be considered. The fact that ACs perform so poorly in inclusiveness does not mean that they are not valuable for other purposes, such as voicing a plurality of interests in policymaking (Cohen, 2009). In this respect, participants can act as representatives of associations that, in many cases, promote the needs of oppressed and exploited groups (De Graaf et al., 2015; Wampler, 2007). Suffice it to say, for example, that labour unions or migrants’ associations frequently send lawyers or social workers to ACs to defend their needs and positions. Problems with inclusion should not take away from other purposes, that is, struggles to introduce critical issues and redistribution demands to the state agenda. Other studies have already shown that groups make strategic decisions to achieve better negotiation outcomes in the context of technical debates (Grillos, 2022). Thus, the choice of selecting professionals can be a strategy to improve the capacity of pressure in institutional spaces dominated by experts. (my bold; accessed online at https://journals.sagepub.com/doi/10.1177/00323217251319065)
Methodological upshot: What the counterfactual is to economic analysis, the and-yet counternarratives are to policy analysis. What would have happened in the former is contrasted to what else is happening during the latter.
I’m first asking you to look and listen to one of my favorites, a short video clip of Anna Caterina Antonacci and Andreas Scholl singing the duet, “I embrace you,” from a Handel opera:
Antonacci’s visual performance resonates with the final scene in Sunset Boulevard, where Gloria Swanson, as actress Norma Desmond, walks down the staircase toward the camera. But intertextuality–that two-way semi-permeability between genres–also works the other way around. Antonacci brings the opera diva into Swanson’s actress, and to hell with anachronism and over-the-top.
II
Now bring semi-permeable intertextuality closer to public policy and management. Zakia Salime (2022) provides a rich case study of refusal and resistance by Moroccan villagers to nearby silver mining–in her case, parsed through the lens of what she calls a counter-archive:
Oral poetry (timnadin), short films, petitions, letters and photographs of detainees disrupted the official story of mining ‘as development’ in state officials’ accounts, with a collection of rebellious activities that exposed the devastation of chemical waste, the diversion of underground water, and the resulting dry collective landholdings. Audio-visual material and documents are still available on the movement’s Moroccan Facebook page, on YouTube and circulating on social media platforms. The [village] water protectors performed refusal and produced it as a living record that assembled bodies, poetic testimonials, objects and documents
What, though, when the status quo is itself a counter-archive? Think of all the negative tweets, billions of them. Think of all the negative comments on politics, dollars and jerks in the Wall Street Journal or Washington Post. That is, think of these status quo repositories as a counter-archive of “status-quo critique and dissent.”
III
So what?
A genre notion of the status quo as counter-archive raises an interesting possibility: a new kind of long-run that is temporally long because it is presently intertextual, indefinitely forward and back and across different genres.
For example, if the climate emergency is violence and the Big Polluters are culprits, then violent resistance against them is a form of violence reduction if the resistance succeeds. This means the “violence” and the “resistance” are difficult to evaluate, let alone predict, because the long-run over which they are to take place is itself a current but unavoidably changing intertext.
The long-run seeps across counter-archives as much as archives. As in: “the varieties of revolution do not know the secrets of the futures, but proceed as the varieties of capitalism do, exploiting every opening that presents itself”–to paraphrase political philosopher, Georges Sorel.