Think long term from the small scale and in real time for the large scale

–I attended a presentation on an ecosystem restoration project in Montana. One project leader described what was for him the key contribution: The approach gave him a way to integrate the small and large scale.

Not only could the project leader stand in his woodlot and see how it fit in with the larger scale of the ecosystem and landscape; he was also able to plan at the smaller scale for the longer term. He was able to aspire to a future for his forested lot.

–So too for the ecosystem manager standing at the ridge overlooking the valley. She looked down and was able to plan at the larger scale for the shorter term. She could now see what the next steps ahead were when it came to managing the larger ecosystem.

“Think globally, act locally” had become “Think long term from the small scale and in real time for the large scale.”

Escaping from Hell Is a Right!

(You need some 30 minutes for this entry.)

The first part of Frederic Rzewski’s (JEV-skee) Coming Together sets to music text from a letter by Sam Melville, an anti-war protester and convicted bomber incarcerated at Attica. He was shot and killed in the 1971 Attica prison uprising.

(You may have to pull the play bar back fully to the left; no proprietary claim is made to this link or material.)

Part Two, ‘Attica,’ uses the reply of another uprising leader, Richard X. Clark, just after his release. Asked how it felt leaving Attica behind, he said: “Attica is in front of me.” For me, Rzewski draws hope to and from those words.

Principal source

Grégory Salle (2018). “‘Escaping from Hell Is a Right!’: The Case of France’s ‘Q.H.S.’ (1975–1982).” Chapter 7 in: Prison Breaks: Toward a Sociology of Escape (eds. T.M. Martin and G. Chantraine), Springer eBook (https://link.springer.com/book/10.1007/978-3-319-64358-8).

Rethinking crisis scenarios and response

Thinking differently about the unimaginable

First, there is the predictably unimaginable that comes with the new. Think here of “violent crime” as a legal category in the US that didn’t exist prior to the 1970s. “Speaking of ‘political prisoners’ had become such a major political criticism that it was no longer possible to imagine it as a legal category,” concludes another scholar. That new categories and conventions are ahead that we don’t now imagine is quite predictable.

Second, that there are analogies to redescribe current policy problems is also predictable. The Green New Deal has most often been likened to Roosevelt’s New Deal. It’s also been likened to the Civil Rights Movement, 19th century abolitionism, and the war economy of the Bolshevik Revolution. There should be no doubt that the climate emergency has been or will be compared to many other events you and I won’t imagine until that comparison is made.

Third, earthquakes with unimaginable impacts are predicted all the time. That in fact is the genre convention. It’s no different than predicting that experience after my death will be the same as experience before my being conceived.

Thinking differently about implementation scenarios

The authors of a fine report concluded that significant gaps exist between what was proposed in the EU AI Act (concerning artificial intelligence) and the then-existing EU digital legislation (formally “the digital acquis”):

We identify eight key areas where challenges may emerge, and make the following policy recommendations: . . .

(https://www.ceps.eu/ceps-publications/the-ai-act-and-emerging-eu-digital-acquis/)

In my view, the first question to ask is not, “Who’s going to adopt the recommendations and, if adopted, with what modifications?” but rather: “Who would implement the finalized recommendations and what are implementers’ scenarios for failing to do so?” This acknowledges the longstanding role of implementation as de facto policymaking.

Thinking differently about pre-disaster mitigations

Retrofitting a bridge pre-disaster isn’t a chancy wager on what might or might not happen to the bridge. Retrofitting is managing latent interconnectivities between bridges and other infrastructures that become manifest during and immediately after the disaster. That inter-infrastructural connections will shift and that these shifts will involve bridges is far more predictable than that this or that bridge will fail unless retrofitted.
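To picture “latent interconnectivity” concretely, here is a toy sketch of my own (the bridges, utilities and links are hypothetical, invented purely for illustration):

```python
# Toy sketch of latent vs. manifest inter-infrastructural links (all names
# hypothetical). Retrofitting is modeled as managing the latent links a
# disaster would render manifest, not as a wager on any one bridge failing.

latent_links = {
    "bridge_A": {"water_main", "fiber_conduit"},  # utilities carried on or near the span
    "bridge_B": {"power_feeder"},
}

def manifest_after_disaster(failed_bridges: set) -> set:
    """Latent links become manifest disruptions once a bridge fails."""
    disrupted = set()
    for bridge in failed_bridges:
        disrupted |= latent_links.get(bridge, set())
    return disrupted

# Which bridge fails is the chancy part; that some failure would manifest
# links like these is the far more predictable part:
print(manifest_after_disaster({"bridge_A"}))  # {'water_main', 'fiber_conduit'} (order may vary)
```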

This means attention to the track record in retrofitting bridges before and after disasters, here and elsewhere, is crucial. Note the implication: Retrofitting has to occur in order to have a track record to monitor and learn from.

Since there are real material and cognitive limits on controlling inter-infrastructural connectivity at any point in time, doing more by way of managing the pre-disaster latency of interconnectivities is elemental. Bringing water systems up to power’s better practices is a way of managing latent interconnectivity in advance of disaster, one interviewee told us.

Thinking differently about risk management for crises

I

What to make of the debacle of Sam Bankman-Fried’s cryptocurrency firm?

“I wasn’t even trying, like, I wasn’t spending any time or effort trying to manage risk on FTX,” Mr. Bankman-Fried said. Echoed a co-head of digital asset trading in Citigroup about FTX, “The thing that I picked up on immediately that was causing us heartburn was the complete lack of a risk-management framework that they could articulate in any meaningful way.”

Before, it was the wrong framework for managing risks; now the problem is having no framework at all. But how could FTX not have had risk managers, albeit of sorts and not formalized?

II

In answer, let’s recast the issue. Risk and risk managers were around long before risk management frameworks and registries had been formalized. Think of Christians being around from the time of Jesus to the time doctrine was formalized in the 4th century AD at the Council of Nicaea. How did Christians operate in the 300 years between? Can we think of Bankman-Fried and his FTX colleagues (and other cult-entrepreneurs) in the same way as these early Christians?

No wonder the guardians of current frameworks might want to convince us the FTX debacle has nothing to offer by way of lessons learned.

Thinking differently about crisis leadership

The literature on crisis leadership is largely top down (leaders direct) or bottom up (self-organizing crisis response), where networks are said to be vertical (hierarchical) or horizontal (laterally interacting).

We add a third category: control rooms, and not just in terms of Incident Command Centers during the emergency but also already-existing infrastructure control rooms whose staff continue to operate during the emergency.

Paul Schulman and I argue control rooms are a unique organizational formation meriting societal protection, even during (especially during) continued turbulence. They have evolved to take hard systemwide decisions under difficult conditions that require a decision, now. Adding this third category is to insist on real-time large-system management as the prevention of major failures and thus of crises that would have happened had not control room managers, operators and support staff prevented them.

Thinking differently about predictions

As I remember the to-ing and fro-ing over the introduction of Bt cotton in India, saving on insecticides was the putative plus and runaway GM crops the putative negative. I know nothing about the subsequent record but suspect that actual findings must have been differentiated, as any such findings are, by region and other demographics.

All this came back to me when I read the following passage describing a conference paper on Bt cotton:

Ambarish Karamchedu presented on Dried up Bt cotton narratives: climate, debt and distressed livelihoods in semi-arid smallholder India. Proponents of this ‘technical fix’ position GMO crops as a triple win. India has semi-arid and arid areas where rural poverty is concentrated, with an intense monsoon season (3-4 months), making farming a challenge. BT cotton introduced around 1995, thrives here. India is the biggest cotton cultivator and Bt cotton is grown by 7 million smallholder farmers, 66 percent in semi-arid areas with poor soils and low rainfall prone to monsoon. In Telangana, 65% of farmers across all classes produce BT cotton, with good harvests for 5 years, after which they decline. Failure of farmers who face increased input prices have to resort to non-farm incomes. The triple win technological fix narrative perpetuates and exacerbates the problems it seeks to solve, and benefits farmer institutions rather than enriching farmer knowledge and practice.

https://drive.google.com/drive/folders/1VfvjJlxB9VPKQj55dNbZ_VH6oPi2IEVd

It’s that “with good harvests for 5 years, after which they decline” that grabbed my attention. Did anyone predict that for Bt cotton?

This matters, because in the absence of any such prediction, why not also conclude: “Well, five years is five years more than the critics expected?”

Thinking differently about luck in infrastructure crises

Ensuring systemwide service reliability has always involved a measure of luck in major critical infrastructures. Control room operators will tell you this. At its most abstract, good luck can be defined as the non-occurrence of system failure in the absence of exercising failure avoidance options, while bad luck is the occurrence of failure in the presence of exercising those options.
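At the risk of over-formalizing, the two definitions can be sketched as a small truth table (a sketch of my own; the labels on the other two cells are my additions, not the operators’):

```python
# A stylized truth table for the definitions of luck above (my own
# illustrative formalization, not an operational tool).
# "avoided": failure-avoidance options were exercised.
# "failed": system failure occurred.

def classify(avoided: bool, failed: bool) -> str:
    if not avoided and not failed:
        return "good luck"            # no avoidance exercised, yet no failure
    if avoided and failed:
        return "bad luck"             # avoidance exercised, yet failure occurred
    if avoided and not failed:
        return "managed reliability"  # my label: skill, with luck's help
    return "unmanaged failure"        # my label: neither luck nor management

for avoided in (False, True):
    for failed in (False, True):
        print(f"avoided={avoided!s:5}  failed={failed!s:5}  ->  {classify(avoided, failed)}")
```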

But luck also favors the well-prepared, and well-prepared operators make a difference. Consider how a senior operations engineer for a transmission grid described a close call to us:

. . . We nearly caused a voltage collapse all over the western grid. Everything was going up and down, we were trying to get power from all the nuclear units in the western grid. Life flashed before our eyes. And then the gen dispatcher did intuitively the right thing. He said, Shut one pump down. How he figured that, I still don’t understand. It was something we had never seen before. We had no procedures. . .We went back and looked at it, and the planner said, Oh yeah, you should never have been running three pumps, and we said, Where did you get that from? So we started writing new procedures.

When talent meets opportunity, the value added by professionals is stopping close calls and near misses from tripping into system failures. That there can be no guarantees makes it luck.

When regulation renders the noun, safety, into the adverb, unsafely


1. From a high reliability management perspective, regulation for safety in large socio-technical systems is dispersed. The regulation of critical infrastructures for system safety is not just what the regulators do; it is also what the infrastructures do in ways that their regulator of record can’t do on its own. Those who have the real-time information must fulfill regulatory functions with respect to system safety that the official regulator is not able to fulfill.

2. The dispersed functions of regulation for system safety put a premium on understanding the real-time practices of control room operators and field staff in these large systems. Safety, if it is anything, is found in practices-as-undertaken, i.e., “it’s operating safely.” This means safety is best understood more as an adverb than as a noun. You can no more make safety than you can make fish from fish soup.

3. It makes little sense then for critics to conclude that regulators are failing because formal regulations are not being complied with, if the infrastructures are managing in a highly reliable fashion and would not be doing so if they followed those regulations to the letter. In practical terms, this means there is not just the risk of regulatory non-compliance by the infrastructure, there is also the infrastructure’s risk of compliance with incomplete regulations.
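Point 3’s two risks can be laid out as a stylized two-by-two (a sketch of mine; the cell labels are illustrative, not drawn from any regulator or infrastructure):

```python
# Point 3 as a stylized two-by-two (illustrative only; labels are mine).
# One axis: is the formal regulation adequate to current task conditions?
# Other axis: does the infrastructure comply with it to the letter?

outcomes = {
    ("adequate",   "comply"):     "reliable and compliant",
    ("adequate",   "not comply"): "risk of regulatory non-compliance",
    ("incomplete", "comply"):     "risk of compliance with incomplete regulations",
    ("incomplete", "not comply"): "reliable, though formally non-compliant",
}

for (regulation, action), outcome in outcomes.items():
    print(f"regulation={regulation:10}  action={action:10}  ->  {outcome}")
```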

4. Another way to put such examples is that, when it comes to managing safely, there is a major difference between error avoidance and risk management. Not taking advantage of opportunities to improvise and communicate laterally is a known error to avoid in immediate emergency response. Unlike risks to be managed more or less, these errors are to be avoided categorically, yes or no. The most important error to avoid is missing those real opportunities that shouldn’t or can’t be missed, where the logic, clarity and urgency of “this is or is not responding safely” are evident.

5. If points 1–4 hold, the challenge then is to better understand the institutional niche of critical infrastructures, that is, how infrastructures themselves function in allocating, distributing, regulating and stabilizing system safety (and reliability) apart from the respective government regulators of record.

6. With that in mind, turn now to the relationship between system risk and system safety, specifically: regulating risk in order to ensure system safety. For some, the relationship is explicit, e.g., increasing safety barriers reduces risk of component or system failure.

In contrast, I come from a field, policy analysis and management, that assumes safety and risk are to be treated differently, unless otherwise shown in the case at hand. Indeed, one of the founders of my profession (Aaron Wildavsky) made a special point to distinguish the two. The reasons are many for not assuming that “reduce risks and you increase safety” or “increase safety and you reduce risks.” In particular:

However it is estimated, risk is generally about a specified harm and its likelihood of occurrence. But safety is increasingly recognized, as it was by an international group of aviation regulators, to be about “more than the absence of risk; it requires specific systemic enablers of safety to be maintained at all times to cope with the known risks, [and] to be well prepared to cope with those risks that are not yet known.” . . . In this sense, risk analysis and risk mitigation do not actually define safety, and even the best and most modern efforts at risk assessment and risk management cannot deliver safety on their own. Psychologically and politically, risk and safety are also different concepts, and this distinction is important to regulatory agencies and the publics they serve. . . . Risk is about loss while safety is about assurance. These are two different states of mind.

Danner and Schulman, 2019

7. So what?

That informed people continue to stay in earthquake zones and sail in stormy seas even if they can move away from both tells you something about their preferences for system safety, let alone personal safety. For it is often safety with respect to the known unknowns of where they live and work versus safety with respect to unknown-unknowns of “getting away.” Unknowns, not risks.


Let’s shift gears to a different example and extension.

On March 26, 2024, about 0129 eastern daylight time, the 984-foot-long Singapore-flagged cargo vessel (containership) Dali was transiting out of Baltimore Harbor in Baltimore, Maryland, when it experienced a loss of electrical power and propulsion and struck Pier 17, the southern pier that supported the central span of the continuous through-truss of the Francis Scott Key Bridge. A portion of the bridge subsequently collapsed into the river, and portions of the pier, deck, and truss spans collapsed onto the vessel’s forward deck (see figure 1). . .

. . . The Key Bridge’s pier protection was struck in 1980 when the 390-foot-long Japan-flagged containership Blue Nagoya, which had a displacement or weight about one-tenth that of the Dali, collided with Pier 17 following a loss of steering about 600 yards from the bridge; see figure 4 for a size comparison of the Blue Nagoya to the Dali. . .

From the Marine Investigation Report for this accident (accessed online at https://www.ntsb.gov/investigations/AccidentReports/Reports/MIR2510.pdf)

Even I was taken aback by the two figures, and I study this stuff! Just look at the differences in containership sizes and you’d think even more disasters must lie in wait wherever such infrastructures have not grown in size and scope relative to the demands placed on them.

Now, of course, there are those who would blame my perceptions on all those distorting cognitive biases–anchoring, salience, selection–as if they were trained incapacities on my part. But people, we’ve learned to worry about problems where the physical capacity of infrastructures does not grow with the physical demands placed on them!

Even though true, that point doesn’t go far enough. The more important point is the empirical insight from the high reliability literature: A complex sociotechnical system is reliable only until its next failure. That is, we need to know more about how the current system is managed in real time beyond its technology and design in order to avoid failures. How managing safely means more than regulating for safety.

Or in the case of the containership, we need to know, inter alia, how experienced harbor pilots bringing such vessels into port manage them under current conditions (see a pilot’s perspective on the accident at https://theconversation.com/ive-captained-ships-into-tight-ports-like-baltimore-and-this-is-how-captains-like-me-work-with-harbor-pilots-to-avoid-deadly-collisions-226700). I mention harbor pilots because their definitions of a “near miss”–which they’ve experienced–and my definition of a near miss–just look in figure 4 at how close the ship’s antennae come!–vary significantly.

This difference may well be more than “What are to me very, very narrow safety buffers are to them manageable ones.” I haven’t studied harbor pilots, but the infrastructure operators we studied distinguish between uncertainties to be managed and unstudied conditions in which not to operate. Operators we’ve talked with call this their “comfort zone,” though as one control room supervisor hastened to add, “I’m always uncomfortable.” How so is illustrated in the following table:

High reliability management professionals we study practice vigilance to stay out of the red area below the diagonal and stay within the area above it—a stylized version of their comfort zone. To maintain this level of comfort they tolerate some uncertainty about outcomes (Cf) matched by having high confidence in some probabilities (Pf). They also tolerate some uncertainty about probabilities by having higher confidence that consequences are limited. Management within these uncertainties is in either case supported by team situation awareness in the control room. In other words, the professionals seek to avoid unknown unknowns by extending but limiting their management to known unknowns—uncertainties with respect to outcomes and probabilities they can tolerate as part of their comfort zone as managers of operational risks and safety (Roe and Schulman 2018).
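Since the table itself is not reproduced here, a stylized sketch of its diagonal logic may help (the 0–1 scales and the threshold are mine, purely illustrative; the comfort-zone idea is Roe and Schulman’s):

```python
# Stylized sketch of the comfort-zone diagonal described above (scales and
# threshold are mine, for illustration). conf_p: confidence in assessed
# failure probabilities (Pf); conf_c: confidence that failure consequences
# (Cf) are limited. Both on a 0-1 scale.

def zone(conf_p: float, conf_c: float) -> str:
    # Low confidence on one axis can be offset by high confidence on the
    # other; low confidence on both falls below the diagonal, into the red area.
    if conf_p + conf_c >= 1.0:
        return "comfort zone: known unknowns, tolerable and managed"
    return "red area: toward unknown unknowns, not to be operated in"

print(zone(conf_p=0.9, conf_c=0.4))  # uncertain outcomes offset by confident probabilities
print(zone(conf_p=0.2, conf_c=0.3))  # unstudied conditions
```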

This is a very important distinction for managing safely in other critical infrastructures. Are there such reliability professionals when it comes, say, to “AI safety” (more formally, when it comes to the adverbial properties of performing safely or not)?

Complex sociotechnical systems have by definition complex technical cores, about which real-time operators do not have full and complete causal knowledge. So too by extension opaque AI algorithms are a concern, but not a new concern. Unstudied and unstudiable conditions have always been an issue under mandates for the safe and continuous provision of a critical infrastructure’s service in the face of various and variable task environments. The key issue then is: What uncertainties with respect to probabilities and consequences are they managing for when it comes to “AI safety” so as to avoid operating (acting, performing) ignorantly? Or more formally, when does avoiding error in real time require more than regulating for the management of risks?


Sources

Danner, C., and P. Schulman (2019). “Rethinking risk assessment for public utility safety regulation.” Risk Analysis 39(5): 1044-1059.

E. Roe and P.R. Schulman (2008). High Reliability Management. Stanford University Press, Stanford CA.

————————- (2018). “A reliability & risk framework for the assessment and management of system risks in critical infrastructures with central control rooms.” Safety Science 110 (Part C): 80–88.

To my knowledge, philosophers Gilbert Ryle and Michael Oakeshott are among those who first discussed the importance of recasting “thinking” and “behavior” in terms of adverbs rather than as common nouns.


And-yet counternarratives

The paragraph I’ve just read is immediately bookended by two quotes:

Just before: “Therefore, rather than being schools of democracy, ACs [associative councils] may be spaces where associative and political elites interact and, therefore, just reproduce existing political inequities (Navarro, 2000). Furthermore, these institutions may have limited impact in growing and diversifying the body of citizens making contributions to public debate (Fraser, 1990).”

Just after: “The professionalised model results from a complex combination of inequalities in associationism and a specific type of participation labour. Analysing the qualitative interviews, regulations and documents was fundamental to understanding the underlying logic of selecting professionals as the main components.”

Now try to guess the gist of the paragraph in between. More of the same? Well, no. Six paragraphs from the article’s end emerges an “and-yet” that had been there from the beginning:

Nevertheless, an alternative interpretation of professionalisation should be considered. The fact that ACs perform so poorly in inclusiveness does not mean that they are not valuable for other purposes, such as voicing a plurality of interests in policymaking (Cohen, 2009). In this respect, participants can act as representatives of associations that, in many cases, promote the needs of oppressed and exploited groups (De Graaf et al., 2015; Wampler, 2007). Suffice it to say, for example, that labour unions or migrants’ associations frequently send lawyers or social workers to ACs to defend their needs and positions. Problems with inclusion should not take away from other purposes, that is, struggles to introduce critical issues and redistribution demands to the state agenda. Other studies have already shown that groups make strategic decisions to achieve better negotiation outcomes in the context of technical debates (Grillos, 2022). Thus, the choice of selecting professionals can be a strategy to improve the capacity of pressure in institutional spaces dominated by experts. (my bold; accessed online at https://journals.sagepub.com/doi/10.1177/00323217251319065)

Methodological upshot: What the counterfactual is to economic analysis, the and-yet counternarratives are to policy analysis. What would have happened in the former is contrasted to what else is happening during the latter.

A different long-run for policy and management: As intertext of archive and counter-archive

I

I’m first asking you to look and listen to one of my favorites, a short video clip of Anna Caterina Antonacci and Andreas Scholl singing the duet, “I embrace you,” from a Handel opera:

Antonacci’s visual performance resonates with the final scene in Sunset Boulevard, where Gloria Swanson, as actress Norma Desmond, walks down the staircase toward the camera. But intertextuality–that two-way semi-permeability between genres–also works the other way around. Antonacci brings the opera diva into Swanson’s actress, and to hell with anachronism and over-the-top.

II

Now bring semi-permeable intertextuality closer to public policy and management. Zakia Salime (2022) provides a rich case study of refusal and resistance by Moroccan villagers to nearby silver mining–in her case, parsed through the lens of what she calls a counter-archive:

Oral poetry (timnadin), short films, petitions, letters and photographs of detainees disrupted the official story of mining ‘as development’ in state officials’ accounts, with a collection of rebellious activities that exposed the devastation of chemical waste, the diversion of underground water, and the resulting dry collective landholdings. Audio-visual material and documents are still available on the movement’s Moroccan Facebook page, on YouTube and circulating on social media platforms. The [village] water protectors performed refusal and produced it as a living record that assembled bodies, poetic testimonials, objects and documents

https://onlinelibrary.wiley.com/doi/full/10.1111/dech.12726

What, though, when the status quo is itself a counter-archive? Think of all the negative tweets, billions of them. Think of all negative comments on politics, dollars and jerks in the Wall Street Journal or Washington Post. That is, think of these status quo repositories as a counter-archive of “status-quo critique and dissent.”

III

So what?

A genre notion of the status quo as counter-archive raises an interesting possibility: a new kind of long-run that is temporally long because it is presently intertextual, indefinitely forwards and back and across different genres.

For example, if the climate emergency is violence and the Big Polluters are culprits, then violent resistance against them is a form of violence reduction if the resistance succeeds. This means the “violence” and the “resistance” are difficult to evaluate, let alone predict, because the long-run over which they are to take place is itself a current but unavoidably changing intertext.

The long-run seeps across counter-archives as much as archives. As in: “the varieties of revolution do not know the secrets of the futures, but proceed as the varieties of capitalism do, exploiting every opening that presents itself”–to paraphrase political philosopher, Georges Sorel.

What they don’t tell you in Safety 101: “when operator error is not a mistake”

–There is an under-appreciated virtue in control room operators working within their shared comfort zone of team situation awareness, namely: their knowing when it is a mistake to comply with a regulation or protocol that would work against system reliability and safety.

Which goes to show that it’s a mistake to think all errors are mistakes.

–Noncompliance may be a regulatory error for the regulator of record; the same noncompliance may be an important option for system reliability when the task environment demonstrates the regulation to be defective. It’s not a control room mistake if system high reliability compels the real-time commission of a noncompliance error. Indeed, it is a regulatory function of critical infrastructures to correct for error by the regulator of record.

What needs to be distinguished are the volatility conditions and reliability mandates under which “operator error” is forced. True, you can’t un-ring the bell once rung, but it’s always been more complex than that.

Major read: How a composite argument remakes time and space from a policy palimpsest

A portrayal of this sort involves the identification of a sequence of concatenated ideas and propositions whose final outcome is necessarily hidden from the proponents of the individual links, at least in the early stages of the process; for they would have shuddered—and revised their thinking—had they realized where their ideas would ultimately lead.

A.O. Hirschman, The Passions and the Interests

I

“Policy palimpsest” is the social science notion that longstanding, controversial policies are themselves overwritten arguments and narratives accreted through time. A composite argument read off or from an issue’s policy palimpsest reads as if it were sequentially argued—sentences have nouns and verbs and sense is made. But the arguments assembled from the palimpsest have been blurred, intertwined and re-rendered for current (often controverted) purposes. None of the previous inscriptions shines clear through the palimpsest’s layers of effacements and erasures.

The methodological challenge is to read any composite argument with its blur visible in order to acknowledge and probe what has been rendered missing or opaque. Once you identify what is not in the composite argument but was in the palimpsest being read off (no guarantees here), you have identified possible means to recast the complex issue in new (renewed) ways.
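To make the reading strategy concrete, a toy sketch (the layers and fragments are invented for illustration, not drawn from any actual palimpsest):

```python
# Toy sketch of reading a composite argument "with its blur visible"
# (layers and fragments invented for illustration).

palimpsest = [
    "totalitarianism meant total control of politics and citizenry",  # earlier layer
    "totalitarianism sought, but rarely achieved, total control",     # overwriting layer
    "leaders often did not know what was happening half the time",    # effaced layer
]

def composite(selected: list) -> str:
    """Assemble a linear-looking argument from selected fragments."""
    return " ... ".join(palimpsest[i] for i in selected)

def blur(selected: list) -> list:
    """What the composite renders missing: the unselected layers."""
    return [t for i, t in enumerate(palimpsest) if i not in selected]

print(composite([1]))  # reads as if sequentially argued
print(blur([1]))       # the effacements a careful reader probes for
```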

How does this work and with what consequence?

II

First, consider the assembling strategy of the early 18th century French painter, Antoine Watteau. Art historian, Ewa Lajer-Burcharth, records how Watteau was an inveterate drawer of people—soldiers, women, children, their hands—in different poses and positions. A single sheet of album paper may have many such figures compiled over time by Watteau, where his

drawings constituted a vast visual repertory from which he was known to have pulled figures and motifs at random, often transferring them mechanically…onto the canvas. Figures drawn on different sheets at different moments in time and without the intention of ever being linked together would find themselves paired in paintings, often in such intimate interactions that it is difficult to imagine they had not been sketched in such configurations in the first place.

Let me repeat that last part: “…often in such intimate interactions that it is difficult to imagine they had not been sketched in such configurations in the first place.”

Such is why we need to rethink time and space in a composite argument. While the sentences in a composite argument look to be linear and specific, time and space are actually the result of adjacent sequencing of discrete texts from elsewhere in the policy palimpsest. Temporal interactions and interstices arise out of that specific sequencing.

As such, time and space emerge from the composite argument. A familiar example is tagging onto today’s major policy arguments variants of “…in a world threatened by catastrophic climate change.” Any such adjacency rejiggers everything immediately before and after it. (The tagged-on menace could be global species extinction, late capitalism, planetary pandemics, the apocalyptic Carrington Effect–you choose, but the effect is similar.)

III

Philosopher, Ludwig Wittgenstein, helps us see more clearly what is going on. He writes in The Big Transcript:

In a story it says: “After he said that he left her, as he had done the day before.” If I am asked whether I understand this sentence, there’s no easy answer. It’s an English sentence and in that respect I understand it. I would know, for instance, how one could use this sentence. I could come up with a context of my own for it. And yet I don’t understand it in the same way I would understand it if I had read the story up to that point. (Cf language-games.) [7e]

Replace “if I had read the story” with “if I had read the palimpsest,” and you get to the crux. The palimpsest can be read in two forms: effaced or filled in. In the first sense, the spaces in between the words, “After he said that he left her, as he had done the day before,” are just as important, if not more so, than the actual words. The spaces are left to signify all that has been left out. Not to see this is a failure of understanding what you are reading.

How so? Immediately after the above quote, Wittgenstein asks us to think of the sentence as if it were a painting:

What does it mean to understand a painted picture? Here too there is understanding and a failure to understand! And here too ‘understanding’ and ‘failure to understand’ can mean different things. –The picture represents an arrangement of objects in space, but I am incapable of seeing a part of the picture three-dimensionally; rather, in that part I see only patches of the picture. . .[M]aybe I know all of the objects, but – in another sense – don’t understand how they’re arranged. [7e]

So too we understand the words in a composite argument but fail to understand the three-dimensionality of the palimpsest from which the composite has been patched together and assembled. The challenge for better understanding is to read each composite argument as carrying the entire policy palimpsest with it.

IV

Now to an example. Consider what for years has been a commonplace: “Nazi and communist totalitarianism came to mean total control of politics, economics, society and citizenry.”

In reality, that statement has been full of effacements from having been overwritten again and again through seriatim debates, vide: “……totalitarianism        came to mean…….total control               of politics                  ,citizenry and economics………”

It’s that accented “total control” that drove the initial selection of the phrases around it. Today, after further blurring, it’s more fashionable to rewrite the composite argument as: “Nazi and communist totalitarianism sought total control of politics, economics, society and citizenry.” The “sought” recognizes that, when it comes to these forms of totalitarianism, seeking total control did not always mean total control achieved. “Sought” unaccents “total control.”

Fair enough, but note that “sought” itself reflects its own effacements in totalitarianism’s palimpsest, with consequences for how time and space are rejiggered. Consider two quotes from the many in that policy palimpsest, passed over all too often when it comes to that “sought”:

I always thought there must be some more interesting way of interpreting the Soviet Union than simply reversing the value signs in its propaganda. And the thing that first struck me – that should have struck anybody working in the archives of the Soviet bureaucracy – was that the Soviet leaders didn’t know what was happening half the time, were good at throwing hammers at problems but not at solving them, and spent an enormous amount of time fighting about things that often had little to do with ideology and much to do with institutional interests.
https://www.lrb.co.uk/v32/n23/sheila-fitzpatrick/a-spy-in-the-archives

The camp, then, was always in motion. This was true for people and goods, and also for the spaces they traversed. Because Auschwitz was one big construction site. It never looked the same, from one day to the next, as buildings were demolished, extended and newly built. . .

Inadvertently, [construction] also created spaces for prisoner agency. The more civilian contractors worked on site, the more opportunities for barter and bribes. All the clutter and commotion also made it harder to exercise full control, as blocked sightlines opened the way for illicit activities, from rest to escape. . .

Some scholars see camps like Auschwitz as sites of total SS domination. This was certainly what the perpetrators wanted them to be. But their monumental designs often bore little resemblance to built reality. Priorities changed, again and again, and SS planners were thwarted by supply shortages, bad weather and (most critically) by mass deaths among their slave labour force. In the end, grand visions regularly gave way to quick fixes, resulting in what the historian Paul Jaskot, writing about the architecture of the Holocaust, called the “lack of a rationally planned and controlled space”. Clearly, the popular image of Auschwitz as a straight-line, single-track totalitarian machine is inaccurate.

https://www.the-tls.co.uk/articles/being-in-auschwitz-nikolaus-wachsmann/

I am not arguing that the quoted reservations are correct or generalizable or even fully comprehensible (the quotes obviously come to us as already overwritten). I am saying that they fit uncomfortably with notions of “taking back control,” as if control had already been complete rather than interstitial and emergent with respect to time and space.

V

So what?

By way of answer, “catastrophic cascades” are almost always described as having virtually instantaneous transitions from the beginning of a cascade in one infrastructure to its awful conclusion across other infrastructures connected to it.

But in the terminology presented here, a catastrophizing cascade isn’t so much a composite description with a reduced-form middle as it is a highly etiolated palimpsest where infrastructure interactions taking more granular time and space have been blotted out or leached away altogether. Instantaneous is what’s left by way of assembling the composite.
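A toy contrast may help (the events and timings are invented): the same cascade as a granular event log and as its etiolated composite:

```python
# Toy contrast (events and timings invented): a cascade as a granular event
# log versus the etiolated, near-instantaneous composite description.

granular_log = [
    (0,   "substation trips"),
    (35,  "pumping station loses its feeder, switches to backup"),
    (140, "backup generator fails to start"),
    (290, "hospital loses water pressure"),
]

def etiolated(log: list) -> str:
    """Keep only the first and last events; blot out the middle."""
    (_, first), (_, last) = log[0], log[-1]
    return f"{first} -> {last} (rendered as near-instantaneous)"

print(etiolated(granular_log))
# The blotted-out middle rows, with their more granular time and space,
# are where real-time management could have intervened.
```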

VI

To summarize, each composite is a rearrangement of the palimpsest’s elements-with-effacements from different layers and positions into, literally, the straight lines called sentences and arguments. These linear, sequential expressions are, however, meshes of interrupted time and space tethered in multiple places to the entire policy palimpsest.

The analogy I have in mind is the way painter, Gerhard Richter, uses the squeegee in his photo-paintings and more recent chromatic work. He smears the photograph’s surface or layers of paint and produces something new, seen for its first reading.

As such, no palimpsest is inscribed with the last word; no composite from it is indisputable. Each composite is allographic in the sense of having no one authoritative rendering. If a “readymade” is a mass-produced object elected by an artist for display as a work of art, a policy palimpsest is a “readyunmade,” one that is also mass-produced but constantly scored over by all manner of analyst inscriptions.

“Climate atrocity”

. . .this paper proposes framing climate harm as a “climate atrocity” and argues that the mass atrocity framework sheds light on the social mechanisms underlying this harm. . . .

Recognizing climate harm mechanisms within a broader atrocity narrative could enable individuals to gain clarity about their role in perpetuating this violence. Such recognition may prompt a deeper self-awareness concerning behaviors that contribute to climate violence, providing opportunities for more responsible and informed action.

At the collective level, identifying the systemic mechanisms driving climate harm within the mass atrocity framework may offer avenues for counteracting these destructive tendencies. As a diagnostic tool, the atrocity paradigm illuminates structural barriers to meaningful climate action rather than prescribing specific policies. A clearer understanding of how social, political, cultural and economic factors enable climate violence can inform policies and collective movements aimed at addressing unsustainable greenhouse gas emissions.

(https://www.nature.com/articles/s41599-025-04883-z; my bolding)

Even if the above were true as far as it goes, how is this anywhere good-enough for policy and management purposes?