When frustrations move center stage in analyzing pastoralist relations [revised Major Read]

I

It’s common to think of frustration as distinctly personal and individual, at least until I get to the point of explaining why you and others are the cause of my frustrations. At that point, frustrations aren’t just interpersonal; the type of relationship we have with each other may well not exist without the velcro of frustration. As the psychoanalyst Adam Phillips and others underscore, you know you’re connected precisely because it’s frustrating (e.g., Phillips 2012).

Take a random walk through the pastoralist literature and you will find explicit references to: frustrated pastoralists and communities (Nori 2022, passim); frustrated politicians in pastoralist areas (Allouche et al 2025, p.16); security forces frustrated by pastoralists (Casola 2022, p. 25); pastoralists frustrated by security forces (Scott-Villiers et al 2025, p. 29); pastoralists frustrated with researchers (Bell et al 2025, p.10/17), and frustrated researchers in pastoralist areas (Semplici et al 2024, p. 13). There are also frustrated younger men in pastoralist Kenya and their frustrated elders (Muneri 2024, passim; Hazama 2023, p. 267), along with frustrated—well, you get the picture.

Pastoralists are frustrated, researchers are frustrated, NGO staff are frustrated, and so too government officials. To repeat, that is how they know they’re connected: They frustrate others and others frustrate them. They wouldn’t be in these relationships if they weren’t frustrated.

So what? Now, here is where things get interesting from a pastoralist development perspective!

II

To start with, frustration highlights what’s missing in notions of “resilience in the face of uncertainty.” Handling their joint frustrations is what pastoralists, NGO staff, researchers, and government officials must do between bouncing back and bouncing forward.

This is why it’s such a big issue to determine just whom pastoralists are in fact interacting with. Are they actually frustrated with this really-existing government official or that actually-existing NGO staff person? Or are the others more a nuisance for them, if that? Is the researcher actually frustrated with the pastoralists s/he is studying and, if so, in what ways is that frustration keeping their interactions going? Answers to such questions problematize currently popular binaries, like caring/not caring and private self/public role.

The follow-on analytic step then is to look at other major pastoralist binaries and see to what extent, if any, frustration relationships problematize them as well. The reader is already familiar with the debunking of any hard and fast line between nature/nurture and ecosystems/humans, so no need to repeat the familiar criticisms here. What does need highlighting, I believe, is the critics’ own use of another very popular binary, that of justice/injustice, as if there were in fact just systems which can or should correct for the equally well-known injustices pastoralists undergo and have undergone.

The twofold obstacle to any such conclusion is that (1) all manner of injustices are incurred without specific reference to principles or norms of justice and, anyway (2) those principles and norms prove contradictory, inconsistent or ambiguous when it comes to specific contexts (Douglass 2025). This is both an empirical and theoretical argument most recently associated with the political philosopher, Judith Shklar:

What sort of problem is injustice? One way of thinking about it is as an ethical problem. If not the first virtue of social institutions, justice is one of the most important moral values that should guide our reflections on politics. Injustice negates (or is a departure from) justice and is therefore a problem. Understood this way, there is a strong case for maintaining that we require principles of justice to evaluate cases of injustice: we can only identify the nature and scale of injustices with reference to some prior idea of justice. As should now be evident, this is not Shklar’s approach to theorizing injustice. She instead starts from our experiences of injustice and explores the political problems to which they give rise. The sense of injustice that we all experience should be understood in reference to the plural, competing, and ever-changing expectations that exist within any society, which cannot be formalized into determinate principles of justice. As this sense of injustice is a deep and inescapable feature of all social life, there is a political imperative to find ways of living together that can mitigate it as effectively as possible without (at the extreme) descending into cycles of violent revenge. To understand the problem of injustice in this way is to treat it as a political problem, first and foremost, rather than as an ethical one.

Such a sense of injustice repeatedly appears in the pastoralist literature (e.g., Krätli and Toulmin 2020, p. 68). That there is injustice across many pastoralist areas and that the frustrating challenge is a political one in preventing or coping with ensuing cycles of violence are neatly captured by many pastoralist observers, including Nori (2022; see also Benjaminsen and Ba 2021):

Political leaders, mafia-like organisations, and insurgent groups have successfully manipulated ethnic identities, political asymmetries, and local grievances to mobilise support for their activities. These provide weapons, salaries and opportunities to seize power at the local level, and with these the promise of redressing the many injustices faced by pastoralists and thereby transforming the local political economy. Cases include Islamic State and al-Qaeda in the Sahelo-Saharan fringes, Boko Haram in the Lake Chad region, Al-Shabaab in the Somali ecosystem, and other organisations operating across SSA drylands and beyond, where local communities are drawn into a ‘war economy’ dominated by politicians, smugglers, commanders, and fighters whose interests lie in generating new forms of power, protection and profit. . .

In other words, it should not be surprising when existing local justice systems are commended for providing some everyday order and stability (e.g., Scott-Villiers 2025, p.35).

Why? Because even weak systems demonstrate the frustrating (really, frustrating) importance of giving injustice and grievances their due, whatever the global justice systems appealed to (see also Douglass 2025; on local and global justice systems and their tensions, start with Elster 1992).


Sources

Allouche, J., C.Y. Yao, K.S. Amédée 2025. “Rethinking ‘Farmer-Herder’ Conflicts in Ivorian Internal Frontier.” African Affairs 123/493: 449–467 (accessed online at https://academic.oup.com/afraf/article/123/493/449/7951617)

Bell, A. R., O. S. Rakotonarivo, W. Zhang, C. De Petris, A. Kipchumba, R. S. Meinzen-Dick. 2025. “Understanding pastoralist adaptations to drought via games and choice experiments: field testing among Borana communities.” Ecology and Society 30(1) (accessed online at https://doi.org/10.5751/ES-15836-300125)

Benjaminsen, T.A., B. Ba 2021. “Fulani-Dogon Killings in Mali: Farmer-herder conflicts as insurgency and counterinsurgency.” African Security (accessed online at https://www.tandfonline.com/doi/full/10.1080/19392)

Douglass, R. 2025. “Who Needs a Theory of Justice? Judith Shklar and the Politics of Injustice.” American Political Science Review: 1–12 (accessed online at http://cambridge.org/core/journals/american-political-science-review/article/who-needs-a-theory-of-justice-judith-shklar-and-the-politics-of-injustice/5B25A4AF90526DAE217F93E87765E074)

Elster, J. 1992. Local Justice: How Institutions Allocate Scarce Goods and Necessary Burdens, Russell Sage Foundation: New York NY

Hazama, I. 2023. “Man-Animal Social Relationship as Source of Resilience,” Chapter 9 in Reconsidering Resilience in African Pastoralism: Toward a Relational and Contextual Approach, Eds. S. Konaka, G. Semplici and P. Little, Kyoto University Press and Trans Pacific Press, Japan

Krätli S., C. Toulmin 2020. Farmer–Herder Conflict in Sub-saharan Africa? IIED Briefing. International Institute for Environment and Development, London (accessed online at http://pubs.iied.org/17753IIED)

Muneri, E.W. 2024. Intersectional Subjectivities, Embodied Experiences, and Everyday Responses among the Maasai Pastoralists Amidst Environmental Changes: Insights from the Mara in Kenya, PhD dissertation, Institute of Development Studies at the University of Sussex (accessed online at https://sussex.figshare.com › ndownloader › files)

Nori, M. 2022. Assessing the Policy Frame in Pastoral Areas of Sub-Saharan Africa (SSA). Research Paper No. RSC 2022/03, Robert Schuman Centre for Advanced Studies, Italy (accessed online at SSRN: https://ssrn.com/abstract=4071572 or http://dx.doi.org/10.2139/ssrn.4071572)

Phillips, A. 2012. Missing Out: In Praise of the Unlived Life. Farrar, Straus and Giroux, New York NY.

Scott-Villiers, P., A. Scott-Villiers, and the team from Action for Social and Economic Progress, Somalia 2025. Navigating Violence and Negotiating Order in the Somalia–Kenya Borderlands. IDS Working Paper 618, Brighton: Institute of Development Studies (accessed online at https://opendocs.ids.ac.uk/articles/report/Navigating_Violence_and_Negotiating_Order_in_the_Somalia_Kenya_Borderlands/28715012?file=53375021)

Semplici, G., L.J. Haider, R. Unks, T.S. Mohamed, G. Simula, P. Tsering (Huadancairang), N. Maru, L. Pappagallo, M. Taye 2024. “Relational resiliences: reflections from pastoralism across the world.” Ecosystems and People 20(1) (accessed online at https://doi.org/10.1080/26395916.2024.2396928)

West, S., L.J. Haider, S. Stålhammar & S. Woroniecki 2020. “A relational turn for sustainability science? Relational thinking, leverage points and transformations.” Ecosystems and People, 16:1, 304-325 (accessed online at https://www.tandfonline.com/doi/full/10.1080/26395916.2020.1814417)

Marina Warner on “Un train peut en cacher un autre [One train can hide another]” and its implications for hindsight as improvisation

The apparent meaning [of an event] conceals another [event], as in that famous French warning on level crossings, “Un train peut en cacher un autre” [One train can hide another]. Isaac, about to be sacrificed by his father, comes to stand for Jesus, who died to save humankind. . . .The figure forms in the present moment of the text under one’s eyes, but the latent meanings emerge both in the past and look forward to the future: the narrowly averted sacrifice of Isaac prefigures the death of Jesus, for example. “How much more fulfilling is the new idea that pre-Christian times can be read as a shadowy figure of what actually was to come?” writes [the literary critic, Edward] Said. . .

Marina Warner, historian and critic (accessed online at https://eprints.bbk.ac.uk/id/eprint/53273/1/53273.pdf)

At the actual train crossing, the implications of Warner’s analogy complicate the point she is making. What is invisible but then emerges, the other rushing train, is disaster. Whatever else taking Isaac to prefigure Jesus is, it reveals a real and present danger.

To see how and with what effect, turn to the commonplace: If manifest risk is where the probability of disaster (Pf) and the consequences of failure (Cf) are known or estimated, “latent risk” is where uncertainty over Pf or over Cf exists. Strictly speaking, then, the challenge of trying to manage latency ahead of time is one of uncertainty management, not risk management as formally understood.
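The commonplace above can be put in a minimal sketch. This is a hypothetical illustration, not anyone’s formal model: the function name `classify` and the use of `None` to mark an unestimable quantity are my assumptions.

```python
# Hypothetical sketch: sorting a hazard into manifest risk or latent risk.
# Pf = probability of disaster, Cf = consequences of failure.
# None marks a quantity that cannot be known or estimated.

def classify(pf, cf):
    """Manifest risk when both Pf and Cf are known or estimated;
    latent risk when uncertainty exists over either."""
    if pf is None or cf is None:
        return "latent risk (uncertainty management)"
    return "manifest risk (risk management)"

print(classify(0.01, 5_000_000))  # both estimated: manifest risk
print(classify(None, 5_000_000))  # Pf unknown: latent risk
```

The point of the sketch is only that the decision hinges on what is estimable, not on the magnitudes themselves.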

So what? For one thing when it comes to a major disaster, pre-existing latencies are joined at the hip with subsequent improvisations. Latent uncertainties unmanaged beforehand, particularly those that are invisible, necessitate improvisations in immediate emergency response afterwards.

You jump the gun by approaching the train tracks too early as the carriages disappear rapidly in front of you; how you survive, if at all, depends on pulling whatever good mess you can from the bad mess in which you find yourself. Surviving means improvising without any guarantees in riding out that uncertainty, whatever the lessons to be learned afterwards for “better risk management” at rail crossings.

Again, so what? “How much more fulfilling is the new idea that those old dangers and iffy responses can be read as a shadowy figure of the coming lessons learned for risk management.” Or to bring the point back and thereby complicate Warner’s point: Yes, it is easy in hindsight to see how Isaac could be said to prefigure Jesus, though the connection being made between Isaac and Jesus is more improvisational, given the high stakes involved, than one might first want to admit.

Not false (apophenic) or even contingent as much as readymade for the immediate moment at hand.

Think long term from the small scale and in real time for the large scale

–I attended a presentation on an ecosystem restoration project in Montana. One project leader described what was for him the key contribution: The approach gave him a way to integrate the small and large scale.

Not only could the project leader stand in his woodlot and see how it fit in with the larger scale of the ecosystem and landscape, he was able to plan at the smaller scale for the longer term. He was able to aspire to a future for his forested lot.

–So too for the ecosystem manager standing at the ridge overlooking the valley. She looked down and was able to plan at the larger scale for the shorter term. She could now see what the next steps ahead were when it came to managing the larger ecosystem.

“Think globally, act locally” had become “Think long term from the small scale and in real time for the large scale.”

Escaping from Hell Is a Right!

(You need some 30 minutes for this entry.)

Set to music by Frederic Rzewski (JEV-skee), the first part of his Coming Together is based on text from a letter of Sam Melville, anti-war protester and convicted bomber, who was incarcerated at Attica. He was shot and killed in the 1971 Attica prison uprising.


Part Two, ‘Attica,’ uses the reply of another uprising leader, Richard X. Clark, just after being released. Asked how it felt leaving Attica behind, he said: “Attica is in front of me.” For me, Rzewski draws hope to and from those words.

Principal source

Grégory Salle (2018). “‘Escaping from Hell Is a Right!’: The Case of France’s ‘Q.H.S.’ (1975–1982).” Chapter 7 in Prison Breaks: Toward a Sociology of Escape (eds. T.M. Martin and G. Chantraine), Springer eBook (https://link.springer.com/book/10.1007/978-3-319-64358-8).

Rethinking crisis scenarios and response

Thinking differently about the unimaginable

First, there is the predictably unimaginable that comes with the new. Think here of “violent crime” as a legal category in the US that didn’t exist prior to the 1970s. “Speaking of ‘political prisoners’ had become such a major political criticism that it was no longer possible to imagine it as a legal category,” concludes one commentator. That new categories and conventions lie ahead that we don’t now imagine is quite predictable.

Second, that there are analogies to redescribe current policy problems is also predictable. The Green New Deal has most often been likened to Roosevelt’s New Deal. It’s also been likened to the Civil Rights Movement, 19th century abolitionism, and the war economy of the Bolshevik Revolution. There should be no doubt that the climate emergency has been or will be compared to many other events you and I won’t imagine until that comparison is made.

Third, earthquakes with unimaginable impacts are predicted all the time. That in fact is the genre convention. It’s no different than predicting that experience after my death will be the same as experience before my being conceived.

Thinking differently about implementation scenarios

The authors of a fine report concluded that significant gaps exist between what was proposed in the EU AI Act (concerning artificial intelligence) and the then existing EU digital legislation (formally “the digital acquis”):

We identify eight key areas where challenges may emerge, and make the following policy recommendations: . . .

(accessed online at https://www.ceps.eu/ceps-publications/the-ai-act-and-emerging-eu-digital-acquis/)

In my view, the first question to ask is not, “Who’s going to adopt the recommendations and, if so, with what modifications?” but rather: “Who would implement the finalized recommendations, and what are implementers’ scenarios for failing to do so?” This acknowledges the longstanding role of implementation as de facto policymaking.

Thinking differently about pre-disaster mitigations

Retrofitting a bridge pre-disaster isn’t a chancy wager on what might or might not happen to the bridge. Retrofitting is managing latent interconnectivities between bridges and other infrastructures that become manifest during and immediately after the disaster. That inter-infrastructural connections will shift and these shifts will involve bridges is far more predictable than this or that bridge will fail, unless retrofitted.

This means attention to the track record in retrofitting bridges before and after disasters, here and elsewhere, is crucial. Note the implication: Retrofitting has to occur in order to have a track record to monitor and learn from.

Since there are real material and cognitive limits on controlling inter-infrastructural connectivity at any point in time, doing more by way of managing the pre-disaster latency of interconnectivities is elemental. Bringing water systems up to power’s better practices is a way of managing latent interconnectivity in advance of disaster, one interviewee told us.

Thinking differently about risk management for crises

I

What to make of the debacle of Sam Bankman-Fried’s cryptocurrency firm?

“I wasn’t even trying, like, I wasn’t spending any time or effort trying to manage risk on FTX,” Mr. Bankman-Fried said. Echoed a co-head of digital asset trading in Citigroup about FTX, “The thing that I picked up on immediately that was causing us heartburn was the complete lack of a risk-management framework that they could articulate in any meaningful way.”

Before, it was the wrong framework for managing risks; now the problem is having no framework at all. But how could FTX not have had risk managers, albeit informal ones of sorts?

II

In answer, let’s recast the issue. Risk and risk managers were around long before risk management frameworks and registries had been formalized. Think of Christians being around from the time of Jesus to the formalizing of the Scriptures in the 4th century AD at the Council of Nicaea. How did Christians operate in the 300 years between? Can we think of Bankman-Fried and his FTX colleagues (and other cult-entrepreneurs) in the same way as these early Christians?

No wonder the guardians of current frameworks might want to convince us the FTX debacle has nothing to offer by way of lessons learned.

Thinking differently about crisis leadership

The literature on crisis leadership is largely top down (leaders direct) or bottom up (self-organizing crisis response), where networks are said to be vertical (hierarchical) or horizontal (laterally interacting).

We add a third category: control rooms, and not just in terms of Incident Command Centers during the emergency but already-existing infrastructure control rooms whose staff continue to operate during the emergency.

Paul Schulman and I argue control rooms are a unique organizational formation meriting societal protection, even during (especially during) continued turbulence. They have evolved to take hard systemwide decisions under difficult conditions that require a decision, now. Adding this third category insists on real-time large-system management as the prevention of major failures, and thus crises, that would have happened had control room managers, operators and support staff not prevented them.

Thinking differently about predictions

As I remember the to-ing and fro-ing over the introduction of Bt cotton in India, saving on insecticides was the putative plus and runaway GM crops the putative negative. I know nothing about the subsequent record but suspect that actual findings must have been differentiated, as are any such findings, by region and other demographics.

All this came back to me when I read the following passage describing a conference paper on Bt cotton:

Ambarish Karamchedu presented on Dried up Bt cotton narratives: climate, debt and distressed livelihoods in semi-arid smallholder India. Proponents of this ‘technical fix’ position GMO crops as a triple win. India has semi-arid and arid areas where rural poverty is concentrated, with an intense monsoon season (3-4 months), making farming a challenge. BT cotton introduced around 1995, thrives here. India is the biggest cotton cultivator and Bt cotton is grown by 7 million smallholder farmers, 66 percent in semi-arid areas with poor soils and low rainfall prone to monsoon. In Telangana, 65% of farmers across all classes produce BT cotton, with good harvests for 5 years, after which they decline. Failure of farmers who face increased input prices have to resort to non-farm incomes. The triple win technological fix narrative perpetuates and exacerbates the problems it seeks to solve, and benefits farmer institutions rather than enriching farmer knowledge and practice.

https://drive.google.com/drive/folders/1VfvjJlxB9VPKQj55dNbZ_VH6oPi2IEVd

It’s that “with good harvests for 5 years, after which they decline” that grabbed my attention. Did anyone predict that for Bt cotton?

This matters, because in the absence of any such prediction, why not also conclude: “Well, five years is five years more than the critics expected?”

Thinking differently about luck in infrastructure crises

Ensuring systemwide service reliability has always involved a measure of luck in major critical infrastructures. Control room operators will tell you this. At its most abstract, good luck can be defined as the non-occurrence of system failure in the absence of exercising failure avoidance options, while bad luck is the occurrence of failure in the presence of exercising those options.
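The abstract definition reduces to a small truth table, sketched below. The function name `luck` and the residual label "not luck" are my assumptions for illustration; the two defined cells follow the text.

```python
# Hypothetical coding of the definition above.
# failure: did the system fail?
# exercised: were failure-avoidance options exercised?

def luck(failure: bool, exercised: bool) -> str:
    if not failure and not exercised:
        return "good luck"   # no failure despite no avoidance action
    if failure and exercised:
        return "bad luck"    # failure despite avoidance action
    return "not luck"        # outcome follows from the actions (not) taken

print(luck(failure=False, exercised=False))  # good luck
print(luck(failure=True, exercised=True))    # bad luck
```

On this stylization, luck is precisely the mismatch between actions taken and outcomes realized; the other two cells are attributable to management, good or bad.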

But luck also favors the well-prepared, and well-prepared operators make a difference. Consider how a senior operations engineer for a transmission grid described a close call to us:

. . . We nearly caused a voltage collapse all over the western grid. Everything was going up and down, we were trying to get power from all the nuclear units in the western grid. Life flashed before our eyes. And then the gen dispatcher did intuitively the right thing. He said, Shut one pump down. How he figured that, I still don’t understand. It was something we had never seen before. We had no procedures. . .We went back and looked at it, and the planner said, Oh yeah, you should never have been running three pumps, and we said, Where did you get that from? So we started writing new procedures.

When talent meets opportunity, the value added by professionals is stopping close calls and near misses from tripping into system failures. That there can be no guarantees makes it luck.

When regulation renders the noun, safety, into the adverb, unsafely


1. From a high reliability management perspective, regulation for safety in large socio-technical systems is dispersed. The regulation of critical infrastructures for system safety is not just what the regulators do; it is also what the infrastructures do in ways that their regulator of record can’t do on its own. Those who have the real-time information must fulfill regulatory functions with respect to system safety that the official regulator is not able to fulfill.

2. The dispersed functions of regulation for system safety put a premium on understanding the real-time practices of control room operators and field staff in these large systems. Safety, if it is anything, is found in practices-as-undertaken, i.e., “it’s operating safely.” This means safety is better understood as an adverb than as a noun. You can no more make safety than you can make fish from fish soup.

3. It makes little sense then for critics to conclude that regulators are failing because formal regulations are not being complied with, if the infrastructures are managing in a highly reliable fashion and would not be doing so if they followed those regulations to the letter. In practical terms, this means there is not just the risk of regulatory non-compliance by the infrastructure, there is also the infrastructure’s risk of compliance with incomplete regulations.

4. Another way to put such examples is that, when it comes to managing safely, there is a major difference between error avoidance and risk management. Not taking advantage of opportunities to improvise and communicate laterally is a known error to avoid in immediate emergency response. Unlike risks to be managed more or less, these errors are to be avoided categorically, yes or no. What is most important about error avoidance is missing those real opportunities that shouldn’t or can’t be missed where the logic, clarity and urgency of “this is or is not responding safely” are evident.

5. If points 1–4 hold, the challenge then is to better understand the institutional niche of critical infrastructures, that is, how infrastructures themselves function in allocating, distributing, regulating and stabilizing system safety (and reliability) apart from the respective government regulators of record.

6. With that in mind, turn now to the relationship between system risk and system safety, specifically: regulating risk in order to ensure system safety. For some, the relationship is explicit, e.g., increasing safety barriers reduces risk of component or system failure.

In contrast, I come from a field, policy analysis and management, that assumes safety and risk are to be treated differently, unless otherwise shown in the case at hand. Indeed, one of the founders of my profession (Aaron Wildavsky) made a special point to distinguish the two. The reasons are many for not assuming that “reduce risks and you increase safety” or “increase safety and you reduce risks.” In particular:

However it is estimated, risk is generally about a specified harm and its likelihood of occurrence. But safety is increasingly recognized, as it was by an international group of aviation regulators, to be about “more than the absence of risk; it requires specific systemic enablers of safety to be maintained at all times to cope with the known risks, [and] to be well prepared to cope with those risks that are not yet known.”. . .In this sense, risk analysis and risk mitigation do not actually define safety, and even the best and most modern efforts at risk assessment and risk management cannot deliver safety on their own. Psychologically and politically, risk and safety are also different concepts, and this distinction is important to regulatory agencies and the publics they serve. . . .Risk is about loss while safety is about assurance. These are two different states of mind.”

Danner and Schulman, 2019

7. So what?

That informed people continue to stay in earthquake zones and sail in stormy seas even if they can move away from both tells you something about their preferences for system safety, let alone personal safety. For it is often safety with respect to the known unknowns of where they live and work versus safety with respect to unknown-unknowns of “getting away.” Unknowns, not risks.


Let’s shift gears to a different example and extension.

On March 26, 2024, about 0129 eastern daylight time, the 984-foot-long Singapore-flagged cargo vessel (containership) Dali was transiting out of Baltimore Harbor in Baltimore, Maryland, when it experienced a loss of electrical power and propulsion and struck Pier 17, the southern pier that supported the central span of the continuous through-truss of the Francis Scott Key Bridge. A portion of the bridge subsequently collapsed into the river, and portions of the pier, deck, and truss spans collapsed onto the vessel’s forward deck (see figure 1). . .

. . . The Key Bridge’s pier protection was struck in 1980 when the 390-foot-long Japan-flagged containership Blue Nagoya, which had a displacement or weight about one-tenth that of the Dali, collided with Pier 17 following a loss of steering about 600 yards from the bridge; see figure 4 for a size comparison of the Blue Nagoya to the Dali. . .

From the Marine Investigation Report for this accident (accessed online at https://www.ntsb.gov/investigations/AccidentReports/Reports/MIR2510.pdf)

Even I was taken aback by the two figures, and I study this stuff! Just look at the differences in containership sizes and you’d think even more disasters must lie in waiting wherever such infrastructures have not grown in size and scope relative to the demands placed on them.

Now, of course, there are those who would blame my perceptions on all those distorting cognitive biases (anchoring, salience, selection), as if they were trained incapacities on my part. But people, we’ve learned to worry about problems where the physical capacity of infrastructures does not grow with the demands placed on them!

Even though true, that point doesn’t go far enough. The more important point is the empirical insight from the high reliability literature: A complex sociotechnical system is reliable only until its next failure. That is, we need to know more about how the current system is managed in real time, beyond its technology and design, in order to avoid failures. This is how managing safely means more than regulating for safety.

Or in the case of the containership, we need to know, inter alia, how experienced harbor pilots bringing such vessels into port manage them under current conditions (see a pilot’s perspective on the accident at https://theconversation.com/ive-captained-ships-into-tight-ports-like-baltimore-and-this-is-how-captains-like-me-work-with-harbor-pilots-to-avoid-deadly-collisions-226700). I mention harbor pilots because their definitions of a “near miss” (which they’ve experienced) and my definition of a near miss (just look in figure 4 at how close the vessel’s antennae come!) vary significantly.

This difference may well be more than “What are to me very, very narrow safety buffers are to them manageable ones.” I haven’t studied harbor pilots, but the infrastructure operators we studied distinguish between uncertainties to be managed and unstudied conditions in which not to operate. Operators we’ve talked with call this their “comfort zone,” though as one control room supervisor hastened to add, “I’m always uncomfortable.” How so is illustrated in the following:

High reliability management professionals we study practice vigilance to stay out of the red area below the diagonal and stay within the area above it—a stylized version of their comfort zone. To maintain this level of comfort they tolerate some uncertainty about outcomes (Cf) matched by having high confidence in some probabilities (Pf). They also tolerate some uncertainty about probabilities by having higher confidence that consequences are limited. Management within these uncertainties is in either case supported by team situation awareness in the control room. In other words, the professionals seek to avoid unknown unknowns by extending but limiting their management to known unknowns—uncertainties with respect to outcomes and probabilities they can tolerate as part of their comfort zone as managers of operational risks and safety (Roe and Schulman 2018).
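The diagonal logic described above can be stylized as follows. This is my own reconstruction for illustration, not Roe and Schulman’s table or notation: the function name `zone`, the 0-to-1 uncertainty scales, and the sum-based diagonal are all assumptions.

```python
# Stylized sketch of the comfort-zone logic: uncertainty tolerated on one
# axis (Pf or Cf) only insofar as confidence on the other axis compensates.
# Uncertainties are scaled 0 (high confidence) to 1 (unknown unknown).

def zone(pf_uncertainty: float, cf_uncertainty: float) -> str:
    """Classify a situation relative to a stylized comfort-zone diagonal."""
    if pf_uncertainty + cf_uncertainty <= 1.0:  # the tolerable side
        return "comfort zone: known unknowns, manage them"
    return "red zone: unstudied conditions, do not operate"

print(zone(0.2, 0.6))  # high confidence in Pf offsets uncertainty in Cf
print(zone(0.8, 0.7))  # too uncertain on both axes at once
```

The design point is the trade-off itself: neither axis has to be fully known, but letting both drift toward the unknown pushes operations out of the comfort zone.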

This is a very important distinction for managing safely in other critical infrastructures. Are there such reliability professionals when it comes, say, to “AI safety” (more formally, when it comes to the adverbial properties of performing safely or not)?

Complex sociotechnical systems have by definition complex technical cores, about which real-time operators do not have full and complete causal knowledge. By extension, opaque AI algorithms are a concern, but not a new one. Unstudied and unstudiable conditions have always been an issue under mandates for the safe and continuous provision of a critical infrastructure’s service in the face of various and variable task environments. The key issue then is: What uncertainties with respect to probabilities and consequences are they managing for when it comes to “AI safety” so as to avoid operating (acting, performing) ignorantly? Or, more formally, when does avoiding error in real time require more than regulating for the management of risks?


Sources

Danner, C., and P. Schulman (2019). “Rethinking risk assessment for public utility safety regulation.” Risk Analysis 39(5): 1044-1059.

Roe, E., and P.R. Schulman (2008). High Reliability Management. Stanford, CA: Stanford University Press.

Roe, E., and P.R. Schulman (2018). “A reliability & risk framework for the assessment and management of system risks in critical infrastructures with central control rooms.” Safety Science 110 (Part C): 80-88.

To my knowledge, the philosophers Gilbert Ryle and Michael Oakeshott are among those who first discussed the importance of recasting “thinking” and “behavior” in terms of adverbs rather than as common nouns.

Some safety extensions from the high reliability literature

On March 26, 2024, about 0129 eastern daylight time, the 984-foot-long Singapore-flagged cargo vessel (containership) Dali was transiting out of Baltimore Harbor in Baltimore, Maryland, when it experienced a loss of electrical power and propulsion and struck Pier 17, the southern pier that supported the central span of the continuous through-truss of the Francis Scott Key Bridge. A portion of the bridge subsequently collapsed into the river, and portions of the pier, deck, and truss spans collapsed onto the vessel’s forward deck (see figure 1). . .

. . . The Key Bridge’s pier protection was struck in 1980 when the 390-foot-long Japan-flagged containership Blue Nagoya, which had a displacement or weight about one-tenth that of the Dali, collided with Pier 17 following a loss of steering about 600 yards from the bridge; see figure 4 for a size comparison of the Blue Nagoya to the Dali. . .

From the Marine Investigation Report for this accident (accessed online at https://www.ntsb.gov/investigations/AccidentReports/Reports/MIR2510.pdf)

Even I was taken aback by the two figures, and I study this stuff! Just look at the differences in containership sizes and you’d think even more disasters must lie in wait wherever such infrastructures have not grown in size and scope relative to the demands placed on them.

Now, of course, there are those who would blame my perceptions on all those distorting cognitive biases–anchoring, salience, selection–as if they were trained incapacities on my part. But guys, we’ve learned to worry about problems where the physical capacity of infrastructures does not grow with the physical demands placed on them!

True as that is, the point doesn’t go far enough. The more important point is the empirical insight from the high reliability literature: A complex sociotechnical system is reliable only until its next failure. That is, we need to know more about how the current system is managed in real time, beyond its technology and design, in order to avoid failures.


And-yet counternarratives

The paragraph I’ve just read is immediately bookended by two quotes:

Just before: “Therefore, rather than being schools of democracy, ACs [associative councils] may be spaces where associative and political elites interact and, therefore, just reproduce existing political inequities (Navarro, 2000). Furthermore, these institutions may have limited impact in growing and diversifying the body of citizens making contributions to public debate (Fraser, 1990).”

Just after: “The professionalised model results from a complex combination of inequalities in associationism and a specific type of participation labour. Analysing the qualitative interviews, regulations and documents was fundamental to understanding the underlying logic of selecting professionals as the main components.”

Now try to guess the gist of the paragraph in between. More of the same? Well, no. Six paragraphs from the article’s end, an “and-yet” emerges that had been there from the beginning:

Nevertheless, an alternative interpretation of professionalisation should be considered. The fact that ACs perform so poorly in inclusiveness does not mean that they are not valuable for other purposes, such as voicing a plurality of interests in policymaking (Cohen, 2009). In this respect, participants can act as representatives of associations that, in many cases, promote the needs of oppressed and exploited groups (De Graaf et al., 2015; Wampler, 2007). Suffice it to say, for example, that labour unions or migrants’ associations frequently send lawyers or social workers to ACs to defend their needs and positions. Problems with inclusion should not take away from other purposes, that is, struggles to introduce critical issues and redistribution demands to the state agenda. Other studies have already shown that groups make strategic decisions to achieve better negotiation outcomes in the context of technical debates (Grillos, 2022). Thus, the choice of selecting professionals can be a strategy to improve the capacity of pressure in institutional spaces dominated by experts. (my bold; accessed online at https://journals.sagepub.com/doi/10.1177/00323217251319065)

Methodological upshot: What the counterfactual is to economic analysis, the and-yet counternarrative is to policy analysis. What would have happened in the former is contrasted with what else is happening in the latter.