Not to jointly improvise is an error when shared improvisation is the real-time interconnectivity between failed systems

–Given the center stage that improvisational behavior occupies in establishing and maintaining interconnectivity between infrastructures during shocks, it is unreasonable to assume that infrastructure operators and emergency managers can un-learn and un-experience cases where even the best plans did not mitigate the disaster as it unfolded.

–“I can’t say enough good things about planning and how important it is,” a state emergency manager in the Pacific Northwest told us.

“But you realize the gaps in plans when you’re dealing with such catastrophic events that we’ve dealt with in the past 18 months to two years…There’s a lot that needs to be decided on the fly because it hasn’t been planned for or it’s not going to work, the plan didn’t consider all the factors because every emergency is different”.

–The preceding seems obvious, but the implications aren’t as readily recognized. For one thing, it implies that there isn’t a “life cycle” of a critical infrastructure, if by that is meant that one stage follows another until the single now-mature infrastructure is superseded by something better. Infrastructures are like other complex organizations in that they shift in response to changes in their wider task environments, including sudden changes. That again implies the centrality of improvisation.

–“What does success look like?” a senior state emergency manager asked rhetorically, and answered: “Success in every disaster is that you didn’t have to get improvisational immediately. You can rely on prior relationships and set up a framework for improvisation and creativity.”

Success, in other words, is when pre-existing interconnectivity between critical infrastructures does not altogether disappear, however reconfigured. Otherwise, the interconnectivity would have to be improvised by micro-coordinating to match just-now demands with just-now capabilities. It is a shared error not to recognize the latter for emergency preparedness and management.

Could-ism and may-ism

Could-isms

Our expert-interview exercise with leading thinkers on the topic revealed how climate technologies can potentially propagate very different types of conflict at different scales and among diverse political actors. Conflict and war could be pursued intentionally (direct targeted deployment, especially weather-modification efforts targeting key resources such as fishing, agriculture, or forests) or result accidently (unintended collateral damage during existing conflicts or even owing to miscalculation). Conflict could be over material resources (mines or technology supply chains) or even immaterial resources (patents, software, control systems prone to hacking). The protagonists of conflict could be unilateral (a state, a populist leader, a billionaire) or multilateral in nature (via cartels and clubs, a new “Green OPEC”). Research and deployment could exacerbate ongoing instability and conflict, or cause and contribute to entirely new conflicts. Militarization could be over perceptions of unauthorized or destabilizing deployment (India worrying that China has utilized it to affect the monsoon cycle), or to enforce deployment or deter noncompliance (militaries sent in to protect carbon reservoirs or large-scale afforestation or ecosystem projects). Conflict potential could involve a catastrophic, one-off event such as a great power war or nuclear war, or instead a more chronic and recurring series of events, such as heightening tensions in the global political system to the point of miscalculation, counter-geoengineering, permissive tolerance and brinksmanship. . . .

States and actors will need to proceed even more cautiously in the future if they are to avoid making these predictions into reality, and more effective governance architectures may be warranted to constrain rather than enable deployment, particularly in cases that might lead to spiralling, retaliatory developments toward greater conflict. After all, to address the wicked problem of climate change while creating more pernicious political problems that damage our collective security is a future we must avoid.

https://www.sciencedirect.com/science/article/pii/S2211467X22002255 (my bolds)

Let’s be clear: No number of such “coulds-as-possibilities” adds up to one single “must-as-necessity.”

May-isms

Intuitively, stronger interactions between systems may be expected to increase the numbers of drivers of any one system, change driver behaviour and generate more system noise. As a result, we would anticipate that higher levels of stress, more drivers and noise may bring forward threshold-dependent changes more quickly. For any particular system (for example, the Amazon forest) it is possible to envisage a time sequence that starts with one main driver (for example, deforestation), then multiple drivers (for example, deforestation plus global warming), more noise through extreme events (for example, more droughts and wildfires), with additional feedback mechanisms that enhance the drivers (for example, diminished internal water cycle and more severe droughts). A vortex could therefore emerge, with drivers generating noisier systems as climate variability and the incidence of extreme events increases.

https://www.nature.com/articles/s41893-023-01157-x (my bolds)

Yes, that may, might, could happen. Or not.
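To see why, it helps to make the quoted mechanism concrete. Below is a minimal toy sketch (Python; every parameter value is hypothetical and chosen only for illustration, not drawn from the article) of a single driver pushing a system toward a threshold with added noise. Whether the threshold-dependent change is “brought forward,” delayed, or never arrives within the horizon depends entirely on the assumed driver strength and noise level. That is the point: a mechanism stated in “may” terms does not, by itself, tell us which of these futures to expect.

```python
# Toy sketch only: a single state variable pushed by a constant driver plus
# noise toward a fixed threshold. All parameter values are hypothetical.
import numpy as np

def first_crossing(driver_rate, noise_sd, threshold=1.0, years=200, seed=1):
    """Return the first year the state reaches the threshold, or None."""
    rng = np.random.default_rng(seed)
    state = 0.0
    for year in range(1, years + 1):
        state += driver_rate + rng.normal(0.0, noise_sd)
        if state >= threshold:
            return year
    return None  # never crossed within the horizon

# Same mechanism, different assumptions, very different outcomes.
for driver, noise in [(0.005, 0.00), (0.005, 0.05), (0.002, 0.05), (0.000, 0.05)]:
    print(f"driver={driver:.3f}, noise={noise:.2f} ->",
          first_crossing(driver, noise))
```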

Upshot

The only way “could or may” leads to “must” would be if each such article began with “must avoid this or that” and then proceeded to demonstrate how to undertake really-existing error avoidance with respect to the could-events and might-be’s.

Regulating AI models

Regulators should also ensure that AI products and services compete on a level playing field with non-AI products and services, including human-provided services. Sectoral regulations on liability, professional licensing, and professional ethics should apply equally as is appropriate to both AI and non-AI solutions. For instance, hiring decisions and credit decisions must be subject to the same rules against discrimination and bias, no matter whether they are made by humans or AI. Likewise, financial advice should be subject to similar kinds of regulation regardless of whether it is provided by humans or AI. This may reduce AI use in some sectors, and simultaneously avoid the degradation of service standards through the use of AI. . . .

There are cases where human workers are penalized for discounting the analysis of an AI solution in their workplace, creating a lopsided liability burden. For instance, nurses in some US hospitals can disregard algorithmic assessment of a patient’s diagnosis with doctor approval but face high risks for such disregard as they are penalized for overriding algorithms that turn out to be right. This may lead nurses to err on the side of caution and follow AI solutions even when they know they are wrong in a given instance. . .While these are private penalties upheld by hospital administrations, there has been at least one case where a nurse was held responsible by an arbitrator for a patient’s death because she did not override an algorithm. The arbitrator held that she was pressured by hospital policy to follow the algorithm, and thus her employer was directed to pay damages to the patient’s family.

To avoid situations where humans defer to AI against their better judgment, liability frameworks should be neutral to ensure that technology follows sectoral regulation and not the other way round. AI technology should not be applied in circumstances in which it does not meet regulatory standards.

Vipra, J. and A. Korinek (2023). Market Concentration Implications of Foundation models: The invisible hand of ChatGPT. Center on Regulation and Markets Working Paper #9, Center on Regulation and Markets at Brookings, Brookings Institution, Washington DC. (Accessed online at https://www.brookings.edu/tags/center-on-regulation-and-markets-working-papers/)

I agree with the above passage, at least as far as the authors go. The problem is they do not go far enough. Namely, the regulation of AI foundation models must also go beyond the regulatory standards of the regulator of record. This most certainly holds when the AI models are integrated into the sectors and infrastructures focused on by the authors.

The key to understanding how the authors stop short of going far enough is their own example of the nurse. The nurse is actually a reliability professional, one of whose functions is to correct for regulatory error or lapses in standards. Irrespective of problems with any specific AI algorithm, no one can or should expect regulatory standards to be all-covering or comprehensive at any point in time in such dynamic fields as healthcare.

In practical terms, this means there is not just the risk of regulatory non-compliance by real-time professionals, like nurses; there is also the risk of compliance with defective regulations. Either way, the importance of the time from discovery to correction of error reinforces how regulatory functions are dispersed beyond the regulator of record.

Snatching defeat from the jaws of victory

Below is the full abstract from an excellent review of case material on river restoration in Newfoundland and Labrador, Canada:

Despite millions of dollars being spent annually to restore degraded river ecosystems, there exist relatively few assessments of the ecological effectiveness of projects. An evidence-based synthesis was conducted to describe river restoration activities in Newfoundland and Labrador. The synthesis identified 170 river restoration projects between 1949 and 2020. A practitioner’s survey was conducted on a subset of 91 projects to evaluate ecological success. When the perceived success of managers was compared to an independent assessment of ecological success, 82% of respondents believe the projects to be completely or somewhat successful whereas only 41% of projects were evaluated as ecologically successful through an independent assessment. Only 11% of practitioners’ evaluations used ecological indicators, yet managers of 66% of projects reported improvements in river ecosystems. This contradiction reveals a lack of the application of evidence to support value-based judgments by practitioners. Despite reporting that monitoring data were used in the assessment it is doubtful that any meaningful ecological assessment was conducted. If we are to improve the science of river restoration, projects must demonstrate evidence of ecological success to qualify as sound restoration. River restoration is a necessary tool to ensure the sustainability of river ecosystems. The assessment conducted in this study suggests that our approach to planning, designing, implementing, monitoring, and evaluating projects needs to improve. An integrated-systems view that gives attention to stakeholders’ values and scientific information concerning the potential consequences of alternative restoration actions on key ecosystem indicators is required.

Skinner, S. W., A. Addai, S. E. Decker, and M. van Zyll de Jong. 2023. The ecological success of river restoration in Newfoundland and Labrador, Canada: lessons learned. Ecology and Society 28(3):20. https://doi.org/10.5751/ES-14379-280320

My problem is not their main finding: “Despite reporting that monitoring data were used in the assessment it is doubtful that any meaningful ecological assessment was conducted.”

I can well believe it. What troubles me is that phrase, “only 41% of projects were evaluated as ecologically successful through an independent assessment.”

I don’t know what world the authors or you, the reader, live in. But in my world, a 40% success rate in river restoration projects is huge! I mean, really significant. Indeed, I come from a project implementation background where conjoining “success” and “ecosystem restoration” is like waving a red flag in front of a phalanx of critics at the ready to disagree.

This project victory (if indeed confirmed) is in no way to be characterized, I feel, as “only 41%”–a phrase repeated later in the body of the article.

When ignorance does more than you think

Unstudied conditions are avoided as vigilantly as possible—right now, when it matters—by control room operators of large critical infrastructures mandated to operate reliably and safely systemwide. Having failed to fail because an operator was behaving ignorantly is orthogonal to high reliability management.

–That said, ignorance has functions in large socio-technical systems—but in ways not captured by the happy-talk of trial-and-error learning and Experiment! Five under-recognized positives deserve highlighting:

(1) A longstanding proposition in organization theory has been that operators and managers cannot know everything and something like bounded rationality is required in order to undertake decisions and manage. More, a mandate for comprehensive decisionmaking would undermine reliability management at the complex system level, not enhance it.

It is in such senses that the operations of other large infrastructures with which a control room is interconnected are largely, if not completely, “unstudied conditions” for that control room. Real-time management by a control room is so knowledge-intensive that its operators cannot be expected to understand just as intensively how the other interconnected infrastructures and their control centers operate.

(2) The comfort zone of control room operators includes managing nonmeasured or unmeasurable uncertainties so as to stay out of unstudied conditions—unknown unknowns—about which system operators are by definition ignorant.

The uncertainties are not denominated as calculable risk, but still operators may know more about consequences than likelihoods, or vice versa. Operators undertake uncertainty management because they can and do differentiate uncertainties.

(3) Large system control operators do innovate even within their comfort zone. We see their improvisation in control room assembly of options just-in-time under conditions of high volatility (high unpredictability or uncontrollability in the outside environment). In fact, the evolutionary advantage of control rooms lies in the skills and expertise of their operators to operationally redesign in real time what is otherwise inadequate technology or regulation.

There is a kind of learning-through-error-management going on, but the learners do so by avoiding having to test the limits of system survival. Professionals will not deliberately chance the first error becoming the last trial (i.e., trial-and-error as probe-and-explode).

Certainly the view–“It’s almost impossible to innovate if you’re not prepared to fail”–is orthogonal to the innovation-positive we observed in critical infrastructures.

(4) That said, some unknown-unknowns may be key to something like an infrastructure’s immune system for managing under variable risk and uncertainty.

The complex and interconnected nature of large socio-technical systems suggests that “low-level” accidents, lapses or even sabotage may be underway that systemwide reliability professionals–like control room operators and their support staff–do not (cannot) observe, know about, or otherwise appreciate. This is less “ignorance is bliss” than ignorance as the ability to put up with what is disliked without knowing it, while reliability is maintained by other means.

(5) Last but not least: When unstudied conditions and unknown-unknowns are feared because of the awful consequences associated with behaving ignorantly, the ensuing dread promotes having to manage dangerous complex technologies more reliably and safely than theories of tight coupling and complex interactivity suggest. Wide societal dread of systemwide failure takes on a positive function in these cases, without which the real-time management of dangerous technologies would not be warranted, let alone warrantable.

(It’s at this point that someone complains I’m advocating “the manufacture of dread for the purposes of social control through taken-for-granted technologies.” Which is oddly unreflexive on their part if they really believe what they say, since the very infrastructures they criticize enable them to render such judgment, here and now.)

–The upshot of the five positives is this. Notions of experimentation and innovation are recast in the face of unstudied conditions, and they vary substantially from Experiment! Adapt! Be resilient! Each exhortation may be true as far as it goes, but none goes far enough in specifying the with-respect-to-what desideratum.

Why BlackRock, not sovereign debt, is the global crisis

–Let’s start with a quote:

Zambia defaulted on interest payments to some of its private lenders in November 2020 when private creditors refused to suspend debt payments. In February 2021, Zambia applied for a debt restructuring through the Common Framework, but little progress has been made on the negotiations as large private creditors, such as BlackRock, have so far refused to reach an agreement on debt relief.

BlackRock, headed up by Larry Fink, is the largest of a number of bondholders who are refusing to cancel Zambia’s debt, despite lending to the country with interest rates as high as 9% (in comparison to wealthy countries like Germany, UK and USA who were given loans at 0-2% interest in the same time period) potentially making huge profits. Debt Justice estimates that BlackRock could make up to 110% profit if repaid in full.

Meanwhile, Zambia is experiencing devastating impacts of the climate crisis such as flooding, extreme temperatures and droughts, which are causing significant disruption to livelihoods and severe food insecurity. Unsustainable debt levels mean the country lacks many of the resources required to address these impacts. This decade, Zambia is due to spend over four times more on debt payments than on addressing the impacts of the climate crisis.

https://debtjustice.org.uk/wp-content/uploads/2022/10/Debt-and-the-Climate-Crisis-Briefing-October-2022-UPDATED.pdf
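To get a feel for where an estimate like “up to 110% profit” can come from, here is a back-of-the-envelope sketch with purely hypothetical numbers (the Debt Justice figure rests on Zambia’s actual bond prices and terms, which are not reproduced here): a creditor buys distressed bonds below face value, collects the 9% coupon for some years, and is then repaid in full.

```python
# Hypothetical illustration only; not the Debt Justice calculation.
face_value = 100.0      # par value repaid "in full" (hypothetical)
purchase_price = 60.0   # assumed distressed purchase price (hypothetical)
coupon_rate = 0.09      # the 9% interest rate cited in the briefing
years_held = 5          # assumed holding period (hypothetical)

total_received = face_value + coupon_rate * face_value * years_held
profit_pct = 100 * (total_received - purchase_price) / purchase_price
print(f"Received {total_received:.0f} on {purchase_price:.0f} invested "
      f"-> profit of roughly {profit_pct:.0f}%")
# With these assumptions the profit is roughly 140%; other purchase prices
# and holding periods put it above or below the ~110% cited above.
```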

Only two nations, the USA and the PRC, have GDPs greater than the wealth managed by BlackRock (whose recent assets are reported to be $10 trillion or more). It’s also said that the ten largest asset-management firms together manage some $44 trillion, roughly equivalent to the combined annual GDPs of the USA, PRC, Japan and Germany.

–Yes, of course, we still must say that the current sovereign debt crisis could be better managed. Fair enough.

It would be more accurate to say, however, that BlackRock is actually being managed in ways the sovereign debt crisis can’t be. BlackRock does have a C-suite, right? Why then not start with BlackRock? After all, it rose to become an undisputed shareholder superpower only after the last financial crisis of 2008.

–In other words, think of BlackRock as the global financial crisis currently underway and the “sovereign debt crisis” as the smoke-and-mirrors to get the rest of us to believe otherwise.

We know exactly who benefits from placing the blame on the Government of Zambia’s fiscal and monetary management, when the Global Behemoth BlackRock is managed even worse in terms of self-interest.

Instances of the importance of knowing what counts as a mistake under conditions of pervasive uncertainty

1. Rescuing error from uncertainty

I

How do you know you’ve made a mistake if caught in the grip of everything else being uncertain? You know more, of course, after the fact, when consequences are clearer. But how do you know in real time that this or that action is a mistake to be avoided in the fog of war or the like that you find all around you?

It is important, I think, to insist that real-time error avoidance is still possible even under conditions of widespread systemwide complexity and uncertainties (and not just by way of later hindsight).

II

Paul Schulman and I recently undertook research on a set of interconnected critical infrastructures in Oregon and Washington State. We ended up focusing on key interconnectivity configurations, shifts, connected system control variables, and changing performance standards as operating conditions shifted from normal, through disrupted, into failed, then if possible into recovery and a new normal for the interconnected systems.

The upshot is that not only do major uncertainties and risks change with shifting interconnectivities, but new errors to be avoided emerge as well, and clearly so.

For staff in the interconnected critical infrastructures, there are conditions under which it is a shared error for infrastructure operators not to micro-coordinate by way of improvising and communicating laterally (not just up and down a chain of incident command). This holds even if (especially if) emergency response and initial service restoration are not guaranteed after an interinfrastructural shock.
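Schematically (and this is my own illustrative sketch, not the framework in Roe and Schulman 2023), the shifting operating states and the state-specific errors to avoid might be organized along the following lines. The only error listed is the micro-coordination error named above, and mapping it to the disrupted and failed states is an assumption made purely for illustration.

```python
# Illustrative sketch only, not the Roe-Schulman interconnectivity framework.
from enum import Enum, auto

class OperatingState(Enum):
    NORMAL = auto()
    DISRUPTED = auto()
    FAILED = auto()
    RECOVERY = auto()
    NEW_NORMAL = auto()

# The progression described above; recovery and a new normal are not guaranteed.
PROGRESSION = [
    OperatingState.NORMAL,
    OperatingState.DISRUPTED,
    OperatingState.FAILED,
    OperatingState.RECOVERY,
    OperatingState.NEW_NORMAL,
]

SHARED_ERROR = ("Not micro-coordinating laterally, i.e. improvising and "
                "communicating only up and down the incident-command chain")

# Which states carry which errors is an assumption here; new errors emerge
# as interconnectivity shifts.
ERRORS_TO_AVOID = {
    OperatingState.DISRUPTED: [SHARED_ERROR],
    OperatingState.FAILED: [SHARED_ERROR],
}

for state in PROGRESSION:
    for error in ERRORS_TO_AVOID.get(state, []):
        print(f"{state.name}: avoid {error}")
```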

III

So what?

I know I have been too casual in bandying about global descriptions of “systemwide uncertainty, complexity, and conflict.” Error avoidance, in contrast, can be a far better site indicator for policy and management on the ground.

2. It’s not about unfogging the future

–We are so used to the idea that predicting the future should not be erroneous that we forget how murky and unclear the present is. To paraphrase Turgot, the French Enlightenment philosopher and statesman, we have enough trouble predicting the present, let alone any such future! Because the present is not one-way-only, why expect anything less for the future when it unfolds?

But there’s murky, and then there is knowing better the way and how of that “murky.”

–Since the complexity of policy and management means there is more than one way to interpret an issue, the more interpretations we acknowledge the less mistaken we are about the nature and limits of that complexity. That is to say, we are clearer about why multiple scenarios are our aim, namely, ones that “enable us to reframe our current understanding of our environment, appreciating the power of uncertainty and its capacity to inspire fear and wonder” (Finch and Mahon 2023).

The operative term for me is “reframe the present,” again and again, and then see what sticks.

3. The crisis sequence as an abstraction

–The painter Gérard Fromanger noted that a blank canvas is “black with everything every painter has painted before me”. If, as painter František Kupka felt, “to abstract is to eliminate,” then stripping away the layers of black-on-black is akin to abstracting blankness. One implication: There is nothing more abstract in the art of change than “wiping the table clean.”

–Yet those sheets of empty, clean paper held up high by the crowd? Make no mistake: It is abstracting what can’t be changed, right now.

–The chief risk manager is a curator of artifacts called risk scores for this or that part of the installations called critical infrastructure. Each score is akin to the Surrealists’ frottage, a smudged impression made on a piece of paper by rubbing with a pencil or crayon over an uneven surface. It is a mistake to see the smudge as a mistake.

–Take a linear sequence—beginning/middle/end—and move that “middle” upwards. You’ve made a triangle of any crisis scenario. Move it downwards, and it sags into a V, as when the story can never get out of in medias res. Shift it outwards, and it is neither right nor left, but in front, like climate activists in the vanguard.

This happens, that happens, and that’s it. So said, the crisis sequence comes to sound like grudges passed off as threats. Now that’s a mistake.

Sources

M. Finch and M. Mahon (2023). “Facing the Strategic Sublime: Scenario Planning as Gothic Narrative” in Vector 297 (2023) at https://vector-bsfa.com/current-issue/

E. Roe and P.R. Schulman (2023). “An Interconnectivity Framework for Analyzing and Demarcating Real-Time Operations Across Critical Infrastructures and Over Time.” Safety Science online.

If utopias are premature truths

–Imagine two parallel worlds so alike that they would have been exactly the same, were it not for one line in Shakespeare’s Hamlet. One world has “I am thy father’s spirit;” the other, “Ich bin dein Papas Spook” (roughly, “I am your dad’s spook”). The former world does not know of the latter’s line, nor the latter of the former’s.

So what? The point is that both readings and their respective commentaries lurk as possible, because Hamlet’s complexities are multiple.

–Adorno starting an opera on Tom Sawyer, Picasso painting Buffalo Bill Cody, Sartre preparing a screenplay on Freud, Benjamin Britten facing the prospect of becoming a bandmaster (or Samuel Beckett a commercial airplane pilot), Coleridge and fellow poet Robert Southey planning an egalitarian community on the shores of the Susquehanna, Goethe’s ideas to clean up the streets of Venice, Kafka drafting rules for a socialist workers’ cooperative, and Abraham Lincoln and Hedy Lamarr securing their respective patents–all are connected. By me.

Even though the concatenated clauses have not appeared before or been considered as such. You can’t test the real-time I make.

–We’ve over-invested in economic growth on the premise that it takes us further. But good-enough betterment goes even further. The 19th-century French poet Lamartine wrote that “Utopias are often just premature truths”—and that prematurity is the shortcoming I have in mind: Stopping short at progress is premature. It lacks betterment’s yes-but grounded in the complexities now.

Sudden change that can’t be plagiarized

As if lives cannot suddenly and startlingly change for the better, but they can suddenly and shockingly change, sometimes irreversibly, for the worse.

Adam Phillips (2021). On Wanting to Change, p. 69

The epigraph is suggestive: Sample people–on this planet of 8 billion and more–whose lives have in fact suddenly changed for the better. What might they have to tell people who insist they know that what comes next is as bad, and probably worse?

My own answer: Sudden positive change is a real-time performance that can’t be plagiarized by others. Negative change is photocopied all the time, everywhere. In Phillips’ terms: The former’s acknowledgement differs from the latter’s knowledge.

When new socio-technical systems are offered on the promise of being more reliable and sustainable, having established a track record in preventing system failures becomes even more critical

At best, socio-technical systems are reliable only until the next failure ahead. This means that preventing that next failure matters just as much for establishing a track record in failure prevention as does preventing future failures because, say, the systems are more sustainable than they are now.

Further, this is a track record of real-time system operators who manage reliably because they learn and unlearn. They are reliable because of (not in spite of) learning that they didn’t know what they initially thought they knew, that they in fact knew more than they had first thought, or both.

What a senior risk manager told us applies to the challenge of reliability management in key socio-technical systems, now and ahead: “Really, just because we haven’t had a meltdown doesn’t mean our practices were effective”.