. . . or barbarism (updated)

For a long time, we’ve had posed to us the binary choices of civilization or barbarism and socialism or barbarism. Later choices included progress or barbarism, capitalism or barbarism, liberalism or barbarism. More recent ones include: degrowth or barbarism, anti-Zionism or barbarism, anti-racist feminism or barbarism, climate justice or barbarism, and digital socialism or digital barbarism.

What an odd way to keep alive the notion that civilization persists through changing.

Economics assumes substitutability; high reliability practices, in contrast, assume nonfungibility (resent)

Economics assumes a theory of substitutability: goods and services have alternatives in the marketplace. Infrastructure reliability, in contrast, assumes practices for ensuring nonfungibility: nothing can substitute, right now, for the high reliability of the critical infrastructures without which there would be no markets for those goods and services in the first place. There is a point at which high reliability and trade-offs are immiscible, like trying to mix oil and water.

One way of thinking about the nonfungibility of infrastructure reliability is that it’s irrecuperable economically in real time; it cannot be cashed out in dollars and cents in the here-and-now without becoming something different from high reliability. Real time, from this perspective, is an impassable obstacle to monetizing the tasks of infrastructure control room operators, then and there, as the managing is actually being undertaken.

Which is to say, if you were to enter the market and arbitrage a price for the high reliability of critical infrastructures, the market transactions would be such that you’d never be sure you were getting what you thought you were buying.

Time after time, contradictions must be placed. . .

. . . at the very center of analysis. The idea that current capitalist pathologies arise because the capitalism of markets and productivity has disappeared. The idea that Amazon should be broken up because of its monopoly power and anti-competitive practices, as professed by a think tank whose goal is nothing like a competitive market society. The idea of Eurocommunist parties that the working class isn’t to be praised but contested by, among others, middle class workers.

Time after time, making human behaviour more predictable for the client of prediction (the manager, the police officer) often means making life and work more unpredictable for the target of prediction (the employee, the urban citizen).

https://journals.sagepub.com/doi/10.1177/20539517231171053

Meanwhile I’ll be plotting to outwrite it; I want to be the first human being to imitate ChatGPT perfectly.

https://www.theguardian.com/books/2023/may/20/m-john-harrison-i-want-to-be-the-first-human-to-imitate-chatgpt-wish-i-was-here?utm_term=6469c17386bac9427580944744a8948a&utm_campaign=Bookmarks&utm_source=esp&utm_medium=Email&CMP=bookmarks_email

Large proportions of the Chinese collection are perhaps copies in the eyes of those collectors and dealers, who believe that authentic African art has become largely extinct due to diminishing numbers of active traditional carvers and ritual practices. However, the ideological structure and colonial history of authenticity loses its effects and meanings in China, where anything produced and brought back from Africa is deemed to be “authentically African”.

https://www.tandfonline.com/doi/full/10.1080/13696815.2021.1925089

E.M. Forster in Howards End: “The most successful career must show a waste of strength that might have removed mountains, and the most unsuccessful life is not that of the man who is taken unprepared, but of him who has prepared and is never taken.” This is about as far as you can get from perfect reliability as the fault-free fluency of classical ballet.

Examples of “contrary to what you think” in infrastructure studies

I keep being told infrastructures are complex technologies, even though they’re manifestly socio-technical and not just because the technologies have to be managed (i.e., this is more than “risk is socially constructed”).

We’re to believe regular operations are routine operations, but if routine means invariant, there is nothing invariant about normal infrastructure operations.

System reliability is probabilistic in the view of engineers, even though control room operators act deterministically, i.e., there’s a point at which system reliability cannot be traded off against other factors or else people would die.

I was assured that for reasons of tractability, the modeling of infrastructure operations has two stages, normal and failed. In actual practice, the temporary disruption of systemwide services–hardly ever modeled, let alone sufficiently–identifies highly relevant conditions for returning to normal operations or tipping into failure.

Engineers said the probability of infrastructure failure during post-disaster recovery of assets and operations was higher than the probability of failure during normal operations. Think: re-energizing line by line during a table-top Black Start exercise. Actually, nonmeasurable uncertainties–nothing like probabilities–are faced by operators post-disaster (the Black Start exercises for electric transmission infrastructure assume no asset destruction, as improbable as that is).

Consider the frequent “restore.” What’s it with respect to: interrupted services restored back to normal? Or services to be initially restored after major system failure? Or key equipment or facilities restored after a non-routine outage as part of normal maintenance and repair activities? Restore is one of the most ambiguous terms in infrastructure studies.

Realities I, II and III in emergency management (resent)

You are on one of the upper floors of a huge skyscraper, looking out on the morning. That is Reality I: you are the observing subject looking out at reality. After a point, you realize that dot in the distance is actually a plane headed toward you, that morning at the World Trade Center. That is Reality II: you become the object of reality, in the grip of the real, and no longer just the observer.

There is, however, Reality III. This is that of the air traffic controllers during 9/11. Neither the observers of the first reality nor the objects of the second, the professionals achieved the unprecedented without incident that day. They were instructed to land all commercial and general aviation aircraft in the United States—some 4,500 aircraft—and did so.

Without overdrawing the point, so too do we demand that professionals land those water, electricity, transportation, telecommunications, and many more critical services every day without major incident.

Not to worry: As for ChatGPT, they know what they’re doing

Recently, I came across the work of a graduate student, whose identity it is charitable to keep hidden. The text was written rather well and included a critical review of the experimental studies conducted on one of my research topics: therefore, my name and the name of the colleague who worked with me on that line of research, Marco Marini, often appeared in the text, and consequently also in the final bibliography. The consultation of the references, however, had in store a few surprises. Among other entries, the following were recorded:

–Marini, M. (2013). When it’s better to choose the one you love: The effect of attractiveness biases in consumer choices. Judgment and Decision Making, 8(5), 476-485.

–Marini, M. (2019). How to get people to take risks? A choice-based measure of risk preference. PloS One, 14(1), e0209983. doi: https://doi.org/10.1371/journal.pone.0209983

–Marini, M. (2019). Luring to a suboptimal option: The effect of payoff reduction in a risky choice framing. Judgment and Decision Making, 14(2), 198-207.

–Marini, M. (2020). The asymmetrically dominated compromise effect in a dynamic setting. Journal of Economic Psychology, 76, 102-257.

–Paglieri, F. (2009). The attractiveness of decoys in economic contexts: An experimental investigation. Judgment and Decision Making, 4(4), 335-342.

Formally, this bibliography extract is flawless: the entries are correctly formatted according to the standards of the American Psychological Association (APA), the relevant information is all present, the articles are consistent with the topic of the student’s assignment, and the titles of the various contributions are, objectively, quite intriguing. The only problem is that… none of these publications exist!

The incident was neither a brave, subversive act of provocation (to demonstrate that university instructors no longer read carefully the written assignments of their students), nor a symptom of terminal stupidity in the student (only a very dumb cheater would try to falsify the references of the very same people tasked with evaluating their work): instead, it was the outcome of a naïve and inappropriate use of generative AI. The student, after writing the assignment themselves and inserting the appropriate references in the text, using the author-date APA standard, had incautiously asked ChatGPT to prepare the reference list, giving it their own text as part of the prompt. Unfortunately, the software compiled a bibliographic list in full compliance with APA standards, but without any attention to the truthfulness of the information included therein.

Here, however, we are not interested in the student’s misadventures, but rather in how ChatGPT produced its output, which was certainly not random: there is method to this madness. Firstly, the journals in which the fake contributions would have appeared are plausible, both thematically, and because Marini and I have already published in those venues in the past, or in very similar ones. Secondly, the volume numbers that are mentioned refer to issues that have indeed been released, and usually the numbering and year of publication match; in one case, the entire reference (PloS One, 14(1), e0209983. doi: https://doi.org/10.1371/journal.pone.0209983) refers to an existing article, except that it is a study on a completely different topic, i.e. gender barriers in research at the South Pole (Nash, M., Nielsen, H., Shaw, J., King, M., Lea, M.-A., & Bax, N. (2019), “Antarctica just has this hero factor…”: Gendered barriers to Australian Antarctic research and remote fieldwork).

The inconsistencies that emerge upon closer inspection are also revealing: the 2020 article attributed to Marini is listed as appearing between page 102 and page 257, except that there never was a single 155-page long contribution published in that particular journal, and probably not even in others, at least in the field of economic psychology; delving deeper, one discovers that the Journal of Economic Psychology, from 2020 onwards, no longer reports the page numbers of individual articles, but only their identification number, which is composed of a 6-digit code starting with 102, and the code 102257 (that ChatGPT creatively transformed into page numbers, 102–257) corresponds to the editorial of the issue following the one cited in the invented bibliographic reference.

At other times, the system falls prey to ambiguities of meaning: the decoy effect, which was the main focus of the student’s paper, is also referred to as the attraction effect in the literature, and the word “attraction” evokes the semantic field of affects, which instead has nothing to do with the technical phenomenon in question (i.e., a shift of preferences towards an option that is manifestly superior to another inserted ad hoc, called decoy). It is because of this semantic ambiguity that ChatGPT came up with a title like “When it’s better to choose the one you love: The effect of attractiveness biases in consumer choices” – a wonderful title, by the way, which I will certainly use, as soon as the opportunity presents itself.

In short, this false output is not due to anomalies or errors in the functioning of the software, but on the contrary it illustrates perfectly what ChatGPT is built to do (and does very well): generate linguistic strings (in this case, bibliographic entries) that have the maximum probability of satisfying the user’s request, based on similar instances present in the (huge) database to which the program had access during training. What ChatGPT does not do, and cannot do due to the way it functions (at least for the time being), is consulting the real world or an internal representation of it: the system does not work by checking the state of the world and describing it, but rather by constructing responses that are maximally consistent with the vast mass of linguistic data at its disposal, whose adherence to reality is by no means guaranteed.

https://link.springer.com/article/10.1007/s13347-024-00743-x (footnote deleted for readability)

I don’t know about you, but I find that phrase–“a wonderful title, by the way, which I will certainly use, as soon as the opportunity presents itself”–also kind of worrying.

When surprise is a policy optic

It’s long been recognized that large complex systems are surprising even to their managers and real-time operators. The unexpected often happens. Even the most experienced operators, who say little shocks them by this point, find themselves wondering how this happened, now.

This is a very suggestive finding, in my view. The financing and construction of homes and flats in the San Francisco Bay Area is a complex housing sector. All manner of politicians, regulators, investors, advocacy groups, developers, jurisdictions, localities and residents interact, and this unsettled and unsettling variety is itself often pointed to as proof-positive of the complexity. In this sector, I’ve been told, everyone has stories to tell about the unexpected.

What if, then, we recast the stakeholder complexity in terms of the surprises experienced by those involved? Instead of where housing prices are going, we talk about: COVID comes, things shut down including construction, and yet the price of lumber skyrockets in ways that shock even those in the know (think supply chain interruptions).

So what? This implies unexpected ripple effects by way of inter-linked surprises, which in turn raise at least one methodological question (surprising to me, anyway): When it comes to this construction sector in the Bay Area, is it better to say we have networks or communities of surprises?

The energy transition as a different conversion story

If average global temperature rises are to be limited in line with the 2015 Paris agreement, climate finance will need to increase to about $9tn a year globally by 2030, up from just under $1.3tn in 2021-22, according to a report last year from the Climate Policy Initiative.

https://www.ft.com/content/6873d96e-3e40-45c6-9d84-8ce27b7b23e1

The article from which the above quote is extracted is written as if it were a quest story with beginning, middle and end by way of such funding. In reality, it is a conversion story of before and after a revelation.

For my part, I like my conversion stories upfront: “Any socialist effort to navigate the very real state shift in the climate will require a massive reconstruction and deployment of productive forces. For example, all the major cities that are on a coastline on this planet will have to be moved inland. That means the electrical grids and sewer systems need to be rebuilt. We will need to reimagine urban life on a massive scale. It’s not wrong to point that out.” (https://www.the-syllabus.com/ts-spotlight/the-right-climate/conversation/jason-moore)

Nor do you need to be a socialist to hear all kinds of people insisting we need to change the very way we live, $9tn a year or not. For us, it is to be as in: Saul the Jew before; Paul the Apostle after.

The one great virtue of being blunt is of course patent: it’s clear all manner of blunders, contingencies, not-knowing, and inexperience will be incurred in this drastic march from the sea. Where in this article are the parallel mistakes, accidents etcetera lying in wait for the $9tn per annum?

Source

For more on conversion narratives, see Adam Phillips (2022). On Wanting to Change. Picador Paper

Rescuing error avoidance from risk management under emergency conditions

Introduction

How do you know you’ve made a mistake if caught in the grip of everything else being uncertain? You know more, of course, after the fact when consequences are clearer in hindsight. But how do you know in real time and in these fogs of struggle and strife that this or that action on your part is a mistake to be avoided, right now and here?

It is highly relevant for the purposes of policy and management to insist that real-time error avoidance is possible even under particular (but not all) conditions of widespread systemwide complexity and uncertainties.

Research Findings

I

Paul Schulman and I have been undertaking research on a set of interconnected critical infrastructures in Oregon and Washington State. The upshot is that not only do major uncertainties and risks change with shifting interconnectivities, but new errors to be avoided emerge as well, and clearly so for some cases.

Our interviews with infrastructure control room operators and emergency managers show that real-time surprises are widespread in flooding, wildfire, road and other transportation disruptions, levee breaches, and transmission failures in electricity and water.

But, as many also told us, there can be and often are an urgency, clarity and logic about what to do by way of just-in-time or just-for-now interventions. What needs to be done is at times evident to front-line infrastructure staff and emergency management professionals, when it is not so to those in incident command centers or higher-level management or official positions. For these experienced front-line staff and in these circumstances, not doing what needs to be done constitutes an error to be avoided in real time. Such interventions are, in other words, opportunities that cannot be missed.

II

What are those circumstances and conditions of urgency, clarity and logic?

The circumstances identified by our interviewees focus on an infrastructure’s key interconnectivities with the infrastructures it depends upon and which depend upon it. More specifically, the focus and concern center on shifts in interconnectivity involving their respective systemwide control variables, like frequency and voltage for electricity transmission, main-pipe pressures for large-scale water supply or natural gas systems, and network bandwidth in telecommunications.

During normal operations, these control variables are already interconnected. What defines system disruption and failure is when the interconnectivities shift in unignorable ways. Firefighters setting their firebreaks under more accessible rights-of-way, the same rights-of-way created for electricity transmission lines, can create conflict between the backfires the firefighters need and the voltage and flow paths along the transmission lines. Because they share the same waterway, clearing a river passage for ongoing marine transport and re-opening a major port along the way is important to both infrastructures.

When these systems as systems are disrupted, or fail outright, restoring or recovering what had already been interconnected system control variables requires urgent and often improvisatory behavior by all manner of infrastructure operators, from the control rooms on down to field staff. These improvisations may be last-second one-offs that save the day, as in battle. From our perspective, they are better understood as part and parcel of the wide range of workarounds that line operators and field staff undertake–beginning in normal operations, routine maintenance and non-routine repair–to ensure safe and reliable operations at the system level.
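The notion of systemwide control variables staying within bandwidths, and of excursions signaling disruption, can be caricatured in a few lines of code. This is a hypothetical sketch only: the variable names and numeric bands below are illustrative assumptions, not values from the research.

```python
# Hypothetical sketch: systemwide control variables and their operating bands.
# The bands below are illustrative assumptions, not actual operating limits.
CONTROL_BANDS = {
    "grid_frequency_hz": (59.95, 60.05),     # electricity transmission
    "main_pipe_pressure_psi": (40.0, 80.0),  # large-scale water supply
}

def out_of_band(readings: dict[str, float]) -> list[str]:
    """Return the control variables whose readings fall outside their bands."""
    return [
        name
        for name, value in readings.items()
        if not (CONTROL_BANDS[name][0] <= value <= CONTROL_BANDS[name][1])
    ]

# A frequency reading in band, a pipe-pressure reading out of band:
flagged = out_of_band({"grid_frequency_hz": 60.0, "main_pipe_pressure_psi": 95.0})
print(flagged)  # flags the water-pressure excursion
```

The point of the caricature is only that disruption shows up as unignorable excursions in variables that control rooms already track jointly during normal operations.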


III

In particular, we found:

–Under conditions of shifting or shifted interconnectivity, it would be an error for infrastructure operators and emergency managers not to establish lateral communications with one another and undertake improvisational and shared restoration activities where needed, even if no official arrangement exists to do so.

–In addition, there are also errors of anticipation and planning. It would be a management error in anticipation and planning not to provide robust and contingent interinfrastructure communication capabilities, including phone connections between the control rooms of interconnected infrastructures. This communication, it has been demonstrated, is also greatly facilitated by establishing lateral interinfrastructure personnel contacts prior to emergencies.

–Further, it would be an error not to have some contingent resources for restoration and recovery activities such as vehicles, portable generators and movable cell towers in differing locations available across infrastructures if needed, particularly where chokepoints of interconnected infrastructures are adjacent to each other.

While these three errors are not the entire set, our interviews and prior research convince us that they are of primary importance and are to be avoided because they seriously degrade effective resilience in emergency prevention and response.

Three Important Policy and Management Implications

I. Error avoidance is not risk management

Let’s start with a US example. It would be an error not to include in the mandated county/city hazard mitigation plan a proposal to replace a severely vulnerable culvert with a new bridge, should the culvert be washed away in new flooding and when federal funds would be available for bridge replacement under those conditions. Put this way, there is a role for forward planning in anticipating and taking advantage of already existing funding and construction opportunities.

Or, from the other direction, a rural town that did not anticipate accelerated gentrification after a major wildfire in its hazard mitigation plan will have to deal with the consequences of not having prepared for this gentrification (e.g., newly added residential water and wastewater demands and transportation requirements).

In both cases and from this perspective, the mandated hazard mitigation plan is a problem definition, parts of which are latent until activated during immediate emergency response, initial service restoration or longer-term recovery. Collapsing either example under the category of “risk management” is to miss the fact that these errors (or, if you prefer, missed opportunities) are not to be managed, more or less, like risks, but rather categorically, as yes or no. Did you avoid or did you not?
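The yes/no character of error avoidance, as against the more-or-less character of risk management, can be put in schematic form. The following is a toy sketch of the two logics, not anything drawn from the research; the function names and numbers are hypothetical.

```python
# Toy contrast between the two logics (hypothetical sketch).
# Risk management grades outcomes: an expected-loss calculation that can
# always be traded off at the margin, more or less.
def expected_loss(probability: float, consequence: float) -> float:
    """Risk-management logic: losses are weighed against probabilities."""
    return probability * consequence

# Error avoidance is categorical: a predicate either satisfied or violated.
def error_avoided(required_actions: set[str], actions_taken: set[str]) -> bool:
    """Error-avoidance logic: did you take every required action, or not?"""
    return required_actions <= actions_taken  # subset check: no middle ground

# A risk can be "acceptably" managed even when its expected loss is nonzero...
residual = expected_loss(probability=0.02, consequence=1_000_000)  # 20000.0

# ...but an error is avoided only categorically, yes or no.
required = {"include culvert replacement in hazard mitigation plan"}
print(error_avoided(required, set()))      # False: not avoided
print(error_avoided(required, required))   # True: avoided
```

The design point of the caricature is that the second function returns a boolean, not a magnitude: there is no residual to be traded off once the opportunity is missed.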

II. An example of how distinguishing between error avoidance and risk management is also important for locality residents affected by the disaster

Friends are telling us wonderful things about their recent move to a rural area in the Pacific Northwest. They were also surprised, given all the rain, about the high fire hazard risk mapped for their area and nearby environs. As in California, such maps created political and insurance company push-back. And there are methodological issues in mapping fuel loads without knowing point-of-ignition information in advance.

So what to do? In their case, they talked about how they and neighbors agree in advance to help each other should a wildfire threaten (ignited, say, by vehicle sparks along the roadside). If one neighbor was threatened, all would move to that site to help out.

Such self-organizing happens all over the world and there is nothing extraordinary in this example, except one thing that deserves highlighting: What is going on here (and I suspect many other examples) is not managing the risks associated with fire hazards but rather avoiding known errors when faced with fire hazards, whatever the associated risks.

These errors include the aforementioned need for robust communications, in this case among the neighbors, and the need to have firefighting tools and associated equipment distributed and accessible beforehand. In addition, it is hoped that here too they and other residents use their county’s hazard mitigation plan to seek federal and state support for improving their lifeline infrastructures (water, electricity, roads and telecoms), should fires and other disasters actually undermine them in the future.

To repeat, it is an error to have missed really-existing opportunities for more robust communications, more dispersed equipment and tools, and greater use of existing planning and funding mechanisms. But why is that distinction important? It implies that there should be dedicated support and staffing to assist such locality-based error avoidance, in addition to and separate from risk management efforts, not least of which being those fire hazard maps.

III. The special institutional niche for infrastructures in error-avoiding disaster management

Those who study major earthquakes, tsunamis, or other place-based catastrophes often remark on how populations left behind self-organize to save lives and provide what relief they can on their own. We have seen this too. What is less recognized, I believe, is the institutional niche that critical infrastructures hold in this group adaptive behavior.

In some cases, the self-organization of groups takes place because there is little government presence before, let alone as, the disaster unfolds. One thinks of the media attention given to earthquakes in some low-income countries.

Self-organizing groups, however, are also observed in disaster situations that destroy longstanding critical infrastructures in middle to high-income countries. Error-avoiding behavior in the form of increased lateral communication and improvisational behavior are witnessed, in particular, among front-line infrastructure staff, emergency managers and some local communities.

I want to suggest that group adaptation in these latter cases differs in at least one under-acknowledged respect. A major part of that self-organization of field crews and the public is to provide initial restoration of some kind of electricity, water, road, communications and other so-called lifeline services, like medical care. This niche of critical infrastructures is already established.

Indeed, what better acknowledgement of society’s institutional niche for interconnected critical infrastructures than the immediate emergency response of trying to avoid all manner of errors in restoring the backbone infrastructures of electricity, water, telecoms and roads?

————-

Acknowledgement. My thanks to Paul Schulman for working through and crafting a number of these points. All errors–!–remain mine.

Reference. For an initial discussion of topics in this blog entry and its source material, see: E. Roe and P.R. Schulman (2023). “An Interconnectivity Framework for Analyzing and Demarcating Real-Time Operations Across Critical Infrastructures and Over Time.” Safety Science (available online at https://doi.org/10.1016/j.ssci.2023.106308)

An example of the high stakes in not thinking error avoidance is the same as “risk management” (major read)

I

Friends are telling us wonderful things about their recent move to a rural area in the Pacific Northwest. They were also surprised, given all the rain, about the high fire hazard risk mapped for their area and nearby environs. As in California, such maps created political and insurance company push-back. And there are methodological issues in mapping fuel loads without knowing point-of-ignition information in advance.

So what to do? In their case, they talked about how they and neighbors agree in advance to help each other should a wildfire threaten (ignited, say, by vehicle sparks along the roadside). If one neighbor was threatened, all would move to that site to help out.

Such self-organizing happens all over the world and there is nothing extraordinary in this example, except one thing that deserves highlighting.

What is going on here (and I suspect many other examples) is not managing the risks associated with fire hazards so much as avoiding known errors when faced with fire hazards, whatever their risks. More formally, risks are to be managed more or less effectively in an emergency; errors in contrast are to be avoided—not more or less, but categorically yes or no. 

II

And what errors are these? They include the need for robust communications among the neighbors and the need to have firefighting tools and associated equipment distributed and accessible beforehand. This may or may not be so in our friends’ case. In addition, it is hoped that they and other residents use their county’s hazard mitigation plan to seek federal and state support for improving their lifeline infrastructures (water, electricity, roads and telecoms), should fires and other disasters actually undermine them in the future.

Ensuring more robust communications, more dispersed equipment and tools, and greater use of existing planning and funding mechanisms are opportunities that must not be missed, if they already exist. In fact, it is an error not to take advantage of them.

And why is that distinction important? It implies that there should be dedicated support and staffing to assist such locality-based error avoidance, in addition to and separate from risk management efforts, not least of which being those fire hazard maps.