A more productive urban bias when it comes to pastoralists?

In a very fine article on degrowth strategies in urban areas, “Strategic planning for degrowth: What, who, how,” Federico Savini of the University of Amsterdam concludes:

The concepts of synergy and regionalization are already familiar in strategic planning theory. They stress that, to trigger strategic processes, it is essential for planners to grasp the existing landscape of prefigurative practices, directly engage with them, connect them through frames of meaningful interaction, and define a common understanding of the territory in which those practices coexist. In this paper, I argued that strategic spatial planning needs to go back to these two foundational processes to be able to address the extreme urgency of today’s social and ecological challenges (i.e., ecological breakdown and its related socio-political implications). Yet, it needs to do so by focusing on those practices that see reduction as imperative. In cities, these practices are increasingly common. Examples include housing cooperatives, ecological social housing, squats, community agriculture, food sovereignty, collective voluntary simplicity, and networks of care, education, and health.

(Accessed online at https://journals.sagepub.com/doi/full/10.1177/14730952241258693)

Now, I don’t want to make too much of those degrowth “how-to-reduce” practices listed in the quote above.

But aren’t many pastoralists and pastoralist areas still doing the equivalent–and weren’t they doing so long before degrowth moved to where it needs to be on the global policy agenda?

Why are most miracles about healing?

Let’s start with some of the big questions. How do we maintain funding for the welfare system in a non-growing economy? How do we manage the increasing relative costs of welfare? How do we overcome structural and behavioral growth dependencies within the welfare system? How do we transform the welfare state for the better?

Now the thought experiment. What if we “answered” these questions with an observation that only initially seems far afield and entirely tangential? Namely: Unfree labor flows and flows of illegal funds, goods, and services are a stable part of the global economy, not an aberration:

Two fields of scholarship, human trafficking for sexual exploitation and that of forced labour in supply chains, rarely intersect. . . . In showing that unfree labour is a stable aspect of the contemporary global economy, scholars studying labour in supply chains challenge the idea that forced labour is an ‘aberration’ (Phillips and Mieres, 2014: 245) from the normal functioning of labour markets and that of unfree labour as the opposite of free waged labour.

That is, deregulation, liberalization and privatization not only introduce failure regimes–less stable by definition–where there were none before (e.g., once public entities can now go bankrupt, thereby undermining the welfare state), but also provide more stable markets for all manner of unofficial and illegal items.

“Welfare,” “state” and the “welfare state” have indeed been transformed. The official non-growing economy is not the only economy of relevance anymore.

So it has always been, though arguably not as marketized globally as now. So what?

The observation would be banal were it not for its major policy implication. If asked, What makes for better planning ahead?, we should answer: Why even ask, if you can’t learn to plan and respond better for now and the next steps ahead? Or to put the point from the opposite direction: It’s not insignificant, finally, that most miracles are about healing.


Sources.

https://www.sciencedirect.com/science/article/pii/S0921800921001245

https://journals.sagepub.com/doi/10.1177/13505068211020791

https://onlinelibrary.wiley.com/doi/full/10.1111/puar.13388

A brief, but major, methodological reminder about prediction

The reason for what the modelers dub ‘no prediction’ is given in another paper, this one attempting to predict the habitat of the tsetse fly. It emphasizes that not all environmental conditions will have been captured by the model and that, when the environment is too different, the modelers prefer creating a category of ‘no prediction’:

“Mapped outputs record the similarity of each pixel in an entire set of satellite images to the satellite-determined environmental characteristics of the training set sites. Obviously for this to be successful the training set should have captured the entire range of conditions present throughout the area for which predictions will eventually be made. This is not always the case, and it is then preferable to identify in the output image a separate category of ‘no prediction’ for those areas where the environmental conditions are some specific minimum distance (in multivariate space) away from any of the training set clusters (Rogers and Robinson, 2004, p. 144. Emphasis added.).”

Accessed online on August 26 2024 at: https://www.tandfonline.com/doi/full/10.1080/09505431.2023.2291046#d1e910
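
Since the mechanism matters for the methodological point, here is a minimal sketch of the ‘no prediction’ idea, assuming a Mahalanobis distance in multivariate space and a made-up cutoff. The function name, data and threshold are illustrative, not the procedure Rogers and Robinson report.

```python
import numpy as np

# Minimal sketch (not Rogers and Robinson's actual procedure): label a
# pixel with its nearest training-set cluster only if it lies within a
# set multivariate distance of that cluster; otherwise return -1,
# i.e., 'no prediction'. The cutoff and data below are made up.
def classify_with_no_prediction(pixels, centroids, inv_cov, max_dist):
    """Nearest-cluster index per pixel, or -1 for 'no prediction'."""
    labels = np.empty(len(pixels), dtype=int)
    for i, x in enumerate(pixels):
        diffs = centroids - x                                 # (k, d) offsets
        d2 = np.einsum("kd,de,ke->k", diffs, inv_cov, diffs)  # squared Mahalanobis
        nearest = int(np.argmin(d2))
        labels[i] = -1 if np.sqrt(d2[nearest]) > max_dist else nearest
    return labels

rng = np.random.default_rng(0)
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])  # training-set cluster centers
pixels = rng.normal(0.0, 3.0, size=(10, 2))     # environmental values per pixel
print(classify_with_no_prediction(pixels, centroids, np.eye(2), max_dist=2.0))
```

The point of the reminder survives the toy: the model refuses to extrapolate where the training data never went.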

“So much is uncertain that anything is possible, which means everything is at risk.” Huh?

I

Many people with whom I’ve worked and interacted insist that, when it comes to ecosystems and the environment, more things can go straight-out, hair-raisingly wrong than right.

It is easier to mismanage an ecosystem than it is to manage it. Ecosystem collapse is more certain than ecosystem sustainability. Negative externalities in the environment are to be expected, positive ones not. Probabilities of large system failure and cascades are primed to flip to 1.0 in no time flat.

We must manage the environment’s resources better, but no one should expect technology to help. Economic growth is never a sufficient condition for improving the environment, while economic growth’s impacts on the environment are always sufficient for precaution. So much is uncertain that anything is possible, which means everything is at risk.

Huh?

II

Let’s agree that this expectation of next-is-worse follows from the core of today’s modernity—international capital, fossil fuel, global urbanization, the Enlightenment project—while in the same breath, however, insisting all this is best understood in the very terms of that modernity: Anything and everything is at risk; all risks are potentially scary; indeed, any could be catastrophic.

That said, people with these expectations are like those trying to predict a poet’s next poem from their current body of work. A more productive approach might be to ask: What are we getting from this habituation to next-is-worse?

One answer: Doing so saves us all the trouble and worry of having to figure out the details. Another is the trained incapacity that comes with fatalism. Repeatedly, the doomer literature drives home that we don’t need widespread fear and dread of COLLAPSE to provoke remedy and recovery, because so many no longer believe in either.

How are the rest of us to respond to these expectations? What can we believe and be doing instead?

III

COP26, the 26th United Nations Climate Change Conference, was for many (me included) a failure to do the needful in limiting temperature rise. But the failure was not that “alternative voices were left out and alternative politics side-lined.” For it first has to be asked: Which COP26 failed?

Such a conference is never altogether in one place and time only, if only because those attending were being themselves in one venue while being other selves in other venues there. COP26 is and was riddled with this intermittence, and who’s to say the earlier or later versions around and in between October 31 and November 13, 2021 are not its upside? Next-is-worse is just one venue. What about the other venues, where the networking and horizontal relations were underway?

Here, the opposite of fatalism is intermittence. No state of affairs is one-way only.

More on policy palimpsests (updated)

I

The philosopher, Ludwig Wittgenstein, writes in The Big Typescript:

In a story it says: “After he said that he left her, as he had done the day before.” If I am asked whether I understand this sentence, there’s no easy answer. It’s an English sentence and in that respect I understand it. I would know, for instance, how one could use this sentence. I could come up with a context of my own for it. And yet I don’t understand it in the same way I would understand it if I had read the story up to that point. (Cf language-games.) [7e]

Replace “if I had read the story” with “if I had read the policy palimpsest.” The spaces in between the words, “After he said that he left her, as he had done the day before,” are just as important as, if not more important than, the words read. The spaces signify all that has been left out, effaced or erased from prior texts used to assemble this composite sentence. Not to see what’s missing is a special kind of failure in understanding what you are reading. That you’d understand the sentence differently or better had you read the story up to that point doesn’t change the fact that the preceding text embeds interstices also to be examined and understood.

How so? Immediately after the above quote, Wittgenstein asks us to think of the sentence as if it were a painting:

What does it mean to understand a painted picture? Here too there is understanding and a failure to understand! And here too ‘understanding’ and ‘failure to understand’ can mean different things. –The picture represents an arrangement of objects in space, but I am incapable of seeing a part of the picture three-dimensionally; rather, in that part I see only patches of the picture. . .[M]aybe I know all of the objects, but – in another sense – don’t understand how they’re arranged. [7e]

So too we understand the words in a composite sentence but fail to understand the three-dimensionality of the palimpsest–its weight and heft layered and interweaved beneath–from which the composite has been patched together.

II

In actuality, each composite sentence is a rearrangement of the palimpsest’s elements-with-effacements from different layers and positions into, literally, the straight lines called sentences. These linear, sequential expressions are, in effect, meshes of interrupted time and space tethered in multiple places to the entire policy palimpsest.

The analogy I have in mind is the way the painter Gerhard Richter uses the squeegee in his photo-paintings and more recent chromatic work. He smears the surface photo or layers of paint and produces something new, seen for the first time. So too the analyst seeks to smear the composite sentence or argument with an optic–a new method, metaphor, counternarrative–in order to see not just what’s below the surface but also to surface new ways of seeing all this.

III

No palimpsest is inscribed with the last word; no composite from it is indisputable. Each composite is allographic in the sense of having no one authoritative rendering. If a “readymade” is a mass-produced object elected by an artist for display as a work of art, a policy palimpsest is a “readyunmade,” one that is also mass-produced but constantly scored over by all manner of contingencies.

In case it needs saying, sometimes the scoring is visible and the sutures blatant. During his honorary degree ceremony in 1959 at the Czech Charles University, Haile Selassie was addressed as, well, “Comrade Emperor” (which is right up there with Louis-Philippe as “citizen King” in France’s July 1830 Revolution).


For an introduction to “policy palimpsests,” see my “When Complex is as Simple as it Gets: Guide for recasting policy and management in the Anthropocene” at https://opendocs.ids.ac.uk/opendocs/handle/20.500.12413/18008

Und?

While green industrial policies are helping to increase the availability of such [renewable] fuels and bring down their cost, there are few policies in place to deal explicitly with the much thornier question of winding down the fossil fuel economy: grappling with ongoing, uneven declines and changes in a predominately carbon-based energy system; distributing the burdens of those shifts and of decarbonization equitably; and, perhaps most important of all, ensuring that absolute production declines do actually happen at adequate scale and speed.

https://rooseveltinstitute.org/publications/green-industrial-policys-unfinished-business/

A National Academy of Reliable Infrastructure Management

We need no further reminder than the 2021 Texas grid collapse and shutdown of the Colonial Pipeline for how essential the real-time management of infrastructures is to people’s lives and livelihoods. Yet that management was the missing middle of President Biden’s mega-plan for new infrastructure construction and renovations. Nor is real-time management central to other initiatives like a National Infrastructure Bank, proposed in 2007 and resurfaced in the policy mix through 2020 legislation.

Providing the missing management means banking on expertise the nation already has to operate these large, changing systems. They range from energy grids, urban water supplies and flood protection to telecommunications, vessel transportation and aviation, along with others. If anything, the legislation and initiatives will increase the need for real-time professionals to correct for inevitable shortfalls that jeopardize systemwide reliability and safety.

A National Academy of Reliable Infrastructure Management would remediate the nation’s infrastructure crisis by enhancing and advancing that high reliability management. The management challenges lie beyond the domains of engineering, economics and systems modeling, but they are of equal priority and urgency to those of the National Academies of Sciences, Engineering, and Medicine.

RELIABILITY PROFESSIONALS

High reliability management is understood in real-time infrastructure operations as the continuous and safe provision of what are considered to be critical services, even during (especially during) turbulent and changing times.

The Academy would bring together an under-recognized class of experts in that real-time management from around the world. These reliability professionals include, most important, infrastructure control room operators (often with long experience and variable formal education), along with their managers and their immediate expert support staff (more likely to have higher formal degrees).

The Academy would promote their participation through projects, studies, and other advisory and convening activities. The mission is to examine, assemble and advance evidence-based findings for real-time reliability and safety management of infrastructures under 21st century conditions. In doing so, the Academy would provide the heft needed to facilitate research access to major control centers, entry to which has been restricted for proprietary or security reasons.

Why an Academy for infrastructure management? As demonstrated repeatedly, large critical infrastructures must be managed beyond their technology, formal designs, and published regulations. The Academy challenge is to ensure that the tasks and demands of the rapidly changing infrastructure technologies are matched to the people with the skills and expertise to manage them beyond inevitable glitches and contingencies.

As the Academy gained knowledge, it would foster the management expertise to better navigate the interdependencies and interconnections of critical infrastructure sectors. Doing so requires two tracks. Not only would the Academy attend to ensuring that critical national services, like water, electricity and natural gas, hazardous liquids transmission, and aviation, are provided when most needed, always right now without incident. It also means focusing on ensuring their reliable and safe interconnectivity: Natural gas is used for electricity provision, which supplies the water needed by refineries that process the hazardous liquids, including Jet A-1 fuel for aviation.

VALUE-ADDED FOR INTERCONNECTED CRITICAL INFRASTRUCTURES

The challenge continues to be how to analyze and improve the interconnectivity as it is navigated in real time. No one is responsible for that high reliability management picture. An example illustrates the huge stakes in getting this right. Assume an explosion at a major natural gas reservoir has occurred. Presently, the disaster leads to root-cause analyses, a process of zooming down to determine why and what precipitated the explosion. This is the responsibility of staff in the infrastructure and its regulator of record.

Identifying causes of the explosion is obviously important to prevent further explosions from happening at this and other reservoirs. But knowing causes does not go far enough in making sure that other systems are managed reliably and safely in light of the disaster. Required at the same time is zooming up the system and across systems with which it is interconnected.

What happened to the real-time operations of the natural gas transmission as a whole during and after the explosion? What happened to infrastructures depending on natural gas for their own operations during the explosion and in their next steps ahead? To my knowledge, the regulators of record do not work together to answer the latter question, routinely or as a matter of priority.

Such questions would be of core concern to the new Academy. Was the control center for natural gas transmission able to compensate for loss of the reservoir in real time? Did the control room keep the crisis from spreading to other parts of its transmission and distribution systems, including the variety of end-use customers? How did the control room compensate, where did it stumble, and what other parts of its system were vulnerable or not?

But more than zooming up through the system is required. In the same instant, we must know what happened because of the explosion to the critical infrastructures depending on its natural gas. Some may also have control centers: Were their operators able to maintain their respective system’s reliability and safety in the face of that explosion? Since natural gas is often interoperable with electricity, it is critical to determine if or to what extent the electricity infrastructure was affected by the explosion.

These assessments are also necessary to keep infrastructures interconnected under complex and changing conditions. It’s safe to say that zooming down for a root-cause analysis has been far more common than zooming up and across. But only the latter assessments highlight major vulnerabilities introduced when root-cause analyses are the basis for systemwide recommendations to ensure the disasters don’t happen again.

What is missing in root-cause analyses are the negative impacts, if any, of the recommended changes on high reliability management at the system and inter-system levels. Will the changes, when implemented, undermine the capacity of the infrastructure’s control room to prevent disruptions, such as explosions, that it had prevented in the past from cascading across the natural gas system or beyond?

No regulator of record or national body is tasked to answer that question about cascade potential and those entailed with it. Ensuring that there are answers would be the purview of the new Academy of Reliable Infrastructure Management.

REMODELING INFRASTRUCTURE CASCADES

Infrastructure cascades are understandably of central concern, where failure in one system leads to failure in others. But system engineers and modelers often have a very different view about these than control room operators.

One objective, for example, of network-of-networks modeling of infrastructures has been identifying which nodes and connections, when deleted, bring the network or sets of networks most immediately to collapse. But not failing immediately is what we expect to find in managed systems. In fact, the datasets we have on really existing infrastructure disruptions show that most are managed so as not to cascade over into other infrastructures and that certain infrastructures, most notably in energy, have a greater potential for cascading.
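
For concreteness, here is a minimal sketch of that node-deletion exercise, using a toy interdependency graph of my own invention; the node names and the “collapse” measure (size of the largest surviving component) are illustrative assumptions, not any specific published model.

```python
import networkx as nx

# Toy interdependency graph; names and edges are invented for illustration.
G = nx.Graph()
G.add_edges_from([
    ("gas", "electricity"), ("electricity", "water"),
    ("water", "refinery"), ("refinery", "aviation_fuel"),
    ("electricity", "telecom"), ("telecom", "grid_control"),
])

def fragility_ranking(graph):
    """Rank nodes by how many other nodes their removal strands."""
    baseline = len(max(nx.connected_components(graph), key=len))
    scores = {}
    for node in graph.nodes:
        trimmed = graph.copy()
        trimmed.remove_node(node)
        largest = max((len(c) for c in nx.connected_components(trimmed)), default=0)
        scores[node] = baseline - 1 - largest  # nodes cut off beyond the one removed
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(fragility_ranking(G))  # "electricity" tops the list in this toy graph
```

Note what the sketch leaves out, which is the point here: no term anywhere represents a control room maneuvering to keep the deletion from propagating.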

Modelers defend their focus as one of identifying worst-case scenarios (e.g., in today’s highly charged cyber-security arena). But control room operators and staff live in a real-time world where “what-if” scenarios cannot be the only way to treat probabilities and consequences.

Real-time reliability of their systems as systems must also account for the run of cases and frequencies of past or like events and their precursors. Real-time operators wouldn’t be reliability professionals if they ignored that, in their systems, brownouts at times precede blackouts, some levees are seen to seep long before failing, and the electric grid’s real-time indicators of precursors to disruption or failure typically increase beforehand. Reliability professionals (not least those in major control centers that face thousands and thousands of daily cyber-attacks) have to be skilled both in systemwide pattern recognition and in localized “what-if” scenario formulation.

Their expertise also generates its own real-time indicators of effectiveness. These indicators are rarely if ever recognized by the regulators of record or in system models of interconnectivity. The Academy would be the nation’s advocate for that expertise and those early warning signals.

NEW INDICATORS FOR PREVENTING INFRASTRUCTURE COLLAPSE, NOW

It’s important to establish from the outset that the Academy would be advancing leading (not lagging) indicators of systemwide collapse. Just as important, the indicators already exist for monitoring critical infrastructures operating at, or beyond, their performance edges (a monitoring sketch follows this list), e.g.:

  • The infrastructure’s control room is in prolonged just-for-now performance, which means operators find it more difficult to maneuver out of a corner in which they find themselves. (“Just keep that generator online now!” even though the generator is scheduled for outage maintenance).
  • Real-time control operators are pushed into working increasingly outside their established bandwidths for operations, in effect having to work outside upper and lower bounds of competent performance.
  • Control room operators find that a chokepoint in their infrastructure (a major bottleneck that cannot be worked around) is failing adjacent to the chokepoint of another infrastructure with which it is functionally interconnected.
  • The decision rules operators reliably followed before are now reversed: “Prove we can launch” becomes “Prove we can’t launch” (Challenger Accident); “Ensure a capital cushion to protect against unexpected losses” becomes “From now on, manage for expected losses” (2008 financial crisis).
  • Measurable real-time operational redesigns (workarounds) are no longer effective. Nor can systemwide patterns be recognized or what-if scenarios formulated with the level of granularity achieved in the past.
  • Instead of being driven by wide social dread of having a next major failure ahead, control room professionals are told their track record up to now is the benchmark for reliability.
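
As promised above, a minimal sketch of how such indicators might be tracked together. The field names and the assumed alert rule (any two indicators firing) are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class EdgeIndicators:
    """One boolean per leading indicator from the list above."""
    prolonged_just_for_now: bool        # stuck keeping assets online past maintenance
    outside_bandwidths: bool            # beyond bounds of competent performance
    adjacent_chokepoints_failing: bool  # interconnected chokepoints failing together
    decision_rules_reversed: bool       # e.g., "prove we can't launch"
    workarounds_ineffective: bool       # operational redesigns no longer work
    track_record_as_benchmark: bool     # past success treated as the reliability bar

    def at_performance_edge(self) -> bool:
        fired = sum(vars(self).values())
        return fired >= 2  # assumed threshold: two or more indicators firing

snapshot = EdgeIndicators(True, False, False, True, False, False)
print(snapshot.at_performance_edge())  # True: flag for systemwide review
```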

No one has the institutional niche and wherewithal to direct and sustain the nation’s attention on measuring and monitoring these real-time tipping points and transitions. The Academy would find it an easier task to cut through all the noise, including typical objections about control rooms and their operators, so as to augment, update and prioritize the indicators list.

POSSIBLE OBJECTIONS

In my judgment, the principal objections to an Academy would not be its cost or clout, both of which would be very real. Rather, the real objections originate in complaints from other disciplines in infrastructure development: “Control room operators aren’t really experts, like the engineers and economists with whom they work” and “Control rooms aren’t innovative; in fact, they’re the opposite.” (The latter misconception is addressed in the next section.)

Major cultural differences have plagued engineers and control room operators and, more recently, “Ops (Operations)” and “IT (Information Technology)” staff. One engineer we interviewed called the control room operators “neanderthals.” Economists and engineers assured us: Generally speaking, having to operate in unstudied conditions is a “risk” society must take in order to benefit from major technological advances.

Yet, control room operators continue to press for the specifics—What if this piece of new marketing software fails during the phasing out of those backups?—something we heard again and again, as one “go-live” date had to give way to another in an executive initiative to replace legacy systems in a major state control room. 

There are, of course, exceptions to such behavior. But no one reading should doubt the outsized importance of engineers, economists and system modelers relative to real-time system operators and wraparound support, at the center and in the field, when it comes to major infrastructure change and reform here.

The professional orientation of control operators to prevent systemwide failure is clearly orthogonal to disciplines and professions insisting it’s all but impossible to innovate if you’re not prepared to fail.

Equally telling, calls for new technologies and software to correct for “operator error” are routinely made (1) in the absence of calculations by economists of the everyday savings from disasters averted and (2) in spite of a system-model focus on two states of operation, normal and failed, when it is during the intervening state of temporary service disruption that operators demonstrate their skills and use of indicators in restoring service. These and other differences in professional orientations would be treated far more constructively by a free-standing Academy.

CONTROL ROOMS AS CENTERS OF INNOVATION

Control operators, to the extent they are acknowledged for their expertise, have been disparaged as hidebound with a “don’t fix what’s already working” mentality. The reality is that because things are not working in real time, control operators must innovate so as to maintain system reliability and safety then.

Three domains of control room innovation are core to the Academy’s mission:

  1. Control rooms as unique centers of systemwide innovation and evolution

It is not sufficiently understood by engineers, economists and system modelers that infrastructure control rooms are an historically unique organizational formation. (Here as elsewhere, I thank my research colleague, Paul Schulman, for the insight.) They have evolved over time to take hard systemwide decisions under difficult societal conditions that require a decision, now.

In fact, the evolutionary advantage of control rooms lies in the skills, expertise and team situation awareness of their operators to redesign in real time what prove to be incomplete or otherwise defective technology, design and regulation. More, meeting the high reliability mandate must be done so as not to push the system beyond its limits to operate as a whole. There are no guarantees here, but the expertise is required when “fool-proof” technology and designs are found, too frequently, to be otherwise.

The Academy would treat these specifically organizational and management practices, skills and core competencies with the priority and resources the nation deserves.

  2. Importance of the reliability-matters test for other major technological innovations

It’s indisputable that innovations for infrastructures proposed by outside experts and consultants are required. To ensure viability, they must pass the reliability-matters test. Would the innovation, if implemented, reduce the task volatility that real-time operators face? Would it increase their options to respond to task volatility? Would it increase their maneuverability in responding to different, often unpredictable or uncontrollable, performance conditions?

Among the many control room operators interviewed, I never met one who was against any innovation that increased options, reduced task volatility and/or increased performance maneuverability across changing conditions. I have, however, met economists, engineers and others who dismiss this reliability-matters test, as they also dismiss “only workarounds,” as proof of a control room’s “resistance to change.”

The Academy will not be able to stop the premature introduction of novel software and hardware into systemwide operations, but it can monitor their real-time management impacts and interconnected knock-on effects (as in the natural gas example and indicators list).

  3. Control operators and support staff as innovators in systemwide risk assessment

Talk of “trade-offs” is ubiquitous when discussing new designs and technologies. Control operators and wraparound support see the real-time demands of their high reliability mandate along different lines. 

For them—as for the infrastructure-reliant public—reliability in real time becomes “non-fungible.” That is, high reliability can’t be traded off against cost or efficiency or whatever when the safe and continuous provision of the critical service matters: again, right now, without failure. No number of economists, engineers and system modelers insisting that reliability is “actually” a probability estimate of meeting a standard will change the real-time mandate that systemwide disasters must be prevented from ever happening.

Nuclear reactors must not blow up, urban water supplies must not be contaminated by cryptosporidium or worse, electric grids must not island, jumbo jets must not drop from the sky, irreplaceable dams must not breach or overtop, and autonomous underwater vessels must not hazard the very oil rigs they are repairing. That disasters can or do happen reinforces the dread and commitment of the public and control operators to this precluded-event standard.

The better practices for high reliability management developed and modified across runs of different cases and infrastructures would be the Academy’s principal subject. The Academy’s ambit would be worldwide in this regard, extending well beyond the published best practices of professional societies and industry associations.

Infrastructure mandates for managing and innovating reliably and safely are, in short, not going away. Nor can they, even when systems are necessarily smaller, more decentralized, less interconnected, and more sustainable. Those systems too will be managed as if people’s lives and livelihoods depend on it—because they do.

Principal sources

High Reliability Management and Reliability and Risk (2008 and 2016 respectively from Stanford University Press and co-authored with Paul R. Schulman). A summary can be found in E. Roe and P. Schulman (2018). “A Reliability & Risk Framework for the Assessment and Management of System Risks in Critical Infrastructures with Central Control Rooms,” Safety Science 110 (Part C): 80-88

For a shorter version of this blog, see “A National Academy of Reliable Infrastructure Management.” Issues in Science and Technology (August 3, 2021), accessed online at https://issues.org/national-academy-reliable-infrastructure-management-roe/

What does “anti-capitalist” actually mean these days?

Ending capitalism isn’t just hard to realize; it’s hard to theorize and operationalize. To wit: “Under capitalism” means that even with always-late capitalism, we have. . .

laissez-faire capitalism, monopoly capitalism, oligarchic capitalism, state-guided capitalism, party-state capitalism, corporate capitalism, corporate-consumerist capitalism, bourgeois capitalism, patrimonial capitalism, digital capitalism (and now varieties of digital capitalism), financialized capitalism, political capitalism, social (democratic) capitalism, neoliberal capitalism, crony capitalism, wellness capitalism, petty capitalism, platform capitalism, surveillance capitalism, infrastructural capitalism, welfare capitalism, authoritarian capitalism, imperialistic capitalism, turbo-capitalism, post-IP capitalism, green (also red and brown) capitalism, climate capitalism, extractive capitalism, libidinal capitalism, clickbait capitalism, emotional (affective) capitalism, tech capitalism, American capitalism, British capitalism, European capitalism, Western capitalism, transnational capitalism, global capitalism, agrarian capitalism, philanthrocapitalism, residential capitalism, disaster capitalism, rentier capitalism, industrial capitalism, post-industrial capitalism, fossil capitalism, petro-capitalism, settler-colonial capitalism, supply chain capitalism, cognitive capitalism, asset manager capitalism, information (also data) capitalism, cyber-capitalism, racial capitalism, necro-capitalism, bio-capitalism, penny capitalism, war capitalism, crisis capitalism, managerial capitalism, stakeholder capitalism, techno(scientific)-capitalism, pandemic capitalism, caring capitalism, zombie capitalism. . .

Oh hell, let’s stop there. In a deep irony, much of this looks like classic product differentiation in competitive markets. In this case: by careerists seeking to (re)brand their lines of inquiry for a competitive advantage in professions that act more and more like markets anyway.

Now, of course, it’s methodologically positive to be able to differentiate types and varieties of capitalism, so as to identify patterns and practices (if any) across the diversity of cases. But how is the latter identification to be achieved with respect to a list, namely the above, without number?

That is, some of the currently listed terms do seek to denote specific contexts and levels of granularity and commonalities across cases. But, as others do not, what then does being anti-“capitalist” actually mean?

Epilogue

In practice, actual systems we will want to call capitalist have been fantastically variable over space and time with respect to all of these, including great variations in the degree of legal formality. The variations produce such great differences in political/economic outcomes that calling them all “capitalist” leaves the term without much more than the minimal (though important) abstract meaning we started with. The second problem is that the abstract regime idea contains no theory of limiting factors. It is obvious that in practice the profit motive and marketization are major elements, but only elements, not the whole, of a larger socio-economic complex that includes a public sphere, the family and a pre-capitalist and an anti-capitalist sector, not to speak of the natural environment. The other elements interact with and interpenetrate the “capitalist” part in all sorts of countervailing and reinforcing (dangerously supplemental) ways.

Duncan Kennedy, critical legal studies scholar, 2024 (accessed online at https://duncankennedy.net/wp-content/uploads/2024/06/the-reception-of-jacques-derrida-in-american-critical-legal-studies.pdf)

The interference of advanced telecommunications with major weather and climate forecasting: This is *not* a trade-off!

The wireless industry quickly agreed to 5G standards and started building out infrastructure, requiring more spectrum real estate to support the growing demand. This led to a campaign by the Federal Communications Commission (FCC) to. . .sell bands of frequencies between 24–25 GHz to companies invested in 5G technology.

Terrestrial radio systems emitting 5G signals into this defined spectrum range, so close to the bands allocated for weather sensing (23.8–24 GHz), are a formidable threat to weather forecast and warning services. This is due to the much louder nature of 5G waves compared to those emitted by the atmosphere, and the relatively quiet movements of water vapor molecules that satellites observe. According to federal agencies and meteorologists worldwide, if the 5G signal remains contained between 24–25 GHz, it can coexist alongside existing meteorological operations. However, if it becomes louder, it will bleed over into the weather sensing space, drowning out any noise—invaluable for climate models—emitted by water particles. . . .

A 5G station transmitting at nearly the same frequency as water vapor can be mistaken for actual moisture, leading to confusion and the misinterpretation of weather patterns. This interference is particularly concerning in high-band 5G frequencies, where signals closely overlap with those used for water vapor detection. High-band 5G operates at much higher frequencies, typically in the millimeter-wave range (24 GHz to 40 GHz), allowing for faster data speeds but also presenting challenges in signal propagation. These high frequencies are absorbed more readily by atmospheric gases, including water vapor, resulting in significant signal attenuation. Consequently, the potential for interference with weather sensing is heightened in high-band 5G due to the proximity of its frequencies to those critical for water vapor detection.

https://www.e-flux.com/architecture/spatial-computing/604501/unpredictable-atmosphere/

Of course, economists will insist this represents a trade-off. It is no such thing. At least, not as long as society mandates high reliability both in advanced telecommunications and in advanced forecasting of weather and climate.
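
The arithmetic behind “so close to the bands” is stark. A quick check using the band edges quoted above; the “guard band” framing is mine, not the excerpt’s.

```python
# Band edges are taken from the excerpt; everything else is illustrative.
sensing = (23.8, 24.0)   # GHz, passive water-vapor sensing band
five_g = (24.0, 25.0)    # GHz, auctioned 5G block
guard_mhz = (five_g[0] - sensing[1]) * 1000
print(f"Guard band between sensing and 5G: {guard_mhz:.0f} MHz")  # 0 MHz
# Zero separation means any out-of-band 5G emission lands directly on the
# frequencies satellites use to listen for water vapor.
```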

Three major methodological reasons for recasting emergency management with respect to risk, uncertainty and error

Reason 1. Errors need to be distinguished from an emergency’s risks and uncertainties

Reason 2. Averted losses from disasters avoided are the huge missing middle in emergency calculations

Reason 3. It is the professional’s duty of care that questions dominant methods for emergency risk management


1. Errors need to be distinguished from an emergency’s risks and uncertainties.

That is to ask: Have known errors in emergency response and initial service restoration been corrected before the next emergency?

It seems odd to talk about known errors when uncertainties and risks are massive and widespread in terrorist attacks, earthquakes, river flooding, forest wildfires, and grid failures in electricity and water.

But there can be and often are an urgency, clarity and logic about what to do by way of just-in-time or just-for-now emergency response. What needs to be done is evident to front-line infrastructure staff and emergency management professionals in ways it is not for those in incident command centers or higher-level management or official positions. For experienced front-line staff, not doing what needs to be done in these circumstances constitutes an error to be avoided in real time. Such errors are avoidable because they can be corrected beforehand.

In particular, research with Paul Schulman on interconnected critical infrastructures found:

–Under conditions of shifting or shifted interconnectivity, it would be an error for infrastructure operators and emergency managers not to establish lateral communications with one another and undertake improvisational and shared restoration activities where needed, even if no official arrangement exists to do so.

–In related fashion, it would be a management error in anticipation and planning not to provide robust and contingent interinfrastructure communication capabilities, including communication connections between the control rooms of interconnected infrastructures. This communication, it has been demonstrated, is also greatly facilitated by establishing lateral interinfrastructure personnel contacts prior to emergencies.

–Further, it would be an error not to have some contingent resources for restoration and recovery activities such as vehicles, portable generators and movable cell towers in differing locations available across infrastructures if needed, particularly where chokepoints of interconnected infrastructures are adjacent to each other.

While these known errors are not the entire set, our interviews and prior research convince us that they are primary because they seriously degrade effective resilience in emergency prevention and responses. Here, errors are not to be managed, more or less like risks, but rather managed categorically as: Yes or no, have they been avoided?
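
A minimal sketch of that categorical framing, as distinct from graded risk scores. The checklist items paraphrase the three known errors above; the data structure and its toy yes/no values are illustrative assumptions.

```python
# Yes/no error-avoidance checks, paraphrasing the three known errors above.
KNOWN_ERROR_CHECKS = {
    "lateral communications established across interconnected control rooms": True,
    "robust, contingent inter-infrastructure communication capabilities in place": False,
    "contingent restoration resources staged near adjacent chokepoints": True,
}

uncorrected = [item for item, avoided in KNOWN_ERROR_CHECKS.items() if not avoided]
print("All known errors avoided?", not uncorrected)
for item in uncorrected:
    print("Correct before the next emergency:", item)
```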

A number of policy and management implications follow. One deserves underscoring here: It may well be that some activities presently funded under state and federal “emergency risk management” aren’t as important as having dedicated support and staffing for such error correction, now and ahead. It is long past time to review the risk biases in conventional emergency management.


2. Averted losses from disasters avoided are the huge missing middle in emergency calculations.

I

Last year, I attended a conference on sea-level rise, storm surges and flooding in the greater San Francisco Bay Area, now and projected into the near decades. Among other things, I was told that:

**The Bay Area would need some 477 million cubic yards of sediment–the vast majority of which can’t be sourced locally–so as to restore area wetlands and mudflats;

**Also required would be an estimated US$110 billion to locally adapt to higher sea levels by 2050, this being based on existing plans in place or used as placeholders for entities that have yet to plan; and

**We should expect much more sea-level rise locally because of the newly accelerated melting of the ice caps in Antarctica and Greenland.

Millions of cubic yards equivalent to over 420 Salesforce Tower high-rises? Some $110 billion which has no possibility whatsoever of being funded, locally let alone regionally? And those massive local requirements posed by the melting ice caps? How are these unprecedented high climate-related losses to be compensated for?

It’s not surprising that the individual interventions presented that day, and all the hard work they already required, paled into insignificance against the funding and work demands posed by the bulleted challenges.

What to do? How to respond?

II

You respond first and foremost by critically rethinking the direct or underlying estimates of losses (economic, physical, lives, and more) incurred if we don’t take action now. It’s been my experience that none of these estimated losses takes into account the losses already prevented from occurring by infrastructure operators and emergency managers who avert systemwide and regional failures that would have happened had they not intervened beforehand, sometimes at the last moment.

Why are these uncalculated billions and billions of saved dollars important when it comes to responding to sea level rise, increased storm surges, more inland flooding, rising groundwater levels and other sequelae?

Because it is from this pool of real-time talent, skills and practices that society will be drawing for operationally redesigning around the inevitable shortfalls in new technologies, macro-plans and regulations for climate restoration and recovery.


3. It is the professional’s duty of care that questions dominant methods for emergency risk management.

I

We researchers estimated that the annual probability of a major stretch of an island’s levees failing ranged from 4% to 24% due to a slope failure. (Slope instability in this scenario would be caused by flooding behind the levee as well as high water levels on its water side.)

Our estimates were considerably higher than the official one, in large part because the research project relied on methodologies validated against benchmark studies.

We presented the findings to the island’s management board. Their first and really only question was whether our estimates would be revealed to the island’s insurers.

II

We undertook a hotwash afterwards to figure out their–how to put it?–underwhelming response:

Didn’t they understand that the upper range, 24% per annum, implied a levee breach was nigh inevitable with respect to our failure scenario? Or to put the question to our side, in what ways did the 24% per annum estimate fall short of being a failure probability of 1.0?

But if as high as 24% per annum, why hadn’t there been a levee breach over the many decades since the last major one on the island?
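
A back-of-the-envelope check of that question, assuming, purely for illustration, independent failure chances across years:

```python
p_annual = 0.24  # upper-range annual failure probability from the study
for years in (10, 30, 50):
    p_no_breach = (1 - p_annual) ** years
    print(f"{years} yrs: P(no breach) = {p_no_breach:.1e}, "
          f"P(at least one) = {1 - p_no_breach:.4f}")
# At 30 years, P(no breach) is roughly 3 in 10,000 - which is why decades
# without a breach sit so uneasily beside a 24% per-annum estimate.
```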

And what about the islands nearby? Assuming even a few of these had a similar upper range, why weren’t levee failures happening more often?

The 4% – 24% range was with respect to annual levee failure due to slope instability only. If you add in all the levee failure modes possible (e.g., due to seepage rather than overtopping and flooding), the combined probability of levee failure would have to be higher. (But then again, what are the conditions under which the more ways there are to fail, the more likely failure is?)
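
And a sketch of the combined-modes arithmetic, with made-up mode probabilities and the assumption, exactly the condition the parenthetical asks about, that the modes fail independently:

```python
# Mode probabilities are placeholders, not project estimates; independence
# across modes is assumed only for illustration.
modes = {"slope instability": 0.24, "seepage": 0.05, "overtopping": 0.03}

p_none = 1.0
for p in modes.values():
    p_none *= (1 - p)  # survive each mode in turn
p_any = 1 - p_none
print(f"P(failure by any mode) = {p_any:.3f}")  # ~0.30, above any single mode
```

Under independence, adding failure modes can only raise the combined probability; strong positive dependence among modes is what would keep it from climbing.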

You could say one reason why levee failure there hadn’t happened–yet–was because it hadn’t been long enough. That is: not yet a long enough period over which to observe levee breaches so as to form the distribution from which the 24% could be established or corrected empirically. But, methodologically, the burden of proof was on us, the team of levee experts, to explain why the decades and decades of levee use weren’t “long enough” or what that long-enough might actually look like.

Also, the levee stretch in question could be “failing to fail.” It might be that this stretch had not undergone events that loaded it to capacity or worse. (But then again: How much worse would the conditions have to be in our expert view? Just what is “a probability of failing to fail”?)

To put all this differently, was this levee stretch on that island more diverse and more resilient (say, in the way biodiverse ecosystems are said to be more resilient) than current methods capture but which islanders better understood and perhaps even managed?

III

But the most significant point from the hotwash was the one none of us saw the need to voice: How could we accuse the management board and islanders of being short-sighted, when there was so much else going on that challenged us, the team, to make sense of our own estimates for the purposes of island emergency preparedness and management?

After all, we’d be the first to insist that these island levees are themselves a key infrastructure protecting other infrastructures, including river-water supplies, island agriculture and adjacent wetlands. It is our duty of care to follow up on the errors and induced risks associated with current ways of thinking about risk management for and during crises.