And?

While green industrial policies are helping to increase the availability of such [renewable] fuels and bring down their cost, there are few policies in place to deal explicitly with the much thornier question of winding down the fossil fuel economy: grappling with ongoing, uneven declines and changes in a predominately carbon-based energy system; distributing the burdens of those shifts and of decarbonization equitably; and, perhaps most important of all, ensuring that absolute production declines do actually happen at adequate scale and speed.

https://rooseveltinstitute.org/publications/green-industrial-policys-unfinished-business/

A National Academy of Reliable Infrastructure Management

We need no better reminder than the 2021 Texas grid collapse and the shutdown of the Colonial Pipeline of how essential the real-time management of infrastructures is to people’s lives and livelihoods. Yet that management was the missing middle of President Biden’s mega-plan for new infrastructure construction and renovations. Nor is real-time management central to other initiatives, like a National Infrastructure Bank, first proposed in 2007 and resurfaced in the policy mix through 2020 legislation.

Providing the missing management means banking on expertise the nation already has to operate these large, changing systems. The systems range from energy grids, urban water supplies and flood protection to telecommunications, vessel transportation and aviation, among others. If anything, the legislation and initiatives will increase the need for real-time professionals to correct for inevitable shortfalls that jeopardize systemwide reliability and safety.

A National Academy of Reliable Infrastructure Management would remediate the nation’s infrastructure crisis by enhancing and advancing that high reliability management. The management challenges extend beyond the domains of engineering, economics and systems modeling, but they are of equal priority and urgency to those addressed by the National Academies of Sciences, Engineering, and Medicine.

RELIABILITY PROFESSIONALS

High reliability management is understood in real-time infrastructure operations as the continuous and safe provision of what are considered to be critical services, even during (especially during) turbulent and changing times.

The Academy would bring together an under-recognized class of experts in that real-time management from around the world. These reliability professionals include, most important, infrastructure control room operators (often with long experience and variable formal education), along with their managers and their immediate expert support staff (more likely to have higher formal degrees).

The Academy would promote their participation through projects, studies, and other advisory and convening activities. The mission is to examine, assemble and advance evidence-based findings for real-time reliability and safety management of infrastructures under 21st century conditions. In doing so, the Academy would provide the heft in facilitating research access to major control centers to which entry has been restricted for proprietary or security reasons.

Why an Academy for infrastructure management? As demonstrated repeatedly, large critical infrastructures must be managed beyond their technology, formal designs, and published regulations. The Academy challenge is to ensure that the tasks and demands of the rapidly changing infrastructure technologies are matched to the people with the skills and expertise to manage them beyond inevitable glitches and contingencies.

As the Academy gains knowledge, it would foster the management expertise to better navigate the interdependencies and interconnections of critical infrastructure sectors. Doing so requires two tracks. The Academy’s attention is not only on ensuring that critical national services, like water, electricity and natural gas, hazardous liquids transmission, and aviation, are provided when most needed: always, right now, without incident. It also means focusing on ensuring their reliable and safe interconnectivity: natural gas is used for electricity provision, which supplies the water needed by refineries that process the hazardous liquids, including Jet A-1 fuel for aviation.

VALUE-ADDED FOR INTERCONNECTED CRITICAL INFRASTRUCTURES

The challenge continues to be how to analyze and improve the interconnectivity as it is navigated in real time. No one is responsible for that high reliability management picture. An example illustrates the huge stakes in getting this right. Assume an explosion at a major natural gas reservoir has occurred. Presently, the disaster leads to root-cause analyses, a process of zooming down to determine why and what precipitated the explosion. This is the responsibility of staff in the infrastructure and its regulator of record.

Identifying causes of the explosion is obviously important to prevent further explosions from happening at this and other reservoirs. But knowing causes does not go far enough in making sure that other systems are managed reliably and safely in light of the disaster. Required at the same time is zooming up the system and across systems with which it is interconnected.

What happened to the real-time operations of the natural gas transmission as a whole during and after the explosion? What happened to infrastructures depending on natural gas for their own operations during the explosion and in their next steps ahead? To my knowledge, the regulators of record do not work together to answer the latter question, routinely or as a matter of priority.

Such questions would be of core concern to the new Academy. Was the control center for natural gas transmission able to compensate for loss of the reservoir in real time? Did the control room keep the crisis from spreading to other parts of its transmission and distribution systems, including the variety of end-use customers? How did the control room compensate, where did it stumble, and what other parts of its system were vulnerable or not?

But more than zooming up through the system is required. In the same instant, we must know what happened, because of the explosion, to the critical infrastructures depending on its natural gas. Some may also have control centers: Were their operators able to maintain their respective system’s reliability and safety in the face of that explosion? Since natural gas is often interoperable with electricity, it is critical to determine if or to what extent the electricity infrastructure was affected by the explosion.

These assessments are also necessary to keep infrastructures interconnected under complex and changing conditions. It’s safe to say that zooming down for a root-cause analysis has been far more common than zooming up and across. But only the latter assessments highlight the major vulnerabilities introduced when root-cause analyses alone are the basis for systemwide recommendations to ensure the disasters don’t happen again.

What is missing in root-cause analyses are the negative impacts, if any, of the recommended changes on high reliability management at the system and inter-system levels. Will the changes, when implemented, undermine the capacity of the infrastructure’s control room to prevent disruptions, such as explosions, that it had prevented in the past from cascading across the natural gas system or beyond?

No regulator of record or national body is tasked to answer that question about cascade potential and the questions entailed by it. Ensuring that there are answers would be the purview of the new Academy of Reliable Infrastructure Management.

REMODELING INFRASTRUCTURE CASCADES

Infrastructure cascades, where failure in one system leads to failure in others, are understandably of central concern. But system engineers and modelers often have a very different view of these cascades than do control room operators.

One objective, for example, of network-of-networks modeling of infrastructures has been identifying which nodes and connections, when deleted, bring the network or sets of networks most immediately to collapse. But not failing immediately is what we expect to find in managed systems. In fact, the datasets we have on actually existing infrastructure disruptions show that most are managed so as not to cascade over into other infrastructures, and that certain infrastructures, most notably in energy, have a greater potential for cascading.
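That node-deletion exercise can be sketched in a few lines. The network below is a toy interdependency graph: the node names and links are illustrative assumptions, not drawn from any real system. The point is only to make concrete what “which deletions fragment the network fastest” computes, and what it leaves out, namely any operator response.

```python
# Toy node-deletion analysis on an illustrative infrastructure graph.
# Measures how much of the network stays connected when one node is removed.
from collections import deque

# Hypothetical interdependencies (undirected, illustrative only)
edges = [
    ("gas", "electricity"), ("electricity", "water"),
    ("water", "refinery"), ("refinery", "aviation"),
    ("electricity", "telecom"), ("telecom", "aviation"),
]

def largest_component(nodes, edges):
    """Size of the largest connected component, found via BFS."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            n = queue.popleft()
            size += 1
            for m in adj[n] - seen:   # unvisited neighbors
                seen.add(m)
                queue.append(m)
        best = max(best, size)
    return best

all_nodes = {n for e in edges for n in e}
for removed in sorted(all_nodes):
    rest = all_nodes - {removed}
    kept = [(a, b) for a, b in edges if removed not in (a, b)]
    print(removed, largest_component(rest, kept))
```

In this toy graph, deleting “electricity” fragments the network the most, which is what worst-case modeling flags. What the print-out cannot show is whether a control room would have re-routed, shed load, or otherwise kept the pieces connected: exactly the managed behavior the disruption datasets record.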

Modelers defend their focus as one of identifying worst-case scenarios (e.g., in today’s highly charged cyber-security arena). But control room operators and staff live in a real-time world where “what-if” scenarios cannot be the only way to treat probabilities and consequences.

Real-time reliability of their systems as systems must also account for the run of cases and the frequencies of past or like events and their precursors. Real-time operators wouldn’t be reliability professionals if they ignored that, in their systems, brownouts at times precede blackouts, some levees are seen to seep long before failing, and the electric grid’s real-time indicators of precursors to disruption or failure typically increase beforehand. Reliability professionals (not least those in major control centers that face thousands and thousands of daily cyber-attacks) have to be skilled both in systemwide pattern recognition and in localized “what-if” scenario formulation.

Their expertise also generates its own real-time indicators of effectiveness. These indicators are rarely, if ever, recognized by the regulators of record or in system models of interconnectivity. The Academy would be the nation’s advocate for that expertise and for those early warning signals.

NEW INDICATORS FOR PREVENTING INFRASTRUCTURE COLLAPSE, NOW

It’s important to establish from the outset that the Academy would be advancing leading (not lagging) indicators of systemwide collapse. Just as important, the indicators already exist for monitoring critical infrastructures operating at, or beyond, their performance edges, e.g.:

  • The infrastructure’s control room is in prolonged just-for-now performance, which means operators find it more difficult to maneuver out of a corner in which they find themselves. (“Just keep that generator online now!” even though the generator is scheduled for outage maintenance).
  • Real-time control operators are pushed into working increasingly outside their established bandwidths for operations, in effect having to work outside upper and lower bounds of competent performance.
  • Control room operators find that a chokepoint in their infrastructure (a major bottleneck that cannot be worked around) is failing adjacent to the chokepoint of another infrastructure with which it is functionally interconnected.
  • The decision rules operators reliably followed before are now reversed: “Prove we can launch” becomes “Prove we can’t launch” (Challenger Accident); “Ensure a capital cushion to protect against unexpected losses” becomes “From now on, manage for expected losses” (2008 financial crisis).
  • Measurable real-time operational redesigns (workarounds) are no longer effective. Nor can systemwide patterns be recognized or what-if scenarios formulated with the same level of granularity as in the past.
  • Instead of being driven by wide social dread of having a next major failure ahead, control room professionals are told their track record up to now is the benchmark for reliability.

No one has the institutional niche and wherewithal to direct and sustain the nation’s attention on measuring and monitoring these real-time tipping points and transitions. The Academy would find it an easier task to cut through all the noise, including typical objections about control rooms and their operators, so as to augment, update and prioritize the indicators list.

POSSIBLE OBJECTIONS

In my judgment, the principal objections to an Academy would not be its cost or clout, both of which would be very real. Rather, the real objections originate in complaints from other disciplines in infrastructure development: “Control room operators aren’t really experts, like the engineers and economists with whom they work” and “Control rooms aren’t innovative; in fact, they’re the opposite.” (The latter misconception is addressed in the next section.)

Major cultural differences have long divided engineers and control room operators and, more recently, “Ops (Operations)” and “IT (Information Technology)” staff. One engineer we interviewed called control room operators “neanderthals.” Economists and engineers assured us that, generally speaking, having to operate in unstudied conditions is a “risk” society must take in order to benefit from major technological advances.

Yet, control room operators continue to press for the specifics—What if this piece of new marketing software fails during the phasing out of those backups?—something we heard again and again, as one “go-live” date had to give way to another in an executive initiative to replace legacy systems in a major state control room. 

There are, of course, exceptions to such behavior. But no one reading should doubt the outsized importance of engineers, economists and system modelers relative to real-time system operators and wraparound support, at the center and in the field, when it comes to major infrastructure change and reform here.

The professional orientation of control operators to prevent systemwide failure is clearly orthogonal to disciplines and professions insisting it’s all but impossible to innovate if you’re not prepared to fail.

Equally telling, calls for new technologies and software to correct for “operator error” are routinely made (1) in the absence of calculations by economists of the everyday savings from disasters averted and (2) in spite of a system-model focus on two states of operation, normal and failed, when it is during the intervening state of temporary service disruption that operators demonstrate their skills and use of indicators in restoring service. These and other differences in professional orientations would be treated far more constructively by a free-standing Academy.

CONTROL ROOMS AS CENTERS OF INNOVATION

Control operators, to the extent they are acknowledged for their expertise, have been disparaged as hidebound with a “don’t fix what’s already working” mentality. The reality is that, precisely because things are not working in real time, control operators must innovate then and there to maintain system reliability and safety.

Three domains of control room innovation are core to the Academy’s mission:

1. Control rooms as unique centers of systemwide innovation and evolution

It is not sufficiently understood by engineers, economists and system modelers that infrastructure control rooms are an historically unique organizational formation. (Here as elsewhere, I thank my research colleague, Paul Schulman, for the insight.) They have evolved over time to take hard systemwide decisions under difficult societal conditions that require a decision, now.

In fact, the evolutionary advantage of control rooms lies in the skills, expertise and team situation awareness of their operators to redesign in real time what prove to be incomplete or otherwise defective technology, designs and regulations. Moreover, meeting the high reliability mandate must be done so as not to push the system beyond its limits to operate as a whole. There are no guarantees here, but the expertise is required when “fool-proof” technology and designs are found, too frequently, to be otherwise.

The Academy would treat these specifically organizational and management practices, skills and core competencies with the priority and resources the nation deserves.

2. Importance of the reliability-matters test for other major technological innovations

It’s indisputable that innovations for infrastructures proposed by outside experts and consultants are required. To ensure viability, they must pass the reliability-matters test. Would the innovation, if implemented, reduce the task volatility that real-time operators face? Does it increase their options to respond to task volatility? Does it increase their maneuverability in responding to different, often unpredictable or uncontrollable, performance conditions?

Among the many control room operators interviewed, I never met one who was against an innovation that increased options, reduced task volatility and/or increased performance maneuverability across changing conditions. I have, however, met economists, engineers and others who dismiss this reliability-matters test, as they also dismiss “only workarounds,” as proof of a control room’s “resistance to change.”

The Academy will not be able to stop the premature introduction of novel software and hardware into systemwide operations, but it can monitor their real-time management impacts and interconnected knock-on effects (as in the natural gas example and indicators list).

3. Control operators and support staff as innovators in systemwide risk assessment

Talk of “trade-offs” is ubiquitous when discussing new designs and technologies. Control operators and wraparound support see the real-time demands of their high reliability mandate along different lines. 

For them—as for the infrastructure-reliant public—reliability in real time becomes “non-fungible.” That is, high reliability can’t be traded off against cost or efficiency or whatever when the safe and continuous provision of the critical service matters: again, right now, without failure. No number of economists, engineers and system modelers insisting that reliability is “actually” a probability estimate of meeting a standard will change the real-time mandate that systemwide disasters must be prevented from ever happening.

Nuclear reactors must not blow up, urban water supplies must not be contaminated by cryptosporidium or worse, electric grids must not island, jumbo jets must not drop from the sky, irreplaceable dams must not breach or overtop, and autonomous underwater vessels must not hazard the very oil rigs they are repairing. That disasters can or do happen reinforces the dread and commitment of the public and control operators to this precluded-event standard.

The better practices for high reliability management, developed and modified across runs of different cases and infrastructures, would be the Academy’s principal subject. The Academy’s ambit would be worldwide in this regard and well beyond the published best practices of professional societies and industry associations.

Infrastructure mandates for managing and innovating reliably and safely are, in short, not going away. Nor can they, even when systems are necessarily smaller, more decentralized, less interconnected, and more sustainable. Those systems too will be managed as if people’s lives and livelihoods depend on it—because they do.

Principal sources

High Reliability Management and Reliability and Risk (2008 and 2016 respectively from Stanford University Press and co-authored with Paul R. Schulman). A summary can be found in E. Roe and P. Schulman (2018). “A Reliability & Risk Framework for the Assessment and Management of System Risks in Critical Infrastructures with Central Control Rooms,” Safety Science 110 (Part C): 80-88

For a shorter version of this blog, see “A National Academy of Reliable Infrastructure Management.” Issues in Science and Technology (August 3, 2021), accessed online at https://issues.org/national-academy-reliable-infrastructure-management-roe/

What does “anti-capitalist” actually mean these days?

Ending capitalism isn’t just hard to realize; it’s hard to theorize and operationalize. To wit: “Under capitalism” means that even with always-late capitalism, we have. . .

laissez-faire capitalism, monopoly capitalism, oligarchic capitalism, state-guided capitalism, party-state capitalism, corporate capitalism, corporate-consumerist capitalism, bourgeois capitalism, patrimonial capitalism, digital capitalism (and now varieties of digital capitalism), financialized capitalism, political capitalism, social (democratic) capitalism, neoliberal capitalism, crony capitalism, wellness capitalism, petty capitalism, platform capitalism, surveillance capitalism, infrastructural capitalism, welfare capitalism, authoritarian capitalism, imperialistic capitalism, turbo-capitalism, post-IP capitalism, green (also red and brown) capitalism, climate capitalism, extractive capitalism, libidinal capitalism, clickbait capitalism, emotional (affective) capitalism, tech capitalism, American capitalism, British capitalism, European capitalism, Western capitalism, transnational capitalism, global capitalism, agrarian capitalism, philanthrocapitalism, residential capitalism, disaster capitalism, rentier capitalism, industrial capitalism, post-industrial capitalism, fossil capitalism, petro-capitalism, settler-colonial capitalism, supply chain capitalism, cognitive capitalism, asset manager capitalism, information (also data) capitalism, cyber-capitalism, racial capitalism, necro-capitalism, bio-capitalism, penny capitalism, war capitalism, crisis capitalism, managerial capitalism, stakeholder capitalism, techno(scientific)-capitalism, pandemic capitalism, caring capitalism, zombie capitalism. . .

Oh hell, let’s stop there. In a deep irony, much of this looks like classic product differentiation in competitive markets. In this case: by careerists seeking to (re)brand their lines of inquiry for a competitive advantage in professions that act more and more like markets anyway.

Now, of course, it’s methodologically positive to be able to differentiate types and varieties of capitalism, so as to identify patterns and practices (if any) across the diversity of cases. But how is the latter identification to be achieved with respect to a list, namely the above, without number?

That is, some of the currently listed terms do seek to denote specific contexts and levels of granularity and commonalities across cases. But, as others do not, what then does being anti-“capitalist” actually mean?

Epilogue

In practice, actual systems we will want to call capitalist have been fantastically variable over space and time with respect to all of these, including great variations in the degree of legal formality. The variations produce such great differences in political/economic outcomes that calling them all “capitalist” leaves the term without much more than the minimal (though important) abstract meaning we started with. The second problem is that the abstract regime idea contains no theory of limiting factors. It is obvious that in practice the profit motive and marketization are major elements, but only elements, not the whole, of a larger socio-economic complex that includes a public sphere, the family and a pre-capitalist and an anti-capitalist sector, not to speak of the natural environment. The other elements interact with and interpenetrate the “capitalist” part in all sorts of countervailing and reinforcing (dangerously supplemental) ways.

Duncan Kennedy, critical legal studies scholar, 2024 (accessed online at https://duncankennedy.net/wp-content/uploads/2024/06/the-reception-of-jacques-derrida-in-american-critical-legal-studies.pdf)

The interference of advanced telecommunications with major weather and climate forecasting: This is *not* a trade-off!

The wireless industry quickly agreed to 5G standards and started building out infrastructure, requiring more spectrum real estate to support the growing demand. This led to a campaign by the Federal Communications Commission (FCC) to. . .sell bands of frequencies between 24–25 GHz to companies invested in 5G technology.

Terrestrial radio systems emitting 5G signals into this defined spectrum range, so close to the bands allocated for weather sensing (23.8–24 GHz), are a formidable threat to weather forecast and warning services. This is due to the much louder nature of 5G waves compared to those emitted by the atmosphere, and the relatively quiet movements of water vapor molecules that satellites observe. According to federal agencies and meteorologists worldwide, if the 5G signal remains contained between 24–25 GHz, it can coexist alongside existing meteorological operations. However, if it becomes louder, it will bleed over into the weather sensing space, drowning out any noise—invaluable for climate models—emitted by water particles. . . .

A 5G station transmitting at nearly the same frequency as water vapor can be mistaken for actual moisture, leading to confusion and the misinterpretation of weather patterns. This interference is particularly concerning in high-band 5G frequencies, where signals closely overlap with those used for water vapor detection. High-band 5G operates at much higher frequencies, typically in the millimeter-wave range (24 GHz to 40 GHz), allowing for faster data speeds but also presenting challenges in signal propagation. These high frequencies are absorbed more readily by atmospheric gases, including water vapor, resulting in significant signal attenuation. Consequently, the potential for interference with weather sensing is heightened in high-band 5G due to the proximity of its frequencies to those critical for water vapor detection.

https://www.e-flux.com/architecture/spatial-computing/604501/unpredictable-atmosphere/

Of course, economists will insist this represents a trade-off. It is no such thing. At least, not as long as society mandates high reliability both in advanced telecommunications and in advanced forecasting of weather and climate.

Three major methodological reasons for recasting emergency management with respect to risk, uncertainty and error

Reason 1. Errors need to be distinguished from an emergency’s risks and uncertainties

Reason 2. Averted losses from disasters avoided are the huge missing middle in emergency calculations

Reason 3. It is the professional’s duty of care that questions dominant methods for emergency risk management


1. Errors need to be distinguished from an emergency’s risks and uncertainties.

That is to ask: Have known errors in emergency response and initial service restoration been corrected before the next emergency?

It seems odd to talk about known errors when uncertainties and risks are massive and widespread in terrorist attacks, earthquakes, river flooding, forest wildfires, and grid failures in electricity and water.

But there can be, and often are, an urgency, clarity and logic about what to do by way of just-in-time or just-for-now emergency response. What needs to be done is evident to front-line infrastructure staff and emergency management professionals in ways it is not for those in incident command centers or in higher-level management or official positions. For experienced front-line staff, not doing what needs to be done in these circumstances constitutes an error to be avoided in real time. Such errors are avoidable because they can be corrected beforehand.

In particular, research with Paul Schulman on interconnected critical infrastructures found:

–Under conditions of shifting or shifted interconnectivity, it would be an error for infrastructure operators and emergency managers not to establish lateral communications with one another and undertake improvisational and shared restoration activities where needed, even if no official arrangement exists to do so.

–In related fashion, it would be a management error in anticipation and planning not to provide robust and contingent interinfrastructure communication capabilities, including communication connections between the control rooms of interconnected infrastructures. This communication, it has been demonstrated, is also greatly facilitated by establishing lateral interinfrastructure personnel contacts prior to emergencies.

–Further, it would be an error not to have some contingent resources for restoration and recovery activities such as vehicles, portable generators and movable cell towers in differing locations available across infrastructures if needed, particularly where chokepoints of interconnected infrastructures are adjacent to each other.

While these known errors are not the entire set, our interviews and prior research convince us that they are primary because they seriously degrade effective resilience in emergency prevention and response. Here, errors are not to be managed more or less like risks, but rather categorically: Yes or no, have they been avoided?

A number of policy and management implications follow. One deserves underscoring here: It may well be that some activities presently funded under state and federal “emergency risk management” aren’t as important as having dedicated support and staffing for such error correction, now and ahead. It is long past time to review the risk biases in conventional emergency management.


2. Averted losses from disasters avoided are the huge missing middle in emergency calculations.

I

Last year, I attended a conference on sea-level rise, storm surges and flooding in the greater San Francisco Bay Area, now and projected into the near decades. Among other things, I was told that:

  • The Bay Area would need some 477 million cubic yards of sediment–the vast majority of which can’t be sourced locally–so as to restore area wetlands and mudflats;

  • Also required would be an estimated US$110 billion to locally adapt to higher sea levels by 2050, based on existing plans in place or used as placeholders for entities that have yet to plan; and

  • We should expect much more sea-level rise locally because of the newly accelerated melting of the ice caps in Antarctica and Greenland.

Millions of cubic yards equivalent to over 420 Salesforce Tower high-rises? Some $110 billion which has no possibility whatsoever of being funded, locally let alone regionally? And those massive local requirements posed by the melting ice caps? How are these unprecedented high climate-related losses to be compensated for?

It’s not surprising that the individual interventions presented that day, and all the hard work they already required, paled into insignificance against the funding and work challenges posed by those three items.

What to do? How to respond?

II

You respond first and foremost by critically rethinking the direct or underlying estimates of losses (economic, physical, lives, and more) incurred if we don’t take action now. It’s been my experience that none of these estimated losses takes into account the losses already prevented from occurring by infrastructure operators and emergency managers, who avert systemwide and regional failures that would have happened had they not intervened beforehand, sometimes at the last moment.

Why are these uncalculated billions and billions of saved dollars important when it comes to responding to sea level rise, increased storm surges, more inland flooding, rising groundwater levels and other sequelae?

Because it is from this pool of real-time talent, skills and practices that society will be drawing for operationally redesigning around the inevitable shortfalls in new technologies, macro-plans and regulations for climate restoration and recovery.


3. It is the professional’s duty of care that questions dominant methods for emergency risk management.

I

We researchers estimated that the annual probability of a major stretch of an island’s levees failing ranged from 4% to 24% due to slope failure. (Slope instability in this scenario would be caused by flooding behind the levee as well as high water levels on its water side.)

Our estimates were considerably higher than the official one, in large part because the research project relied on methodologies validated against benchmark studies.

We presented the findings to the island’s management board. Their first and really only question was whether our estimates would be revealed to the island’s insurers.

II

We undertook a hotwash afterwards to figure out their–how to put it?–underwhelming response:

Didn’t they understand that the upper range, 24% per annum, implied a levee breach was nigh inevitable with respect to our failure scenario? Or, to put the question to our side, in what ways did the 24% per annum estimate fall short of being a failure probability of 1.0?

But if as high as 24% per annum, why hadn’t there been a levee breach over the many decades since the last major one on the island?

And what about the islands nearby? Assuming even a few of these had a similar upper range, why weren’t levee failures happening more often?

The 4% – 24% range was with respect to annual levee failure due to slope instability only. If you add in all the levee failure modes possible (e.g., due to seepage rather than overtopping and flooding), the combined probability of levee failure would have to be higher. (But then again, what are the conditions under which the more ways there are to fail, the more likely failure is?)
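The standard way to combine failure modes, again assuming (questionably) that the modes are mutually independent, is one minus the product of the per-mode survival probabilities. The per-mode numbers below other than the slope-instability figure are hypothetical, for illustration only:

```python
import math

def combined_failure_probability(mode_probs) -> float:
    """Probability that at least one failure mode occurs in a year,
    assuming the modes are mutually independent."""
    return 1 - math.prod(1 - p for p in mode_probs)

# Slope instability (0.24, the study's upper range) plus two
# hypothetical modes (seepage, overtopping) at 5% each:
print(round(combined_failure_probability([0.24, 0.05, 0.05]), 4))  # 0.3141
```

Under independence, adding modes can only push the combined probability up, which is one answer to the parenthetical question; correlated modes (say, seepage and overtopping driven by the same high-water event) are exactly the conditions under which this simple multiplication misleads.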

You could say one reason why levee failure there hadn’t happened–yet–was because it hadn’t been long enough. That is: not yet a long enough period to observe levee breaches so as to form the distribution from which the 24% could be established or corrected empirically. But, methodologically, the burden of proof was on us, the team of levee experts, to explain why the decades and decades of levee use wasn’t “long enough,” or what that long-enough might actually look like.
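The “long enough” question can itself be put numerically: how likely is a zero-failure record of a given length if the true annual probability really were 24%? The 50-year window below is illustrative (standing in for “decades and decades”), and the independence assumption carries over:

```python
def zero_failures_likelihood(p_annual: float, years: int) -> float:
    """Likelihood of observing zero failures over `years` years
    if the true independent annual failure probability is p_annual."""
    return (1 - p_annual) ** years

# If p = 0.24 were right, fifty breach-free years would be a
# roughly one-in-a-million observation:
print(zero_failures_likelihood(0.24, 50))  # ~1.1e-06
```

On this reading, the long breach-free record is itself evidence against the upper-range estimate, which is why the burden of explaining the discrepancy fell on the expert team rather than the islanders.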

Also, the levee stretch in question could be “failing to fail.” It might be that this stretch had not undergone events that loaded it to capacity or worse. (But then again: How much worse would the conditions have to be in our expert view? Just what is “a probability of failing to fail”?)

To put all this differently: was this levee stretch on that island more diverse and more resilient (say, in the way biodiverse ecosystems are said to be more resilient) than current methods capture, in ways the islanders better understood and perhaps even managed?

III

But the most significant point from the hotwash was the one none of us saw the need to voice: How could we accuse the management board and islanders of being short-sighted, when so much else was challenging us, the team, to make sense of our own estimates for the purposes of island emergency preparedness and management?

After all, we’d be the first to insist that these island levees are themselves a key infrastructure protecting other infrastructures, including river-water supplies, island agriculture and adjacent wetlands. It is our duty of care to follow up on the errors and induced risks associated with current ways of thinking about risk management for and during crises.

Inflation? “Climate change significantly limits the ability of central banks to control inflation.”

Recent academic literature has shown that climate change has significant implications for inflation. First, climate-related events (such as hurricanes, droughts, heatwaves and floods) can lead to supply-side disruptions that can increase inflationary pressures − the so-called ‘climateflation’ (Schnabel, 2022). Physical supply-side effects of climate change include the reduction in labour and capital productivity, the destruction of capital equipment, the decline in agricultural productivity and the increase in crop output failures (Beirne et al., 2021a; Storm, 2022). Climate-related physical changes can also affect the demand side of the economy, since they can lead to a decline in consumption and investment. For example, households might increase precautionary saving and firms might reduce investment due to uncertainty about profitability (Dafermos et al., 2018).

There is evidence that climate-related events have already affected inflation across the globe. For example, Parker (2018) finds that the impact of weather-related disasters on inflation is significant and persistent in low-income and middle-income countries, but less significant in high-income countries. He also shows that the effects on inflation differ between disaster types, a finding that is consistent with the empirical results of Kabundi et al. (2022). Beirne et al. (2021a) focus on the euro area and show that disasters have, on aggregate, a positive effect on inflation, with heterogeneous results across inflation sub-indices. Using a sample of high-income and medium-income countries, Faccia et al. (2021) find that hot summers tend to increase food price inflation in the short run. However, in the medium term this impact is insignificant or negative. Kunawotor et al. (2022) find a positive impact of weather-related disasters on inflation in African countries. Moreover, higher temperatures tend to increase inflation according to the empirical results of Ciccarelli et al. (2023) and Kotz et al. (2023).

Second, the policies that might be implemented for achieving the transition to a net zero economy (such as carbon pricing and environmental regulation) can lead to increasing costs for firms which might be passed on to prices − the so-called ‘fossilflation’ (Schnabel, 2022). For example, the climate scenarios that have been developed by the Network for Greening the Financial System (NGFS) show that in the case in which carbon prices increase significantly in the coming years the impact on inflation can be substantial (NGFS, 2023). The econometric literature is a bit less conclusive. Moessner (2022) finds that carbon prices have a positive impact on inflation in OECD countries. Santabárbara and Suárez-Varela (2022) show that cap-and-trade systems have increased inflation volatility in OECD countries – the same is not, however, the case for carbon taxes. Konradt and Weber di Mauro (2023) do not find significant effects of carbon taxes on inflation in Europe and Canada. However, all these empirical studies should be treated with caution: carbon prices have so far been relatively low, and inflation might react differently in the case in which carbon prices increase abruptly in the future.

Third, green technologies, such as electric vehicles, solar panels, wind turbines and batteries rely extensively on minerals like copper, lithium and nickel. If the green transition takes place within a short time period, it might be inevitable that some of these minerals will face excess demand and this can lead to inflationary pressures (see also Storm, 2022). Schnabel (2022) has called this possibility ‘greenflation’.

Central banks have limited control over most of these climate-related inflationary sources. For example, an increase in prices caused by a climate-related food supply shock cannot be addressed by increasing interest rates. Or, if governments decide to increase carbon prices as part of their decarbonisation plans, an increase in interest rates can do little to address the fossilflation that this increase can cause. Of course, central banks can affect demand by increasing interest rates. However, this is unlikely to be sufficient to keep inflation under control, unless perhaps interest rates increase substantially, which could have severe side effects, including debt repayment problems and increases in unemployment rates.

But, on top of it, climate change also impairs the so-called transmission channels of monetary policy, including expectations channels, credit channels and asset price channels (NGFS, 2020). For example, climate change can overall make the banking system more financially fragile. This can be the case due to both transition and physical risks (Battiston et al., 2021; Campiglio et al., 2018; Semieniuk et al., 2021). Transition risks capture the impact that an abrupt climate transition might have on the financial position of carbon-intensive companies which can then have spillover effects on the financial system. Physical risks are associated with climate-related economic disasters or financial losses that stem from gradual global warming and climate events, and can lead to an increase in defaults on household and corporate loans or asset price declines.

The fact that the financial system is exposed to these climate-related financial risks implies that in a scenario, for instance, in which central banks reduce interest rates to stimulate credit in a period of low inflation, this might have very negligible effects on credit provision, as banks might be under-capitalised or might find it difficult to identify creditworthy borrowers. The climate exposure of the financial system might also result in asset prices being unresponsive to changes in interest rates.

Overall, these fundamental changes in the determinants of inflation and the transmission channels of monetary policy call into question the ability of central banks to control inflation. Controlling inflation in the future might require a more systematic use of other instruments that are under the control of governments, such as price caps or product market policies that prevent oligopolistic structures.

Yannis Dafermos (2024), The climate crisis meets the ECB: tinkering around the edges or paradigm shift? SOAS Department of Economics Working Paper No. 264, London: SOAS University of London (accessed online at https://www.soas.ac.uk/sites/default/files/2024-07/economics-wp264.pdf)

Expanding the early warning systems for geoengineering interventions

It’s a commonplace to argue that scientists and experts need to be talking to and engaging much more with the holders of traditional knowledge. What’s less often provided, save for the proverbial call for curing cancer, are examples of the mutual benefit of doing so. One priority area for reciprocity, I suggest, is that of geoengineering.

Geoengineering is offered up as a last-ditch effort to save the planet in the midst of its very real climate emergency. Even so, one must wonder: What better way to bring the governments of the world to their collective knees than solutions like those that would balloon the skies with mirrors and sulfur dioxide and the seas with chemical changes to capture more carbon, all because the climate emergency has left humanity no choice—no alternative—but to be unreliable on unprecedented scales?

Such indeed is the rationale for having in place robust monitoring and evaluation (M&E) systems for geoengineering interventions. Now of course, much of the current debate is about the unintended consequences of geoengineering and about the early warning systems for monitoring and evaluating them. But those consequences are almost exclusively framed by the concerns of Global North and South experts and scientists.

I suggest that the major priority of governments and the regulators of geoengineering initiatives–and there is no stopping this experimentation!–is to ensure that the early warning systems for droughts and bad weather still in operation among pastoralists and agriculturists of the developing world are also included and canvassed.

The latter are, I believe, a quite specific case where the intersection of measurable and nonmeasurable indicators is of mutual benefit to far more than the presiding scientists and experts in the Global North and South. For my part, I wonder what will be the decrease (or increase for that matter) in the murders of local “rainmakers” (forecasters) because of geoengineering.


On the murder of rainmakers during drought, please see Isao Murahashi (2024), “Climate change or local justice? On frequent drought and regicide in South Sudan.” Presentation given on August 8 2024 as a part of the International Hyflex Sessions, “Living in the Anthropocene, living in uncertainty: Reconfiguring development and humanitarian assistance as ‘care’ with relational approach,” held at IDS Sussex.

Start by distinguishing capitalism’s losers

Joseph Stiglitz, Nobel economist, confirms: “Only around half of Americans born after 1980 could hope to have earnings higher than their parents (down from 90 percent for the cohort born in 1940).”

But even if true, is the implication that capitalism was better then than now?

In the same vein, pathologies arising from increased financialization have been “blamed on the disappearance of capitalism in its classical form, with the latter now painted in retrospect as a system in which market logics led to productive investment, more-or-less shared growth and functional politics.” But haven’t we always been told capitalism is bad, albeit winners and losers vary across space or time?

In contrast, it is easy to make an empirical case that recent developments in capitalism are not all that new:

In the field of study I’m in — platform labour studies — there is this story that platformisation is tied to the rise of a precariat, that we have these social and labour norms from the postwar era, and that they have been degraded by neoliberalism, of which one feature is platformisation. It’s one part of a broader narrative of the decline of the postwar social contract. For me, what’s missing from this story is that the postwar social contract was only ever a reality for a very small group of workers at a very historically specific moment.

Actually, that postwar social contract had very much been based on exclusions, exploitation and extraction from the majority of the world’s workforce. What we’re seeing now is simply the category of people that have access to that social contract is rapidly shrinking. That’s being read as a new kind of phenomenon when actually, it’s the generalisation of a phenomenon that has existed for a huge number of people.

In other words, many of our parents and grandparents were suffering under capitalism all along just as we are.

But any such conclusion leads to an obvious question: What if the seriatim crises of capitalism are treated as proof-positive not of its death rattle but of its vitality in morphing loser after loser?


Sources

https://ourtime.substack.com/p/seven-theses-on-brenner-and-rileys

https://www.common-wealth.org/centre-for-democratising-work/interview-dalia-gebrial

https://www.google.com/books/edition/The_Road_to_Freedom_Economics_and_the_Go/xWHpEAAAQBAJ?hl=en&gbpv=1&dq=Stiglitz+%22Only+around+half+of+Americans+born+after+1980+could+hope+to+have+earnings+higher+than+their+parents+(down+from+90+percent+for+the+cohort+born+in+1940).%22&pg=PT44&printsec=frontcover

Most popular blog entries (by number of views, 2022 to present)

Recalibrating politics: the Kennedy White House. . . https://mess-and-reliability.blog/2019/12/16/recalibrating-politics-the-kennedy-white-house-dinner-for-andre-malraux-longer-read/

Spread the word: We need more Extreme Climate Resilience Desks for real-time infrastructure operations!. . .https://mess-and-reliability.blog/2024/04/23/spread-the-word-we-need-more-extreme-climate-resilience-desks-for-real-time-infrastructure-operations/

What the Thai BL series, “Bad Buddy,” has to tell us. . .https://mess-and-reliability.blog/2022/02/26/what-the-thai-bl-series-bad-buddy-has-to-tell-us-about-societal-reset-updated-2/

“Apocalyptic” turning into apocrypha?

First, itemize a few of the apocalyptic predictions that have failed to materialize over the past five decades: global nuclear war, communist world hegemony, global starvation, oil depletion, nuclear winter, a prolonged night/new ice age, and the international meltdown because of the millennium computer bug.

Now itemize—again an arbitrary few—the crises we have actually lived through in the last three decades or so: the banking crisis of the early 1990s, the Mexican near-default of 1995, the Asian financial crisis in 1997, the Long-Term Capital Management collapse in 1998, the bursting of the dot-com/stock market bubble in 2000, the terrorist attacks of September 11, wars in Iraq and Afghanistan, the breakdown of the Doha round of multilateral trade talks, the 2008 financial crisis and Great Recession, the default of Greece, the resurgence of Western populism and nativism. . .(and don’t forget the Argentine default of 2001 and the world fisheries collapse and. . .)

And yet, the habitual response: “But, but. . .it still could get worse!” Well, yes, it could. What, though, are we getting from this psychological habituation to it-always-could-get-much-worse? One answer: Doing so saves us all the trouble and worry of having to figure out the details.