Dispersed regulatory functions with respect to society’s critical infrastructures


Here’s my starting point on government regulation (from our 2016 Reliability and Risk):

. . .as long as infrastructure regulation is equated with what regulators do, society will have a very myopic understanding of how regulation functions for critical infrastructures. The regulation of infrastructures is not just what the regulators do; it is also what the infrastructures do in ways that their regulator of record could never do on its own.

Contrary to conventional wisdom, it is not a criticism of regulators to say they never have the same timely information as do those operating the critical infrastructures being regulated. It’s a statement of the obvious cast as a negative. Restate the obvious, but now as a positive: those who have the real-time information must fulfill regulatory functions that the official regulator cannot fulfill. How well they are fulfilling the regulatory functions depends on (1) the skills in real-time risk management of their reliability professionals and (2) where those professionals are located, which for our purposes means the infrastructure control rooms and their respective support units.

From our perspective, it makes little sense for critics to conclude that regulators are failing because formal regulations are not being complied with, if the infrastructures are managing in a highly reliable fashion and would not be doing so if they followed those regulations to the letter.

In this way, the twofold nature of government regulation becomes explicit for the regulator: (1) To what extent does regulation by the regulator increase control operator options and reduce volatility for the critical infrastructure and (2) to what extent is any regulation of that regulator, which inadvertently reduces operational options or increases real-time volatility for the control rooms, corrected by the regulator of record as soon as possible?

In practical terms, this means there is not just the risk of regulatory non-compliance by the infrastructure, there is also the infrastructure’s risk of compliance with defective regulations. Either way, the importance of time from discovery to correction of error reinforces the nature of dispersed regulatory functions: A shorter time to error discovery has the advantage of discovering errors that would have propagated into much larger ones if left uncorrected.


The upshot for the regulator of record?

If, notwithstanding the recognition that regulatory functions are necessarily dispersed, policymakers still insist that the regulator’s task is one of regulating the whole cycle of the infrastructure throughout its operational stages of normal, disrupted, failed, and recovered onwards, then it is better to say that at best the regulator of record is in permanent setback management. At worst, its own activities require the coping behavior we associate with emergency management during crises, i.e., its setbacks are primarily negative.


And yet the demands on government regulation are increasing at the same time. It’s not just the fact that failing to regulate increases pressures for more regulation. It’s also because infrastructures are interconnected, and so too by implication are their respective regulators of record.

No one should doubt, for example, that the more interconnected the systems to be regulated and the more complex each system and its own regulations are, the more regulatory and inter-regulatory oversight will have to be given to latent interconnections, risks and the transition thresholds where they shift from latent to manifest. Regulating ahead for latent interconnectivities is a very difficult task for even one regulator, let alone for something like “inter-regulatory oversight.” This too reinforces the need for a dispersed regulatory regime well beyond the regulator of record.


It should go without saying that the regulatory functions of the infrastructure’s control room (if any) will differ from the health and safety regulations and approaches needed elsewhere in the critical infrastructure. This means we should not expect there to be a single set of procedural or supervisory approaches that can apply throughout the entire infrastructure, however committed it may be to service reliability.

The challenge instead is to better understand the institutional niche of critical infrastructures, that is, how infrastructures themselves function in allocating, distributing, regulating and stabilizing that reliability and safety apart from, if not independently of, the respective government regulators of record. That this knowledge will always be for regulators partial and punctured by gaps and ignorance should go without saying.

There is, however, a serious asymmetry in the current design orientation for regulating infrastructure reliability (including safety) and the practice orientation of reliability professionals in and around control centers for the infrastructure. When reliability professionals express discomfort over a design orientation, regulators and others insist that this has to be expressed in terms of formal analysis, where the burden of proof is on the reliability professional to show what in this design orientation is not reliable. That burden of proof, we believe, is the responsibility of the regulator; it is not a regulatory function of the infrastructure, when real-time service reliability matters.

Reframing the latest drought in East Africa

Nothing in what follows argues against the latest East Africa drought being of catastrophic proportions in terms of human and livestock deaths and migrations. What I want to do here is contextualize this catastrophe differently in order to show that what remains a catastrophe has some different but very important policy and management implications.


Start with the current debates over periodizing World Wars I and II. It’s one thing to adopt the conventional periodization of the latter as 1939 – 1945. It is another thing to read in detail how 1931 – 1953 was a protracted period of conflicts and wars unfolding to and from a central paroxysm in Europe.

In the latter perspective, the December 1941 – September 1945 paroxysm, with the Shoah and the carnage, was short and embedded in a much longer series of large regional wars. These in turn were less preludes to each other than an unfolding process that was indeed worldwide. (Think: Japan’s invasion of Manchuria in 1931, Italy’s invasion of Ethiopia in 1935, the late 1940s Dutch war in Indonesia, the French war in Indochina from the late 1940s through early 1950s, and the Korean War, among regional conflicts across the globe.)


Now think of the latest East Africa drought as one such paroxysm, with drought-related conflicts leading up to and following from it. What follows from such a construction?

Well, one thing that follows is so obvious that it might be missed. This East Africa drought is not a paroxysm for pastoralist systems worldwide, let alone in the world’s arid and semi-arid lands.

Even the current climate emergency would fall short of that role, given so much regional and local variation in climate response to date. Of course, we can imagine a global polycrisis involving drought, climate change and the like, but that scenario would inevitably be but one of other potential polycrisis scenarios out there.

So, let’s return to East Africa as the unit and level of analysis, with the current drought being an indisputable paroxysm.

What follows for policy and management there?

Current emergency management lingo about this or that “longer-term recovery” would be considerably problematized when the longer term is one drought unfolding into another drought and so on. Immediate emergency response would look considerably less immediate when embedded in a process of recurring response always before the next disaster.

To be clear here, it’s an advance in pastoralist development to embed mechanisms for explicit emergency preparedness, immediate response and longer-term recovery. But no one should delude themselves into believing that making these stages explicit and identifying interventions won’t highlight the ongoing and unfolding difficulties in recasting these processes in terms of policy or management narratives with beginnings, middles and ends.

What does this mean practically?

What does it mean to frame the current East Africa drought as a paroxysm that extends both the spatial and temporal terms of “recurring drought response and recovery across East Africa”?

Clearly then one major issue is government budgets (in the plural) for their recurrent operations in pastoralist areas. Or more negatively, you’re looking at the recurrent cost crises of East Africa governments, which, to my mind, far too few critics analyze, as they seem more fixated on the obvious failures of capital development projects and programs for pastoralists.

My own view (and I stand to be corrected) is that you have to have a recurrent operating budget already in place in order to make recurring drought response and recovery more effective on the ground over time.

Anything else?

One of the major reasons why “recurring drought response and recovery” belongs in government’s recurrent rather than capital budget is that pastoralists give ongoing priority to preventing other disasters from happening in real time along the way, and improvisational behavior is needed to do that.

Yes, the government budget for staff operations falls woefully short in helping pastoralists do so, but it is government operations we are talking about, not the log-frame of a project.


Buchanan, A. (2023). Globalizing the Second World War. Past & Present: A Journal of Historical Studies 258: 246-281.

Yes, but


One space spreads through all creatures equally –
inner-world-space. Birds quietly flying go
flying through us.                                                                          Rainer Maria Rilke


They spoke to me of people, and of humanity.
But I've never seen people, or humanity.
I've seen various people, astonishingly dissimilar,
Each separated from the next by an unpeopled space.                 Fernando Pessoa

Recasting national policies for pastoralist development

I propose to categorize policies according to their intended goal into a three-fold typology: (i) compensation policies aim to buffer the negative effects of technological change ex-post to cope with the danger of frictional unemployment, (ii) investment policies aim to prepare and upskill workers ex-ante to cope with structural changes at the workplace and to match the skill and task demands of new technologies, [and] (iii) steering policies treat technological change not simply as an exogenous market force and aim to actively steer the pace and direction of technological change by shaping employment, investment, and innovation decisions of firms.

R. Bürgisser (2023), Policy Responses to Technological Change in the Workplace, European
Commission, Seville, JRC130830 (accessed online at https://retobuergisser.com/publication/ecjrc_policy/ECJRC_policy.pdf)

This epigraph focuses specifically on how to think about policies that better respond to the effects of automation in displacing workers.

Please re-read the epigraph and then undertake the following thought experiment.


Imagine it is pastoralists who are being displaced from their usual herding workplaces, in this case by land encroachment, sedentarization, climate change, mining, or other largely exogenous factors.

The question then becomes: what are the compensation, investment and steering policies of government, among others, to address this displacement? That is, where are the policies to: (1) compensate herders for loss of productive livelihoods, (2) upskill herders in the face of eventually losing their current employment, and (3) steer the herding economies and markets in ways that do not lose out if and where new displacement occurs?

The answer? With the odd exception that proves the rule, no such national policies exist.


Yes, yes, of course there are NGOs, donor projects, and local departments trying to work along these lines. But one has to ask at this point in development history whether their existence is the excuse government uses for avoiding having to undertake such policies, regionally or nationally.


A more productive exercise might be to ask: How would various pro-pastoralist interventions be classified: as compensatory, as investment, and/or as steering?

It seems to me that many of the pro-pastoralist interventions fall under the rubric of “steering policies”. The aim is to keep pastoralists who are already there, there–and better off in some regards. Better veterinary measures, paravets and mobile teachers that travel with the herding households, real-time marketing support, mobile health clinics, restocking programs as and when needed, better water point management and participation, and the like are offered up as ways to improve herding livelihoods in the arid and semi-arid lands.


Fair enough, but clearly not far enough, right?

For where are the corresponding compensation and investment policies?

Where, for example, are the policy interventions for improving and capitalizing on the re-entry of remittance-sending members back into pastoralism once they return home? Where are the national policies to compensate farmers for not encroaching further on pastoralist lands, e.g., by increasing investments on the agricultural land they already have? Where are the national (and international) policies that recognize that maintaining the ecological footprint of pastoralist systems is far less expensive than maintaining that of urban and peri-urban infrastructures?


So what?

These missing government policies would function much along the lines government support of various “new green initiatives” is meant to: They seek to derisk dryland development by enlisting the private capital of pastoralists, agro-pastoralists and farmers via adjusting the risk/returns on their private investments in local infrastructure, such as markets and transportation. Obviously this is easier said than done and would have distributional impacts if done.


On the derisking state, see https://osf.io/preprints/socarxiv/hpbj2/

A grammar of policy analysis


Graduate students in public policy analysis and management will have come across an idealized sequence for undertaking a professional policy analysis, e.g., first we define the problem, then assemble the evidence, then analyze it, and so on until we make our recommendation. This idealized sequence, or something like it, is cast in the present tense.

My experience as a practicing analyst is that the idealized sequence of steps is markedly not in the present tense:

Having completed the analysis, I wrote the memo to recommend next steps.

The past gerund, “having completed the analysis,” indicates something finished, a hope that stands in marked contrast to real-world policies in their persisting incompletion—also a very different kind of “present tense” than the one in policy schools. The starting gerund also situates analysis more explicitly within an ongoing context without which there wouldn’t be analysis.

The infinitive, “to recommend,” introduces in turn the promise that our memo will be dealt with, albeit beyond our control but within that context of which we analysts are part. Indeed, the point of the past gerund/past tense/infinitive formulation is to make clear that, “objectively speaking”, analysts in the present are not to blame for anything like the real-world incompletion all around us.


So what?

The gap between the two idealized sequences looks a great deal like the gap between the beliefs we say we hold versus the ways we say we practice those beliefs. In neither case need the professed beliefs or practices be the ones we actually hold and undertake.

This double gap between idealized versions themselves and in respect to actual practice is found throughout policy analysis as I know it. For example, it’s common enough that today’s accounts of policy and management are presented from not just one discipline’s perspective (say, economics), but many—including political science, psychology, organization theory, and more.

Yet what frequently gets missed are the implied hyphens, i.e., “from a socio-politico-economic-cultural-historical-psychological. . .perspective”. How could this matter, you ask?

Consider Polonius in Hamlet: “The best actors in the world, either for tragedy, comedy, history, pastoral, pastoral-comical, historical-pastoral, tragical-historical, tragical-comical-historical-pastoral. . .”

The hyphens there function as the performative demonstration of Polonius’s long-windedness. Interdisciplinary accounts of policy analysis and management insist that you take their added wordage as anything but long-windedness.

Or another example: “It is obviously a highly complex phenomenon that needs global cooperation as a response as well as a holistic approach because the potential collapses are interrelated.” Each word is written as if it were terra firma, resolute, placed there to resist being pushed down. In fact, the sentence offers no such prospect. Each word functions as a cowpat to be stepped into, distracting us.


Moretti, F. (2013). The Bourgeois: Between History and Literature. Verso: London and New York

Another must-read for policy and management types

Environmental and Natural Resource Economics and Systemic Racism (short report from Resources for the Future)
By Amy W. Ando, Titus O. Awokuse, Nathan W. Chan, Jimena González-Ramírez, Sumeet Gulati, Matthew G. Interis, Sarah Jacobson, Dale T. Manning, and Samuel Stolper


Mercifully well-written and short, this working paper summarizes in one place all that is wrong with environmental and natural resource economics as applied to real people with real problems making real decisions.

“Building in resilience”: really?


The typical divide across pre-disaster/disaster/post-disaster becomes more complicated when you talk to practicing emergency managers. They can go into great detail about efforts to “prepare for,” “mitigate,” and “prevent” situations even when in immediate response and restoration, and not just beforehand.

It would however be a mistake, I think, to see preparation, mitigation and prevention as continuous variables, punctuated from time to time for whatever reason.

To telegraph ahead, what changes over time, more formally and specifically, are different configurations of socio-technical interconnections around which ongoing prevention, preparedness and mitigation efforts are coordinated—from now into and across immediate response and initial restoration of services.


To see how, start this way.

Some infrastructure operators and emergency managers we interviewed say they are best in response and restoration when following plans, while others say they are at their best when surprised by the unexpected. This means operations people may look like cowboys to the engineering department, because the two groups cognitively understand the same system differently: “I don’t think you respond to 92 breaks in 13 days without having the ability to adapt on the fly,” said a city’s water distribution manager.

But this may be less a matter of different professional orientations as commonly understood and more about orientations with respect to different “scales of operation,” even within the same city.

For engineers, seismically retrofitting a bridge represents efforts to manage ahead latent interconnectivity so that it does not become manifest during or after an earthquake, e.g., the bridge holds and traffic is not disrupted there. For operations people, even if the seismically retrofitted bridge does fail in the earthquake and traffic is disrupted, improvisations are still possible, both by the city departments involved and by commuters who individually or collectively organize alternatives. The respective interconnectivities, before and after, of course look very different.

Improvising after failure may seem like weak beer compared to the promise of better avoiding failure in the first place, but not foregrounding the necessity of improvisations (and improvisational skills) leads to confusion about “building in resilience” and its role in emergency management. All the money and political will beforehand won’t get rid of the key role of improvisation in emergency management. There is no planner’s workaround for improvisation.


So what? Isn’t that obvious?

If the necessity of improvisation in emergency management is obvious, not all the implications are.

A very major issue emerges when it comes to the role of politics after immediate response and initial service restoration, that is, as you move into longer-term recovery. For some interviewees, the transition out of a (more or less) command-and-control response, with its own clarity, logic and urgency, into a more inclusive, more politicized (read, more conflicted) recovery raises unavoidable governance issues.

What, they ask, are the organizational and management markers and decisionmaking criteria for resuming (bettered) civic operations? “At this point, it’s not been determined,” said an interviewee with long experience.

Nor is that question, along with the like question “When is ‘resilient-enough’ enough?,” in fact determinable. Why? Because the granularity about the (latent and manifest) interconnectivities necessary in coordinating immediate response and initial service restoration is simply not possible, prospectively, for longer-term recovery.

A National Academy of Reliable Infrastructure Management

We need no further reminder than the 2021 Texas grid collapse and the shutdown of the Colonial Pipeline of how essential the real-time management of infrastructures is to people’s lives and livelihoods. Yet that management was the missing middle of President Biden’s mega-plan for new infrastructure construction and renovations. Nor is real-time management central to other initiatives like a National Infrastructure Bank, proposed in 2007 and resurfaced in the policy mix through 2020 legislation.

Providing the missing management means banking expertise the nation already has to operate these large, changing systems. They range from energy grids, urban water supplies and flood protection to telecommunications, vessel transportation and aviation, along with others. If anything, the legislation and initiatives will increase the need for real-time professionals to correct for inevitable shortfalls that jeopardize systemwide reliability and safety.

A National Academy of Reliable Infrastructure Management would remediate the nation’s infrastructure crisis by enhancing and advancing that high reliability management. The management challenges are beyond the domains of engineering, economics and systems modeling, but are of priority and urgency equal to those addressed by the National Academies of Sciences, Engineering, and Medicine.


High reliability management is understood in real-time infrastructure operations as the continuous and safe provision of what are considered to be critical services, even during (especially during) turbulent and changing times.

The Academy would bring together an under-recognized class of experts in that real-time management from around the world. These reliability professionals include, most important, infrastructure control room operators (often with long experience and variable formal education), along with their managers and their immediate expert support staff (more likely to have higher formal degrees).

The Academy would promote their participation through projects, studies, and other advisory and convening activities. The mission is to examine, assemble and advance evidence-based findings for real-time reliability and safety management of infrastructures under 21st century conditions. In doing so, the Academy would provide the heft in facilitating research access to major control centers to which entry has been restricted for proprietary or security reasons.

Why an Academy for infrastructure management? As demonstrated repeatedly, large critical infrastructures must be managed beyond their technology, formal designs, and published regulations. The Academy challenge is to ensure that the tasks and demands of the rapidly changing infrastructure technologies are matched to the people with the skills and expertise to manage them beyond inevitable glitches and contingencies.

As the Academy gained knowledge, it would foster the management expertise to better navigate the interdependencies and interconnections of critical infrastructure sectors. Doing so requires two tracks. Not only is the Academy’s attention on ensuring that critical national services, like water, electricity and natural gas, hazardous liquids transmission, and aviation, are provided when most needed, always right now without incident. It also means focusing on ensuring their reliable and safe interconnectivity: Natural gas is used for electricity provision, which supplies the water needed by refineries that process the hazardous liquids, including Jet A-1 fuel for aviation.


The challenge continues to be how to analyze and improve the interconnectivity as it is navigated in real time. No one is responsible for that high reliability management picture. An example illustrates the huge stakes in getting this right. Assume an explosion at a major natural gas reservoir has occurred. Presently, the disaster leads to root-cause analyses, a process of zooming down to determine why and what precipitated the explosion. This is the responsibility of staff in the infrastructure and its regulator of record.

Identifying causes of the explosion is obviously important to prevent further explosions from happening at this and other reservoirs. But knowing causes does not go far enough in making sure that other systems are managed reliably and safely in light of the disaster. Required at the same time is zooming up the system and across systems with which it is interconnected.

What happened to the real-time operations of the natural gas transmission as a whole during and after the explosion? What happened to infrastructures depending on natural gas for their own operations during the explosion and in their next steps ahead? To my knowledge, the regulators of record do not work together to answer the latter question, routinely or as a matter of priority.

Such questions would be of core concern to the new Academy. Was the control center for natural gas transmission able to compensate for loss of the reservoir in real time? Did the control room keep the crisis from spreading to other parts of its transmission and distribution systems, including the variety of end-use customers? How did the control room compensate, where did it stumble, and what other parts of its system were vulnerable or not?

But more than zooming up through the system is required. In the same instant, we must know what happened because of the explosion to the critical infrastructures depending on its natural gas. Some may also have control centers: Were their operators able to maintain their respective system’s reliability and safety in the face of that explosion? Since natural gas is often interoperable with electricity, it is critical to determine if or to what extent the electricity infrastructure was affected by the explosion.

These assessments are also necessary to keep infrastructures interconnected under complex and changing conditions. It’s safe to say that zooming down for a root-cause analysis has been far more common than zooming up and across. But only the latter assessments highlight major vulnerabilities introduced when root-cause analyses are the basis for systemwide recommendations to ensure the disasters don’t happen again.

What is missing in root-cause analyses are the negative impacts, if any, of the recommended changes on high reliability management at the system and inter-system levels. Will the changes, when implemented, undermine the capacity of the infrastructure’s control room to prevent disruptions, such as explosions, that it had prevented in the past from cascading across the natural gas system or beyond?

No regulator of record or national body is tasked to answer that question about cascade potential and those entailed with it. Ensuring that there are answers would be the purview of the new Academy of Reliable Infrastructure Management.


Infrastructure cascades are understandably of central concern, where failure in one system leads to failure in others. But system engineers and modelers often have a very different view about these than control room operators.

One objective, for example, of network of networks modeling of infrastructures has been identifying which nodes and connections, when deleted, bring the network or sets of networks most immediately to collapse. But not failing immediately is what we expect to find in managed systems. In fact, the datasets we have on really existing infrastructure disruptions show that most are managed so as not to cascade over into other infrastructures and that certain infrastructures, most notably in energy, have a greater potential for cascading.

Modelers defend their focus as one of identifying worst-case scenarios (e.g., in today’s highly charged cyber-security arena). But control room operators and staff live in a real-time world where “what-if” scenarios cannot be the only way to treat probabilities and consequences.

Real-time reliability of their systems as systems must also account for the run of cases and frequencies of past or like events and their precursors. Real-time operators wouldn’t be reliability professionals if they ignored that, in their systems, brownouts at times precede blackouts, some levees are seen to seep long before failing, and the electric grid’s real-time indicators of precursors to disruption or failure typically increase beforehand. Reliability professionals (not least those in major control centers that face thousands and thousands of daily cyber-attacks) have to be skilled both in systemwide pattern recognition and in localized “what-if” scenario formulation.

Their expertise also reflects its own real-time indicators of effectiveness. These indicators are rarely if ever recognized by the regulators of record or system models of interconnectivity. The Academy would be the nation’s advocate for that expertise and early warning signals.


It’s important to establish from the outset that the Academy would be advancing leading (not lagging) indicators of systemwide collapse. Just as important, the indicators already exist for monitoring critical infrastructures operating at, or beyond, their performance edges, e.g.:

  • The infrastructure’s control room is in prolonged just-for-now performance, which means operators find it more difficult to maneuver out of a corner in which they find themselves. (“Just keep that generator online now!” even though the generator is scheduled for outage maintenance).
  • Real-time control operators are pushed into working increasingly outside their established bandwidths for operations, in effect having to work outside upper and lower bounds of competent performance.
  • Control room operators find that a chokepoint in their infrastructure (a major bottleneck that cannot be worked around) is failing adjacent to the chokepoint of another infrastructure with which it is functionally interconnected.
  • The decision rules operators reliably followed before are now reversed: “Prove we can launch” becomes “Prove we can’t launch” (Challenger Accident); “Ensure a capital cushion to protect against unexpected losses” becomes “From now on, manage for expected losses” (2008 financial crisis).
  • Measurable real-time operational redesigns (workarounds) are no longer effective. Nor can systemwide patterns be recognized or what-if scenarios formulated with the level of granularity achieved in the past.
  • Instead of being driven by widespread social dread of a next major failure ahead, control room professionals are told their track record up to now is the benchmark for reliability.

No one currently has the institutional niche and wherewithal to direct and sustain the nation’s attention on measuring and monitoring these real-time tipping points and transitions. The Academy would be better placed than anyone to cut through the noise, including typical objections about control rooms and their operators, so as to augment, update and prioritize the indicators list.


In my judgment, the principal objections to an Academy would not be its cost or clout, both of which would be very real. Rather, the real objections originate in complaints from other disciplines in infrastructure development: “Control room operators aren’t really experts, like the engineers and economists with whom they work” and “Control rooms aren’t innovative; in fact, they’re the opposite.” (The latter misconception is addressed in the next section.)

Major cultural differences have long divided engineers and control room operators and, more recently, “Ops (Operations)” and “IT (Information Technology)” staff. One engineer we interviewed called control room operators “neanderthals.” Economists and engineers assured us: Generally speaking, having to operate in unstudied conditions is a “risk” society must take in order to benefit from major technological advances.

Yet, control room operators continue to press for the specifics—What if this piece of new marketing software fails during the phasing out of those backups?—something we heard again and again, as one “go-live” date had to give way to another in an executive initiative to replace legacy systems in a major state control room. 

There are, of course, exceptions to such behavior. But no one reading this should doubt the outsized importance of engineers, economists and system modelers relative to real-time system operators and their wraparound support, at the center and in the field, when it comes to major infrastructure change and reform.

The professional orientation of control operators to prevent systemwide failure is clearly orthogonal to disciplines and professions insisting it’s all but impossible to innovate if you’re not prepared to fail.

Equally telling, calls for new technologies and software to correct for “operator error” are routinely made (1) in the absence of calculations by economists of the everyday savings of disasters averted and (2) in spite of a system-model focus on two states of operation, normal and failed, when it is during the intervening state of temporary service disruption that operators demonstrate their skills and use of indicators in restoring service. These and other differences in professional orientations would be treated far more constructively by a free-standing Academy.


Control operators, to the extent they are acknowledged for their expertise, have been disparaged as hidebound with a “don’t fix what’s already working” mentality. The reality is that because things are not working in real time, control operators must innovate then and there so as to maintain system reliability and safety.

Three domains of control room innovation are core to the Academy’s mission:

  1. Control rooms as unique centers of systemwide innovation and evolution

It is not sufficiently understood by engineers, economists and system modelers that infrastructure control rooms are an historically unique organizational formation. (Here as elsewhere, I thank my research colleague, Paul Schulman, for the insight.) They have evolved over time to take hard systemwide decisions under difficult societal conditions that require a decision, now.

In fact, the evolutionary advantage of control rooms lies in the skills, expertise and team situation awareness of their operators to redesign in real time what prove to be incomplete or otherwise defective technology, design and regulation. More, the high reliability mandate must be met without threatening the limits of the system to operate as a whole. There are no guarantees here, but the expertise is required when “fool-proof” technology and designs are found, too frequently, to be otherwise.

The Academy would treat these specifically organizational and management practices, skills and core competencies with the priority and resources the nation deserves.

  2. Importance of the reliability-matters test for other major technological innovations

It’s indisputable that innovations proposed for infrastructures by outside experts and consultants are needed. To be viable, however, they must pass the reliability-matters test: Would the innovation, if implemented, reduce the task volatility that real-time operators face? Does it increase their options to respond to task volatility? Does it increase their maneuverability in responding to different, often unpredictable or uncontrollable, performance conditions?

Among the many control room operators interviewed, I never met one who was against any innovation that increased options, reduced task volatility and/or increased performance maneuverability across changing conditions. I have, however, met economists, engineers and others who dismiss this reliability-matters test, as they also dismiss “only workarounds,” as proof of a control room’s “resistance to change.”

The Academy will not be able to stop the premature introduction of novel software and hardware into systemwide operations, but it can monitor their real-time management impacts and interconnected knock-on effects (as in the natural gas example and indicators list).

  3. Control operators and support staff as innovators in systemwide risk assessment

Talk of “trade-offs” is ubiquitous when discussing new designs and technologies. Control operators and wraparound support see the real-time demands of their high reliability mandate along different lines. 

For them—as for the infrastructure-reliant public—reliability in real time becomes “non-fungible.” That is, high reliability can’t be traded off against cost or efficiency or whatever when the safe and continuous provision of the critical service matters: again, right now, without failure. No number of economists, engineers and system modelers insisting that reliability is “actually” a probability estimate of meeting a standard will change the real-time mandate that systemwide disasters must be prevented from ever happening.

Nuclear reactors must not blow up, urban water supplies must not be contaminated by cryptosporidium or worse, electric grids must not island, jumbo jets must not drop from the sky, irreplaceable dams must not breach or overtop, and autonomous underwater vessels must not hazard the very oil rigs they are repairing. That disasters can or do happen reinforces the dread and commitment of the public and control operators to this precluded-event standard.

The better practices for high reliability management developed and modified across runs of different cases and infrastructures would be the Academy’s principal subject. The Academy’s ambit would be worldwide in this regard and extend well beyond the published best practices of professional societies and industry associations.

Infrastructure mandates for managing and innovating reliably and safely are, in short, not going away. Nor can they, even when systems are necessarily smaller, more decentralized, less interconnected, and more sustainable. Those systems too will be managed as if peoples’ lives and livelihoods depend on it—because they do.

Principal sources

High Reliability Management and Reliability and Risk (2008 and 2016 respectively from Stanford University Press and co-authored with Paul R. Schulman). A summary can be found in E. Roe and P. Schulman (2018). “A Reliability & Risk Framework for the Assessment and Management of System Risks in Critical Infrastructures with Central Control Rooms,” Safety Science 110 (Part C): 80-88

For a shorter version of this blog, see “A National Academy of Reliable Infrastructure Management.” Issues in Science and Technology (August 3, 2021), accessed online at https://issues.org/national-academy-reliable-infrastructure-management-roe/

Market contagion, financial crises and Girardian economics

–That people act in an imitative fashion under conditions of high economic uncertainty is not news: Panic selling, spiraling inflation, overheated art markets, and speculative frenzies (I sell when you sell, buy when you buy) are some of the many instances of imitative economic behavior.[1] 

What’s bothered me, though, is the relative lack of reference in the economic literature to René Girard’s theory of mimetic contagion (mimetic desire, in his terms). Girard’s framework has major implications not identified by economists writing on market contagion and associated crises.

Brief description of a Girardian economics

From a Girardian perspective, financial and economic uncertainty begets ever more uncertainty, as more and more people imitate each other in a desperate rush to figure out what to do. At some point, classes of people are arbitrarily identified (scapegoated in Girard’s terms) as the cause of the crisis, widespread violence ensues against or because of them, and new financial and economic institutions emerge from the hostile, violent conditions.

Most economic contagion models do not go that far in predicting violence (to be clear, predicting does not mean advocating).[2]

Contagion modelers argue that the way to break the cycle of imitation is through more accurate information. Girardians will have none of that. They insist the underlying and overwhelming problem is pervasive uncertainty, for which there is no recourse to “certainty” as a solution. Appeals to “market fundamentals” or “getting back to normal” stabilize temporarily, and at best only temporarily. Such appeals do not and cannot resolve the baseline widespread uncertainty that corrodes each and every stabilization effort.

In a Girardian economics, the more uncertain things are, the more wealth we desire to buffer against that uncertainty; but the more wealth we have, the more desire we have for even more wealth. The specification of wealth itself becomes increasingly problematic as uncertainty persists. Once wealth “ceases to be identified with the instituted money, [economic agents] no longer know behind which mask it is hiding. Stocks, real estate, gold, foreign currencies, primary commodities, etc. attract the anxious attention of individuals looking for likely refuges from the ‘terrible oscillations of chance’,” writes the economist André Orléan.

This leads to what Girard calls a crisis of undifferentiation. Uncertainty becomes everywhere intensified; economic behavior grows more and more uniform; and ever more wealth becomes desired as “what is wealth?” becomes increasingly difficult to answer. Markets undergoing crises of undifferentiation—epidemics of contagion where everyone ends up imitating each other—are instances where we do not know enough to distinguish, in econo-speak, satisficing from maximizing or the second-best from the Pareto-optimal, and where no one is clearly right but where everyone hopes they are.

Girardian features of the 2008 financial crisis

This sense of free-fall and groundlessness is neatly captured in the comments of bankers and investors just before and during the 2008 collapse of Lehman Brothers. “It feels as if we are 15 minutes away from the end of the world,” the head of equities at a large U.K. bank told the Financial Times about the lead up to the first major U.S. bailout.

“The market has changed more in the past 10 days than it had in the previous 70 years,” reports a senior executive at a European investment bank in 2008. “We have no idea of the details of our derivative exposures and neither do you,” conceded a senior Lehman Brothers official at a meeting of bankers and regulators just before it collapsed. “The crisis continues because nobody knows what anything is worth,” said one informed observer. The chair of Morgan Stanley Asia concluded, “We have gone to the edge of an abyss that few thought was ever possible”. I can find no reports of financial experts appealing to “underlying” market fundamentals during these weeks.

For Girardians, people under these conditions—these crises of undifferentiation—respond by scapegoating. Scapegoating provides the certainty to move on. Reports from and about the last quarter of 2008, with the collapse of Lehman Brothers, the bailout of Freddie Mac and Fannie Mae, and the further bailouts of AIG and Citigroup, were replete with terms such as “panic,” “herd instinct,” “mob mentality,” “mob rule,” “witch hunting,” “finger-pointing,” “lynching,” and “show trials” along with the ubiquitous referencing of “scapegoats” and “scapegoating” (all terms from contemporaneous reports in the Financial Times).

Many commentators, of course, believed they were in fact correct in blaming this one or that one for the crisis. Girardians argue, in contrast, that the choice of scapegoat is completely arbitrary when pervasive uncertainty drives economic behavior. Some of this arbitrariness was witnessed in the belief that if no one is to blame, then everyone is. We were told “there is enough blame to go around for every one” and “we are all to blame for the meltdown.”

In heated financial markets where everyone is buying or selling at time t+1 because, well, most everyone was buying or selling at time t, there is no way to validate that selling save by stating it is what everyone else was and is doing. This point was famously made by Chuck Prince, former head of Citigroup, when he told the Financial Times in mid-2007, “When the music stops, in terms of liquidity, things will be complicated. But as long as the music is playing, you’ve got to get up and dance. We’re still dancing.”

But where’s the blood?

Its focus on ensuing violence, however, is what sets a Girardian economics apart from other contagion models. A full-blown Girardian economics, at least as I understand it, would hold that imitative behavior goes beyond scapegoating. It turns into mob behavior, not as a reporter’s hyperbole but in actuality. People are killed, and it is only after widespread violence that people respond in revulsion to their own behavior by forging social and economic conventions so that such violence can “never happen again.” In this view, new economic and financial institutions arise only after panic and mob-like behavior and the post-hoc rationalizations of what all the preceding “really” meant.

We certainly heard calls from politicians and regulators alike for “never again” when it came to the 2008 financial crisis. So too, a manager or two committed suicide or disappeared from the scene. But it is an odd sort of crisis when those harmed on such an unprecedented scale did not take to the streets screaming. In 2008, we witnessed food riots over crop prices but no real violence over this massive wealth destruction. Which prompts the question “Where’s the blood?”, as René Girard asked me when I presented my version of a Girardian economics at his Stanford seminar.

I suppose some of it is there if we look for it. As is already well documented, murders, suicides and violence do go up during a severe economic downturn like the one to which this financial crisis led. This, however, scarcely qualifies in Girardian economics as the mob behavior essential for the rise of new social conventions and institutions governing finance and economics.

So what happened?

In July 2009, former Treasury Secretary Henry Paulson testified before Congress on his involvement in the financial crisis. He admitted he had been deeply concerned about frightening the public if he expressed his real fears about the financial system unraveling: “[W]hen a financial system breaks down, the kinds of numbers that we were looking at in terms of unemployment was [sic] much greater than the numbers we’re looking at now. People in the streets, and of course, around the world—it was very significant and I remember talking about it…”.

But people did not take to the streets. Why?

Girardians, as I understand them, would resist two popular “answers”: (1) government interventions worked, and/or markets went back to fundamentals; and (2) it is too early to say how things are working out. As argued above, Girardians would have expected considerable violence during and after the events of September/October 2008, and there is no chance, as I understand them, that such reforms to the financial system as there were would ever make things more “certain” in the absence of scapegoating and ensuing violence.

My answer

There are at least four ways in which a crisis of economic undifferentiation could be delayed, albeit not averted, when it comes to market contagion. More, if I understand Girardians correctly, these four ways are the value added to the contagion models of the financial crisis already proposed by mainstream economists:

  • If you can’t reduce pervasive uncertainty, the next best alternative is to impede the resulting rivalry (“increase the costs of rivalry”);
  • If you can’t reduce the rivalry, the next best alternative is to impede the associated imitative behavior (“increase the costs of imitation”);
  • If you can’t reduce the rivalry or the imitation, the next best alternative is to foster and prolong differentiation (“decrease the costs of differentiation”); and
  • Lastly, if you cannot do any of the above, the alternative is to slow down or wait out the crisis of undifferentiation (“increase the costs of undifferentiation”).

These actions are, I believe, what has been happening by way of the financial and economic reforms undertaken since 2008. Their effect has been to delay the consequences of the financial crisis by sidelining the scapegoating. Let’s examine each in more detail:

Increase the costs of rivalry.

In the Girardian framework, markets are mechanisms to increase the transaction costs associated with rivalry, not decrease them, as conventional economic theory would have it.

Markets are what keep us from killing each other for the goods and services we desire. They transform us into price takers rather than commodity thieves. What happened in the lead up to the 2008 financial crisis was just such increased thievery (e.g., insider trading and predatory lending). Many existing and proposed reforms—most notably, increasing capital adequacy reserves in banks and lending institutions—have been intended to make the excesses of rivalry too costly to undertake.

But increasing the costs to rivalry poses a dilemma from a Girardian perspective. To increase their costs may lessen that rivalry, but the higher costs serve as an incentive for increasing the wealth needed to cover (buffer against) the now-higher costs associated with rivalry. For Girardians, it is no surprise that firms, such as Goldman Sachs and JPMorgan Chase, were driven to return to wealth-making faster than would have been expected given the economic conditions and liquidity shortages said to exist at the time.

Increase the costs of imitation.

The principal feature of the lead-up to the 2008 financial crisis was that costs of imitation were too low. Behavior, as many pointed out, became positively correlated, when finance theory insisted such behavior should have been uncorrelated through risk dispersion.

Instead of diversification and risk spreading, hedge funds and others ended up acting in very similar ways. Either “[t]oo many funds bought the same assets” or the “problem was that, while these assets are heterogeneous, the owners were not. In tough times they behaved the same way….Diversification was therefore fake”. “Far from promoting ‘dispersion’ or ‘diversification’ [financial] innovation has ended up producing concentrations of risk, plagued with deadly correlations,” according to a Financial Times’ correspondent at the time.

Calls for “increased transparency” are routinely given as the solution to this problem. Risk cannot be concealed or obscured if financial processes are transparent, so this argument runs. From a Girardian perspective, such calls are self-defeating. Greater transparency would reveal the financial system is transparently complex and in many ways visibly beyond human comprehension when it comes to measurable risk and unmeasurable uncertainty. At worst, everyone sees the system for what it is, a house of cards impossible to shrink through “better risk management” or shrink-wrap with “better macro-prudential regulation.”

Either way, calls for greater transparency would lead people to become even more rivalrous as they hunt for ever greater wealth to protect or buffer themselves.

Decrease the costs of differentiation.

Now things get really interesting. You saw everywhere in the 2008 financial crisis the insistence of major participants that each differed from the others and that they were not—repeat, not—all alike.

Hedge funds insisted they did not start the financial crisis but that banks and investment houses did; the latter institutions insisted they were not all the same, some were better (or accused of being worse) in managing securitized assets; not all securitized assets were the same—that is, all toxic; more, not all toxic assets were equally valueless; still others argued that it depended on the valuation procedure used and few agreed which was the better one; no over-arching agreement, moreover, because the regulators themselves did not agree…; and so on.

Against a Girardian background, this sustained insistence on differentiation, even as finance and banking were in the midst of uncertainty, is especially important to note. Circumstances remained, at least in the minds of the finance sector, differentiated in major forms before and through the crisis. Very different social conventions emerged with respect to financial transactions, and the conventions evolved and innovated at that time as they diffused through institutions and among their participants. While accusations of “You’re all the same!” reached near fever-pitch, banking and finance services were still far from being homogenous and uniform, even during the crisis and the Great Recession that followed.

In other words, the blame game remained cheap throughout the 2008 crisis: The costs of differentiation were lower than one would have expected in a full-blown crisis of undifferentiation. I return to this point in a moment.

Increase the costs of undifferentiation.

A last strategy is to wait out the financial collapse in the hope that the longer people hold out before the crisis of undifferentiation becomes total, the more likely undifferentiation will not be total nor the contagion completed in full-blown scapegoating. One way to make undifferentiation “cost more” is to fuel the rumor mill about the who, why, how, when and where of the financial crisis, since it takes time to settle a rumor. (Small beer, but beer nonetheless.)

Since 2008, we have had an incoming tide of books and publications that keep all manner of whodunit suspicions and fevers alive. Rather than narrowing down the identification of those who are “really” to blame, we have a surfeit of candidates said to have caused or contributed to the crisis. In fact, so many that some take the 2008 financial crisis to have been overdetermined. Instead of knowing who is to blame, we are encouraged to conclude, “With all that was going on, it would have been a miracle if the financial crisis didn’t happen!” Error here has many fathers when reliability is orphaned.

–In short, scapegoating has become difficult to complete during and since 2008, thereby defusing the prospect of violence and the rise of new financial institutions: . . . so far, Girardians underline. Yes, scapegoating has begun, some violence has been witnessed, but there has yet to be polarization around one scapegoat or a defined set of them. Or, from the other direction, such polarized agreement as has occurred has been more around phenomena—notably, rising inequality—than around specific groups or classes of agents.

Yet even if the financial crisis were not the one predicted by a purely Girardian economics—how could it be a crisis of undifferentiation and scapegoating without the violence?—it is remarkable how well the four types of interventions just described fit the course of events as we know them today.

–Which raises a last caveat. The 2008 financial crisis is a handy case study, not only because it’s part of living memory, but also because the literature that followed helps track the aftermath in terms of the four strategies discussed.

As such, arguably the best lesson learned so far is that widespread uncertainty cannot be assumed to be pervasive uncertainty. There are extreme events where widespread uncertainty comes to us as separable uncertainties—more differentiated and differentiable than might first be supposed.


[1] Conlisk (1980) wrote about the widespread importance of imitation in economic behavior. Topol (1991) focused explicitly on mimetic contagion in investment behavior. Scharfstein and Stein (1990) and Banerjee (1992, 1993) modeled herd behavior among investors. The critical-mass (“tipping”) models of Schelling (1978) and Akerlof (1984), as well as the “informational cascades” model of fads and cultural change developed by Bikhchandani, Hirshleifer and Welch (1992), captured the notion that, under uncertainty, economic agents end up copying each other’s behavior. Most famously, Nobel Laureate in Economic Sciences Robert Shiller (e.g., Shiller and Pound 1989; Shiller 1989, 2006) writes about and focuses on contagion models in investment and the strategic role of imitation among investors. He argues, for example, that the subprime mortgage crisis and the 2008 financial crisis that followed had a great deal to do with “the contagion of market psychology” that led to bubbles under the boom conditions of the turn of the century (Shiller 2008). More recently, Shiller (2019) has focused on the role of narratives in the spread of and response to market contagion and crises.

[2] Not all economists who rely on the Girardian framework focus on violence as the instigator of new economic arrangements. Scholars such as Jean-Pierre Dupuy, Mark Anspach, Paul Dumouchel, and André Orléan, among others, have applied aspects of Girard’s contagion model to economics and related topics. In my view, the most notable application is that of the economist André Orléan in his The Empire of Value: A New Foundation for Economics (translated by M.B. DeBevoise, 2014, The MIT Press: Cambridge, MA). Violence is not a key feature of his analysis of money and the 2008 financial crisis in that book. (See also Orléan 1988, 1989, 1992a,b, 1998.)


Akerlof, G., 1984. A theory of social custom, of which unemployment may be one consequence. In An Economic Theorist’s Book of Tales. Cambridge University Press, Cambridge.

Banerjee, A., 1992. A simple model of herd behavior. Quarterly Journal of Economics 107, 797-817.

————, 1993. The economics of rumours. Review of Economic Studies 60, 309-327.

Bikhchandani, S., D. Hirshleifer and I. Welch, 1992. A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy 100, 992-1026.

Conlisk, J., 1980. Costly optimizers versus cheap imitators. Journal of Economic Behavior and Organization 1, 275-293.

Orléan, A., 1988. Money and mimetic speculation. In P. Dumouchel, editor. Violence and Truth. Stanford University Press. Stanford, CA.

————, 1989. Mimetic contagion and speculative bubbles. Theory and Decision 27, 63-92.

————, 1992a. The origin of money. In F. Varela and J-P Dupuy, eds. Understanding Origins. Kluwer Academic Publishers. Netherlands.

———— (co-authored with Robert Boyer), 1992b. How do conventions evolve? Journal of Evolutionary Economics 2, 165-177.

————, 1998. Informational influences and the ambivalence of imitation. In: J. Lesourne and A. Orléan (Eds.) Advances in Self-Organization and Evolutionary Economics. Economica: London.

Roe, E., 1996. Sustainable development and Girardian Economics. Ecological Economics 16, 87-93. The article is the principal source for this blog entry, though material from the original has been updated substantially.

Scharfstein, D. and J. Stein, 1990. Herd behavior and investment. The American Economic Review 80, 465-479.

Schelling, T., 1978. Thermostats, lemons, and other families of models. In Micromotives and Macrobehavior. W.W. Norton and Company, NY.

Shiller, R., 1989. Stock prices and social dynamics: Fashions, fads, and bubbles in financial markets. In Market Volatility. The MIT Press, Cambridge, MA.

————, 2006. Irrational Exuberance. 2nd Edition, Paperback, Broadway Business.

————, 2008. The Subprime Solution: How Today’s Global Financial Crisis Happened, and What to Do about It. Princeton University Press: Princeton, NJ.

————, 2019. Narrative Economics: How Stories Go Viral & Drive Major Economic Events. Princeton University Press: Princeton, NJ.

Shiller, R. and J. Pound, 1989. Survey evidence on diffusion of interest and information among investors. Journal of Economic Behavior and Organization 12, 47-66.

Topol, R., 1991. Bubbles and volatility of stock prices: Effect of mimetic contagion. The Economic Journal 101, 786-800.

The school economist


When rolling electrical blackouts take place, we ask our friends, the economists, why.

After a blackout, one of them tells us it was because of all that underinvestment in the transmission grid you get when treating the grid as a public good. During a blackout, another assures us that having to shed load reflects the negative externalities associated with prices not reflecting electricity’s full cost to consumers, who “thus” over-consume and overload power lines.

Before a blackout, a different economist says energy deregulation will guarantee the reliability we want because it reflects the Efficient Market Hypothesis–remember, the idea that won a Nobel Prize–where nothing can be better than market prices in reflecting what is known about energy supply and demand, like our willingness to pay for transmission.

To which still another adds: Whether or not there is a blackout at all, rational expectations theory–remember, the idea that won another Nobel Prize–tells us that policy interventions are ineffective anyway.

If we aren’t sufficiently convinced and press our friends about what we should do to prevent blackouts altogether, they tell us not to worry—as long as electricity services are in market equilibrium, with reserve margins optimal, everything is okay.


But, hey, why ask economists at all? They’ll tell you no one listens to them anyway.

“Energy deregulation was never really tried,” they plead. Society never really tried thorough-going cost-benefit analysis, economics in law and regulation, market designs engineered for efficiency, and far greater use of RCTs (as in: Economists are society’s plumbers–so please, don’t step on our randomized controlled trials!).

And when an economic market design is adopted whole hog, we have the 2021 Texas power and grid debacle: In the view of the Harvard economist who designed it, the energy market “worked as designed.” “It’s not convenient,” he added. “It’s not nice. It’s necessary.”

All of which is a bit like wanting to believe the Cultural Revolution would have succeeded if only Madame Mao and the Gang of Four were really given the chance.