Better fastthinking in complex times

–The ability to think fast on one’s feet has always been at a premium. Indeed, fastthinking has been the order of the day for those policy types who operate under the yin and yang of crisis management and leaving important decisions to the last minute.

Clearly, fastthinking is not conventional trial-and-error learning, as the conditions for such learning––low environmental uncertainty, stability in goals and objectives, and persisting institutional memory––are missing from much of the relevant policy world. Nor is it the message-in-the-bottle (Flaschenpost) approach, where you do your research, throw it upon the still waters of hard drives, and hope that someone, somewhere, sometime, retrieves it and treats your message seriously.

Fastthinking is just-in-time thinking to match just-in-time schedules in just-interrupted task environments. That’s the upside. The downside is that timely feedback, prompt response and rapid adaptation are purchased by discouraging (more) deliberation and reflection. The common remedy recommended: Slow fastthinking down. Be deliberative. Think things through. But that’s the problem: We have less time to slow things down, and even less time to make the decisions.

–What to do then? Focus here on one principal effect of fastthinking that is likely to stay around indefinitely: namely, the greater the pressure to take decisions now, the greater will be the pressure to rely on existing policy narratives. Where so, it seems obvious to me that the policy narratives we rely upon have to become better, which is to say more complex.

In my view, a better policy narrative meets three criteria:

  • The narrative—its story with beginning, middle and end, or argument with premises and conclusions—is one that takes seriously that the policy or management issue is complex, uncertain, interrupted and/or conflicted.
  • The narrative is one that also moves beyond critique of limitations and defects of the reigning policy narrative (criticisms on their own increase uncertainties when they offer no better storyline to follow).
  • The narrative tells a better story than the reigning narrative(s). It gives an account that, while not dismissing or denying the issue’s difficulty, is more amenable or tractable to analysis, policymaking and management. Indeed, the issue’s complexity offers up opportunities to recast a problem differently and with it, potential management options.

–With that in mind, let me cut to the chase with two examples of what I mean by more complex policy narratives tailored to fastthinking, in this case in the environmental arena:

1. All major ecosystems are complex, and none more so than the planet as an entire ecosystem. Ecosystems are being managed poorly, but there are ways to take action now, in advance of the results of long-term research, study and experimentation. Much more needs to be done to bring ecologists (including conservation biologists, climatologists, and hydrologists, among other natural scientists) into the direct operations of large-scale systems. There, ecologists would not only be better positioned to undertake or promote long-term and large-scale studies and interventions, but, more important, to provide real-time (a.k.a. fastthinking) advice for real-time problems affecting critical services, including but not limited to water and energy, based in ecosystem processes and services.

2. Think of advanced ecological management as utilizing authoritative websites, one of which might be http://www.ecological_management.org, maintained by, say, the Ecological Society of America [or other organization/country of interest].

An authoritative website provides sought-after, up-to-date and linked knowledge so quickly and reliably that it is continuously browsed by increasing numbers of users who click on the website early and often in their search for on-point information, in this case about ecology-based management. These websites do not pretend to provide final or definitive information, but rather seek to assure and ensure the quality of topical information that is continually updated.

The website serves as a clearinghouse that encourages cross-checking and tailoring of information on ecological management, while also acting as a springboard for future information search and exchange. It is popular because it reduces the number of steps it takes to move from place to place in search of salient information.

In this scenario, the analyst or manager starts her analysis on ecology-based management by searching http://www.ecological_management.org. She goes to the website on the well-established principle that information becomes increasingly policy or management relevant when the people gathering the information are the ones who actually end up using that information. That is, the authoritative website is constructed and maintained to make searching and browsing easier for the policymaker herself.

Do such websites already exist for ecological and environmental managers (let alone for other major policy and management issues)? When it comes right down to it, do we find many real-time ecologists in infrastructure control rooms across the world?

Not “Why don’t they listen to us?” but rather: “What should we listen for from them. . .”

We want policymakers and politicians to treat our research and analysis seriously, but we rarely turn the cart around and ask: What more should we be listening for from them beyond the substance of what they are saying? It’s not just what they say but how they say it.

How would we identify those who talk as if they’d listen to what we have to say? How do we identify policy types for whom no amount of our research and analysis would ever be sought? In short, what are we missing that’s right in front of us as they articulate what they’re saying?

For me, two sets of positive statements stand out as indicating the kind of receptivity to research and analysis we would like to hear from policy types:

“with respect to,” “under what conditions,” “this is a case of”. For example, it’s risks and uncertainties with respect to these failure scenarios and not those that we should be worried about. It’s under those conditions and not these that we take action. What we are talking about is something different, its being a case of . . .

“Here’s our track record…,” “Here are our measures of success…or failure”. Did what actually happened match what was originally proposed? Or, how does what actually happened compare to the success record of others in like situations? Or, what would have happened had the policy not been implemented?

These statements (and variants) reduce to versions of “yes, but” or “yes, and,” and in so doing indicate the willingness and the ability of the speakers to identify differences that matter for policy and management.

What, though, about the negative statements to be listened for? Am I the only one who trembles when some senior government official says of a particularly tricky state of affairs, “We need to clear the table and make a fresh start”? Dangerous dumbing down is occurring when you hear this and the like from policy types:

–“It’s a win-win, so who can be against it?” (when everyone within hearing distance knows winners rarely if ever compensate losers), “We just need the political will” (when obviously we’ve had too much political will in committing to anything and everything), “If implemented as planned” (when the entire point is you cannot assume any such thing); and

–“It’ll pay for itself” (when costs, let alone benefits, can’t be measured, aren’t evenly distributed nor even collectively borne), “We must do this at all costs” (when what the policy types are really doing is refusing to tell you the likely ones), and “Failure is not an option” (when failure is always a very real possibility in complex situations).

And yes, we did better in the Marshall Plan, the Moon landing, or some other standalone success. But there are no guarantees that “just because” we did that once, we’re able to do it for an entirely different type of problem, like eliminating racial discrimination or income inequality. Instead, what we want to hear from policy types is, “Here’s what to do even now. . .”

It’s no one else’s responsibility but ours to sharpen our listening-out-for skills when it comes to policy talk. The duty is to listen out for those willing and able to dial in details for the very different answers to: What do we know? What should we do? What can we hope?

(Special thanks to Paul Schulman in thinking through and wording some points.)

Killing cognitive reversals

What else can we do, the senior executives and company boards tell themselves, when our entire business is on the line? We have to risk failure in order to succeed.

But what if that business is in a critical service sector? Here, when upper management seeks to implement these risk-taking changes, they rely on middle-level reliability professionals, who, when they take risks, only do so in order to reduce the chances of failure. To reliability-seeking professionals, the risk-taking activities of upper management look like a form of suicide for fear of death.

–When professionals are compelled to reverse practices they know and have found to be reliable, the results are deadly. In the Challenger accident, engineers had been required, up to the day of that flight, to show why the shuttle could launch; on that day, the decision rule was reversed to one of showing, “beyond a shadow of a doubt,” why launch couldn’t take place.

Once it had been good bank practice to hold capital as a cushion against unexpected losses; new capital security arrangements mandated that banks hold capital against losses that must be expected from their high-risk lending. Also contributing to the 2008 financial meltdown was that mortgage brokers traditionally made money on the performance and quality of the mortgages they made; then their compensation changed to one based on the volume of loans originated and then passed on.

The Deepwater Horizon rig had been drilling an exploration well at the Macondo site; that status changed when, on April 15, 2010, BP applied to the U.S. Minerals Management Service (MMS) to convert the site to a production well, a change approved by the MMS. The explosion occurred five days later.

–In short, there is ample evidence that reversals of important decision rules, requiring professionals in high-stakes situations to turn inside out the way they had managed for reliability, have led to system failures and more: NASA was never the same; we are still trying to get out of the 2008 financial mess and the Great Recession that followed; the MMS disappeared from the face of the earth.

Forcing cognitive reversals on reliability professionals and operators—that is, exiling them to conditions they do not know but are told they must nonetheless be skilled for—is the surest way to throw acid into the face of high reliability management.

–“But,” you counter, “that’s a strawman. Of course, we wouldn’t deliberately push reliability professionals into unstudied conditions, if we could avoid it.”

Really?

The often-heard and oft-recommended approach, Be-Prepared-for-All-Hazards, looks like the counsel of wisdom. It is, however, dangerous if it requires emergency and related organizations to cooperate in ways they currently cannot, using information they will not have or cannot obtain, for all manner of interconnected scenarios which, if treated with equal seriousness, produce considerable, if not massive, modeling and analytic uncertainties.

An all-hazards approach, if actually implemented, pushes professionals mandated to be reliable into having “to operate a critical infrastructure in prolonged unstudied conditions,” a management contradiction if ever there was one.

Table of key entries

Most Important: “What am I missing?,” “Complexity is the enemy of the intractable,” “Power,” “Interconnected?,” “I believe,” “Wicked problems,” “Even if what you say is true as far as it goes, it doesn’t go far enough…,” “Time as sinuous, space as interstitial: the example of total control,” “Keeping it complex. . .,” “Long-terms, short-terms, and short-termism,” “More on over-complexification,” “Playing it safe, utopia,” “Case-by-case analysis: realism, criteria, virtues,” “Not ‘Why don’t they listen to us?’ but rather: ‘What should we listen for from them. . .’”

Recasting big policy issues: “Poverty and war,” “Second thoughts on income inequality,” “Surprising climate change,” “COVID-19,” “Missing racism,” “Healthcare,” “To-do’s in the Anthropocene,” “The market failure economists don’t talk about: Recasting infrastructures and the economy,” “Culling sustainability,” “In a failed state,” “Revolts,” “A colossal inheritance,” “Wicked problems,” “Making the best of linear thinking, complexly: typologies for reframing ‘coordination’,” “Government regulation,” and Longer Reads (below)

More recastings: “Policy narratives,” “America’s and Trump’s,” “Recastings #1,” “When the light at the end of the tunnel is the tunnel,” “Loose ends, #3,” “Public Policy Analysis, c.1970 – c.2020: In Memoriam?,” “Sound familiar? Here’s why,” “A grammar of policy analysis,” “Bluejays, fists and W.R. Bion,” “Policy as magical thinking,” “A different take on ‘traditional agriculture:’ risk-averse v. reliability-seeking”

Not-knowing and its proxies: “Seeing unknowns,” “Inexperience and central banks,” “Managing inexperience,” “Difficulty at risk and unequal,” “By way of distraction…,” “Shakespeare’s missing lines still matter”

Ignorance and uncertainty: “When ignorance does more than you think,” “Optimal ignorance,” “Uncertain superlatives,” “Stopping rules and contested regulation,” “To-do’s in the Anthropocene,” “Why aren’t they all running away!,” “Yes, ‘risk and uncertainty’ are socially constructed and historicized. Now what? The missing corollary and 3 examples,” “Killing cognitive reversals”

Risk, resilience and root causes: “A new standard for societal risk acceptance,” “Easily-missed points on risks with respect to failure scenarios and their major implications,” “Risk criteria with respect to asset versus system scenarios,” “Half-way risk,” “Central role of the track record in risk analysis,” “Resilience isn’t what you think,” “Root causes,” “Frau Hitler, again,” “With respect to what?,” “Yes, ‘risk and uncertainty’ are socially constructed and historicized. Now what? The missing corollary and 3 examples”

Regulation: “A few things I’ve learned from the Financial Times on regulation,” “Government regulation,” “Stopping rules and contested regulation”

Infrastructures: “The real U.S. infrastructure crisis,” “Innovation,” “Take-home messages,” “Who pays?,” “When high reliability is not a trade-off,” “The market failure economists don’t talk about: Recasting infrastructures and the economy,” “When ignorance does more than you think,” “Catastrophizing cascades,” “Healthcare,” “Interconnected,” “Stopping rules and contested regulation,” “Achilles’ heel of high reliability management,” “Where distrust and dread are positive social values,” “To-do’s in the Anthropocene,” “Government regulation,” “Killing cognitive reversals”

Environment: “New environmental narratives for their End Times (longer read, consolidated from following entries),” “Nature,” “Tansley’s ecosystem,” “Radical uncertainty and new environmental narratives,” “Eco-labelling recasted,” “European Union Emissions Trading Scheme, Scenes I and II,” “To-do’s in the Anthropocene,” “Dining on gin and consommé,” “Culling sustainability,” “Lifecycle modeling of species,” “Better fastthinking in complex times”

Catastrophe and crisis: “Catastrophizing cascades,” “Jorie Graham’s systemcide,” “The shame of it all,” “Next-ism,” “The future is the mess we’re in now,” “Killing cognitive reversals”

More mess, good and bad: “A different take on the traffic mess,” “Happiness: The mess,” “Who pays?,” “Misadventures by design,” “. . .and raise my taxes!,” “Loose ends, #2,” “Top-of-the-list thinking,” “Take-home messages”

Betterment and good-enough: “Betterment as ‘yes-but’ through ‘yes-and’,” “It’s better between the James brothers,” “Good-enoughs,” “Good-enough dreamers,” “Professional, amateur, apprentice; Or, As good as the fingernails of Manet,” “‘at sea,’ ‘from on high’,” “Betterment (continued),” “Better fastthinking in complex times”

Policy palimpsests and composite arguments: “Take home messages,” “Blur, Gerhard Richter, and failed states,” “Time as sinuous, space as interstitial: the example of total control,” “More on policy palimpsests: The European Union Emissions Trading Scheme, Scenes I and II,” “Shakespeare’s missing lines still matter,” “Bluejays, fists and W.R. Bion,” “Reflection and sensibility,” and other Longer Reads (below)

Economism: “Economism,” “Keep it simple?,” “Loose ends, #1,” “When high reliability is not a trade-off,” “Short and not sweet,” “The missing drop of realism,” “The market failure economists don’t talk about: Recasting infrastructures and the economy”

Longer Reads: “Ammons and regulation,” “The next Constitutional Convention,” “Recalibrating Politics: the Kennedy White House dinner for André Malraux,” “Blur, Gerhard Richter, and failed states,” “A consultant’s diary,” “A different take on The Great Confinement,” “Market contagion, financial crises and a Girardian economics,” “New environmental narratives for their End Times (consolidated from Environment entries),” “New benchmark metrics for major risk and uncertainty (consolidated from entries for Risk, resilience and root causes)”

Something less complex?: “Red in tooth and claw,” “What kdrama has taught me,” “The irony of it all,” “Dining on gin and consommé,” “Five questions everyone should want to answer”

A different take on “traditional agriculture”: risk-averse v. reliability-seeking

–A risk-averse farmer keeps multiple varieties of crops, livestock and/or sites so that, if one fails, s/he has others to fall back on. The more crops, livestock and sites a farmer can muster and maintain, the greater the chances s/he won’t lose everything. Where possible, the risk-averse farmer avoids hazards whose probabilities and uncertainties cannot be managed so as to maintain a survival mix of crops, livestock and productive sites. The risk-averse farmer faces a carrying capacity that sets exogenous limits on the total crops and livestock produced.

–A reliability-seeking farmer keeps multiple varieties of crops, livestock and/or sites because any single resource—e.g., the soil that sustains the crop, site and livestock—is managed better if it provides multiple services. The more crops, livestock and sites a farmer can muster and maintain, the greater the chances s/he can meet peak demands made on his or her production system. The reliability-seeking farmer seeks to manage the probabilities and uncertainties of hazards that cannot be avoided so as to maintain a peak mix of crops, livestock and sites. The reliability-seeking farmer faces a carrying capacity whose endogenous limits are set by farmer skills for and experience with different operating scales and production phases.

–Farming behavior, no matter if labelled “traditional” or “subsistence,” that is developed around high technical competence and highly complex activities; that requires high levels of sustained performance, oversight and flexibility; that is continually in search of improvement; that maintains great pressures, incentives and expectations for continuous production; and that is predicated on maintaining peak (not minimum) livestock numbers in a highly reliable fashion, without threatening the very limits of system survival, is scarcely what one would call “risk-averse.”

Case-by-case analysis: realism, criteria, virtues

Realism

–I remember reading there were eight principal approaches to government-as-economic-policymaker—

  1. Government as an exogenous black box
  2. Government as a neutral extension or aggregation of private choice
  3. Government as a nonneutral decision-making or preference-aggregating process
  4. Government as an instrument of the powerful
  5. Government as an instrument with which to check the power of the powerful
  6. Government as the source of problems, if not of evil, in society
  7. Government as the source of progress
  8. Government as part of the necessary framework of the market.

—and thinking, “Really, there are four approaches only. . .”

In semiotic terms—a thing is defined by what it is not—there’s one approach centering around the dimension of neutrality and its opposite (2 & 3), another around that of the instrumentation of power and its counter (4 & 5), and the third around retrogression and progress (6 & 7). Numbers 1 and 8, while not mirror images, represent a sharp contrast between the known market and the unknown black box.

This semiotic reading underscores that approaches to government economic policymaking need not have been determined deliberately by policymakers. Structurally, to have one approach entails its opposite, if simply by stating what the former is not.

–Similar semiotics are found at work in the recommended steps to undertaking a policy analysis: define the problem, assemble the evidence, identify the alternatives, agree on the methods and criteria to evaluate the alternatives, evaluate the alternatives in light of their projected consequences, decide on your recommendation, and then communicate it in a way that is understood and actionable.

The steps are thus also coupled—problems and solutions, like alternatives and consequences, are reciprocally defined. But what is of interest here is how a step-wise approach entails its own frustration. Not only does taking a step imply that the step can be frustrated, but taking that step might frustrate subsequent steps.

–It’s this marked opposition—e.g., define/not define, and so on for other steps—that gives the stepwise approach its realism.      

The more steps and the more time taken in addressing the issues, the more realism is conveyed by that addressing. The more time the steps take, the more difficult it all is, the more real it all becomes. Too many activities also reduce the conditional probability that any one of them will reach a conclusion. In short, the more real these issues become in this way, the less chance they have of being real in any other way. The issue becomes its own case, to be evaluated in its own right.

Criteria

The traditional ex ante (before the event) criteria for a policy proposal are economic efficiency or cost-effectiveness, political and administrative feasibility, equity and legality, among others. When it comes to ex post (after the event) evaluation of actual policy performance, the criteria narrow down to variations of: Did policy implementation match the mandated goals of the policy? Complex policy and management issues are, moreover, complex precisely because we must expect that the goals will change, even during implementation.

Or to put that point differently, there are always different ex post criteria to evaluate any complex policy, five important ones being:

  • in terms of whether its implementation achieved its stated objectives;
  • against some ideal, which the policy’s objectives may or may not match;
  • against the implementation record of like policies;
  • in terms of what would have happened had not the policy been in effect (the “counterfactual”); and
  • in terms of whether savings could have been realized if the policy had been more cost-effectively undertaken.

The very process of identifying which of these is or are the most important is often what is meant by “determining the weight of the evidence” or “deciding the case on its own merits.” Such is why complex policy analysis is a form of case-by-case analysis.

Virtues

–An earlier blog sketched some of the practical advantages of the case approach. Let me now provide more flesh and bone (in no order of priority):

  • You get to see and show how theory-based taxonomies and conceptual frameworks go only so far but rarely far enough when applied in the field;
  • You get to see not just politics but power at work—which is key for those of us loath to talk abstractly about a bracketed-off [Power];
  • You get to see that policy and management are more complicated than politics, dollars and jerks. You get to see how easy it is to mistake the noise and mess for the “intentions” of those involved. Other explanations—bureaucrats were mindlessly following rules—also turn out to be more complicated on closer examination;
  • The case level helps you differentiate and see some things afresh, as if for the first time, and about as close to “objectivity” as you get;
  • If your case study is across time, you’ll observe not just “normal” periods but also disruptions and more. (Here your career, and not just a specific task or job, is “the case.”) When it comes to policy and management, you’ll be able to map out the different positions and standpoints taken with respect to the efficacy of macro-design, micro-operations, system-wide pattern recognition and anticipation, and more localized contingency scenarios; and
  • It will come as no surprise that actual practice, actual behavior and actual implementation in the individual case study differ from the principles, policies and plans said to govern them. This is so unexceptional that when things do work as planned, it should come as a surprise worthy of its own research and study.

–If you were to summarize the six bullets, focusing at the case level brings a frankness and seriousness to your analysis and advice—what the ancients highly prized as parrhesia or frank counsel—about complications of power, context, implementation, and macro-design.

Things are more complex than we thought and because they are complex, they can be recast and seen in a new light. This is, I believe, the great virtue of the case approach: Seeing something extremely difficult in a fresh and productive light.

An example. I attended a presentation on an ecosystem restoration project in Montana. One of the project leaders described what was for him the key contribution of ecosystem management: He could now see how his forest acres in the valley fit into the wider landscape. The approach gave him a way to integrate the small and large scale, with cross-scale implications both ways.

In effect, his management perspective recoupled his acres to the landscape across scale and across time. Not only could the project leader stand in his woodlot and see how it fit in with the larger scale of the ecosystem and landscape, he was able to plan at the smaller scale for the longer term. He was able to expect a future for the forest. So too for the ecosystem manager standing at the ridge overlooking the valley, as she looks down and is able to plan at the larger scale for the shorter term. She can now see what the next steps ahead are when it comes to managing the entire ecosystem.

“Think globally, act locally” is recast as, “Think long term from the small scale, act real time from the large scale.” In this way, recoupling at the case level ends up recasting what had hitherto been an obviously coupled world, but shorn of these crucial details and specifics.

Lifecycle modeling of species

–Say you are involved in modeling the lifecycle of a listed species. You and your colleagues rightly start out ambitious by aiming to develop and then integrate sub-models for species reproduction, period-to-period, region-based species survival, movements between regions, and juvenile/adult mortality due to exogenous factors, such as human-made disaster.

It doesn’t take long to confirm what you and your colleagues suspected anyway: not only do pertinent data not exist, but modeling uncertainties and errors work against integrating current sub-models into a comprehensive lifecycle model (LCM).

Thereafter, with time and funding, you and your colleagues develop much reduced versions, called LCM1, LCM2 and now LCM3, each bringing to light further refinements and significant methodological and data issues. You embark on developing LCM4 in the hopes that the research team—again, funding permitting—is moving closer to identifying management interventions for the species.

The many technical reports (now approaching 50 in number) produced during the decade of research track the refinements, improvements, insights and difficulties in modeling species reproduction, movement and survival rates. The peer-reviewed literature on lifecycle models has been advanced in the view of many outside experts by this research.

–Unfortunately, for a variety of reasons, none of the reports identifies modeling and data uncertainties in a way that allows them to be contrasted with the uncertainties and errors made in the existing comprehensive model for managing said species.

What “comprehensive model,” you ask? Didn’t I say there was no comprehensive lifecycle model? I forgot to tell you that, during all the years of the modeling research, real-time deliberations of interagency staff and scientists continued, with really-existing decisions made, period by period, over the management of said species.

From time to time the consequences of the management actions find their way into a technical report, but even here modeling uncertainties hold center-stage: “Though it is tempting to interpret declines in estimated [mortality] as evidence of management success, models of population dynamics are required to disentangle. . .”

–You’d think that the burden of proof would be on the modelers to demonstrate that reliance on lifecycle models leads to better results compared to the next best alternative of current interagency deliberations of scientists and support staff. . .

But, not to worry: The judge who mandated the research in the first place asserted way back when: “All experts agree that application of a lifecycle model is the accepted method for evaluating the effects of an action upon a population’s growth rate.”

This means all we need do is assume management isn’t improving faster than the modeling. And what could make more sense in reality than doing what is so needed in theory?

New benchmark metrics for major risk and uncertainty (longer read)

Starting point for policy and management with respect to risk and uncertainty. The methodological demand is always: First differentiate! Differentiation matters especially with respect to risk and uncertainty. There is no such thing as risk or uncertainty on its own; it is always risk or uncertainty with respect to something.

The language of risk and uncertainty is now so naturalized it always seems the obvious point of departure, like filing alphabetically or chronologically: “The first thing we have to do is assess the risks of flooding here…” No. The first thing you do is detail the with-respect-to scenarios of interest.

To start with, you identify the boundaries of the flood system as it is actually managed, and then the standards of reliability to which it is being managed (namely, which events must be precluded or avoided by way of management), and from these you derive the specific risks to be managed in order to meet the standard(s). The risks follow from the standard to be met for the system as bounded for management in real time.

Why is this important? It means that benchmarks or metrics for risk and uncertainty are all about the details in the with-respect-to scenarios.

An example. Focus on an island in the western California Delta–for example, Sherman Island–and consider criteria that engineers rely on for establishing priorities with respect to reducing levee fragility there (the island’s encircling levees are needed because its productive areas are considerably below water level):

  • Criterion 1. Levee fragility priority can be set in terms of the weakest stretch of levee around the island, i.e., the stretch of levee that has the highest probability of failure (Pf). This has obvious implications for collocated elements from different infrastructures, e.g., a very high levee Pf should counsel against plans to place, say, a huge chemical tank facility next to it. (You’d assume common sense would commend this as well.)
  • Criterion 2. Levee fragility priority can be set in terms of the stretch with the highest loss of life (and/or other assets) arising from levee failure. If the levee breaches where most island residents live, then there is less time for evacuation. Clearly, consequences of failure (Cf) are important here, and this criterion is about the levee stretch that has the greatest risk of failure, not just probability of failure. (Risk here is the product of Pf times Cf.)

Sherman Island’s weakest levee stretch, at the time of our research, was said to be on the southwest part of the island; the stretch with the greatest potential loss of life appeared to be on the eastern and southeast side, with more residences. Other factors held constant and from the perspective of Criterion 2, it is in fact better that the weakest stretch of levee (according to Criterion 1) is on the other side of the island, so as to ensure more time for evacuation.

–A third criterion, in contrast, reflects the extent to which the levee infrastructure of the island is part and parcel of a wider interconnected critical infrastructure system (ICIS):

  • Criterion 3. Levee fragility priority can be set in terms of the stretch that poses the greatest risk to the entailed ICIS. ICIS risk of failure is not the same as risk of levee failure only, as stretches of Sherman Island levees are in fact not just elements in the levee system there but also elements in other critical infrastructures. With respect to Sherman Island, there is the levee stretch with Hwy 160 on top; there are other stretches serving as the waterside banks of the deepwater shipping channel; another stretch serves to protect a large wetland berm (as fishing and bird habitat). If those stretches of levee fail, so too by definition do elements fail in the deepwater shipping channel, Hwy 160 or the Delta’s threatened habitat.

Criterion 3 asks: What is the effect on the road system or shipping system or wetlands ecosystem when that shared ICIS element on Sherman Island fails? If a stretch of Hwy 160 fails, road traffic in the Delta would have to be detoured; if a stretch of the deepwater shipping channel fails, shipping traffic would have to be rerouted to other ports; and so on. In some cases the service cannot continue because there is no default option, e.g., the Sherman Island wetlands berm, in terms of its habitat and fish, can’t be “rerouted” were the protective levee to fail.

What infrastructure system that shares one or more ICIS elements on Sherman Island would be affected the most, in terms of increasing the probability of its failing as a system, were such Sherman Island levee stretches to fail? The answer: a levee breach anywhere on Sherman Island would increase the probability of closing the key pumps for the State Water Project. That is, the Pf of the state and federal water projects would increase were Sherman Island to flood, because saltwater would be pulled further up from the San Francisco Bay into the freshwater Delta.

–In sum, the three with-respect-to risk assessment criteria—others are possible—differ appreciably as to where risk analysts focus attention in terms of levee fragility: the weakest stretch (Pf) may not be the same stretch whose failure would have the greatest loss of life and property (Cf), while any stretch that failed would pose the greatest ICIS risk (namely, the probability that an ICIS element failing increases the probability of failure of one or more of the constituent systems sharing that element).
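To make the contrast concrete, here is a minimal sketch (in Python) of how the three criteria can single out different priority stretches. The stretch names and all numbers are invented for illustration only; they are not data from the Sherman Island research.

```python
# Hypothetical levee stretches with an assumed probability of failure (Pf),
# an assumed consequence for loss of life (Cf_life), and an assumed consequence
# for the wider interconnected critical infrastructure system (Cf_icis).
# Cf_icis is the same for every stretch: a breach anywhere floods the island
# and threatens the water-project pumps.
stretches = {
    "southwest":       {"Pf": 0.030, "Cf_life": 10,  "Cf_icis": 100},
    "eastern":         {"Pf": 0.015, "Cf_life": 200, "Cf_icis": 100},
    "hwy_160_stretch": {"Pf": 0.010, "Cf_life": 50,  "Cf_icis": 100},
}

# Criterion 1: the weakest stretch, i.e., highest Pf on its own.
c1 = max(stretches, key=lambda s: stretches[s]["Pf"])

# Criterion 2: the stretch with the greatest risk to life, risk = Pf x Cf.
c2 = max(stretches, key=lambda s: stretches[s]["Pf"] * stretches[s]["Cf_life"])

# Criterion 3: the stretch with the greatest ICIS risk. With Cf_icis shared
# across stretches, the ranking again follows Pf, but the consequence being
# ranked is a different one.
c3 = max(stretches, key=lambda s: stretches[s]["Pf"] * stretches[s]["Cf_icis"])

print("Criterion 1 priority:", c1)
print("Criterion 2 priority:", c2)
print("Criterion 3 priority:", c3)
```

With these made-up numbers, Criteria 1 and 3 point to the weakest (southwest) stretch while Criterion 2 points to the eastern stretch: the kind of divergence in priorities the three criteria produce in practice.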

You would expect that calls for more and more “inter-organizational coordination” would have to be prioritized in light of these criteria distinctions. You’d be wrong. Criterion 3 was altogether outside conventional remit for risk assessment and management up to and at the time of the research.

Broader methodological implications of risk and uncertainty with-respect-to scenarios. Before proceeding to new metrics based in benchmarks for such risks, uncertainties and criteria, it is important to tease out what we mean and imply by “with respect to” in more general methodological terms:

  1. If you define risk of failure as the product of the probability of failure (Pf) times the consequences of failure (Cf), then Pf and Cf are NOT independent of each other, as conventional risk analysis would have it.

Both are connected indirectly by the “intervening variable” of their failure scenario. It’s Pf and Cf with respect to the same failure scenario. It’s the failure scenario that details the operative (1) reliability standard (are you seeking to preclude specific events or to avoid them where possible; are some events inevitable or compensable after the fact), (2) evaluative criteria (are you managing Pf [probability] only, or both Pf and Cf [risk]), and (3) the system being managed (are you managing, e.g., within a single infrastructure or across different infrastructures).

Accordingly, the more granular the failure scenario (the greater the detail about the above), the more likely it is that Pfs and Cfs are directly interconnected. In the most obvious case of inter-infrastructural cascades, one consequence of infrastructure1 failing (Cf1) may be to increase infrastructure2’s probability of failure (Pf2).

This is why a risk estimate must never be confused with being a prediction, i.e., “if the risk is left unattended, failure is a matter of time.” Even were Pf and Cf not interconnected, the efficacy of prediction depends on how detailed the with-respect-to scenario is. The function of the failure scenario is to identify and detail (if not isolate) conditions for cause and effect upon which prediction is or is not made possible. You will note that many “large system failure scenarios,” a.k.a. crisis narratives, are devoid of much detail when it comes to the operative reliability standards, evaluative criteria and (sub)systems to be managed.
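As a minimal numerical sketch of the interconnection point (all numbers invented): when one consequence of infrastructure1 failing is to raise infrastructure2’s probability of failure, a risk estimate for infrastructure2 that treats the two systems as independent understates the risk with respect to the shared failure scenario.

```python
# Invented numbers only: a two-infrastructure cascade in which the consequence
# of infrastructure 1 failing (Cf1) raises infrastructure 2's probability of
# failure (Pf2), so Pf and Cf are not independent across the failure scenario.

pf1 = 0.02           # assumed probability infrastructure 1 fails
pf2_baseline = 0.01  # infrastructure 2's Pf while infrastructure 1 holds
pf2_given_f1 = 0.20  # infrastructure 2's Pf once infrastructure 1 has failed
cf2 = 8.0            # consequence of infrastructure 2 failing, common units

# Treating the two systems as independent understates infrastructure 2's risk:
risk2_independent = pf2_baseline * cf2

# Conditioning on the shared failure scenario does not:
pf2_scenario = pf1 * pf2_given_f1 + (1 - pf1) * pf2_baseline
risk2_scenario = pf2_scenario * cf2

print(f"risk to system 2, assumed independent: {risk2_independent:.4f}")
print(f"risk to system 2, cascade scenario:    {risk2_scenario:.4f}")
```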

  2. Identifying risk(s) in the absence of first defining the operational system and the reliability standard(s) being managed to leaves you with no stopping rule for possible failure scenarios and types of risks/uncertainties.

Without defining these initial conditions, all manner of elements and factors end up posing risks and uncertainties, e.g.

…different assets; multiple lines of business; system capacity, controls and marketing factors; in terms of the risks’ time-dependence versus independence; in terms of the risks associated with emergency work as distinct from planned work; investment risks versus operational ones; risks with respect not only to system safety and reliability, but also organizationally in terms of financial risk and in terms of risks of regulatory non-compliance….ad infinitum

At some point, it becomes an open question how managing all these and more risks and uncertainties contributes to the infrastructure’s control room operating the system reliably in real time. Conventional root cause analysis of infrastructure failure becomes highly vexed in the absence of a specified failure scenario. (For that matter, would you talk about the wetness of water by only analyzing “H2O” in terms of its oxygen and hydrogen atoms?)

In fact, the lack of a stopping rule for failure scenarios to be worried about represents a hazard or is its own failure scenario, when it discourages (further) thinking through and acting on failure scenarios about which more is already known and can be managed. When we asked infrastructure interviewees what were the “nightmares that keep them awake at night,” they identified not only measurable risks along with nonmeasurable uncertainties with respect to specific failure scenarios but also the fact that these scenarios seemed part of a limitless set of possibilities for what could go dangerously wrong.

What do these considerations add up to for the purposes of identifying new, more appropriate benchmark metrics for large system risk and uncertainty?

Most obviously, the probabilities and consequences (Pf and Cf) of large system failure can be underestimated. But this is not only because: (1) the measured estimates of Pf do not adequately address important nonmeasurable uncertainties (i.e., where either Pf or Cf cannot be measured in the time required) and (2) there are so many more failure modes than the conventional scenarios (e.g., earthquake or flood) assume.

It is also because—and importantly so, as we just saw—the failure scenarios themselves have not been specific enough with respect to the boundaries of the system being managed and the reliability standard(s) that govern what is taken to be relevant risk and uncertainty.

Second, the infrastructure’s already-existing risk mitigation programs and controls become a priority source of indicators and metrics reflecting how seriously catastrophic failure scenarios are treated by infrastructure managers. The existing controls and mitigations may provide the only real evidence, outside the real-time management of the infrastructure control room (if present), of what currently works well with respect to improving system reliability and safety when pegged to catastrophic system failure.

To put it another way, the fact that risk is not calculated through formal risk analysis and management protocols must not be taken to mean risk is not formally appraised and evaluated by other means, most prominently (1) through the skills in systemwide pattern recognition and localized scenario formulation of real-time control room operators and (2) relevant evaluation of risk mitigation programs and existing risk controls.

Against this background and in comparison to conventional risk analysis today, at least three new benchmark metrics for major risk and uncertainty can be identified by virtue of their different with-respect-to failure scenarios.

I. New risk benchmark 

When control operators and their managers in large critical infrastructures know that some events must never happen—the nuclear reactor must not lose containment, the urban water supply must not be contaminated by cryptosporidium, the electricity grid must not separate and island—and we know that they know because they behave accordingly, then better practices emerge for ensuring just that. (Again, this is why we look to evaluating existing mitigation programs and controls, and not just in the infrastructure concerned but in like infrastructures.)

Mandates to reliably preclude certain events put enormous pressure on operators and managers to focus on and adapt practices that are actually working to meet the mandates (including the appropriate evaluative criteria for measuring how effectively the mandates have been met). Where better practices have emerged, you know that others too face political, economic and social constraints and have nonetheless cleared a bar higher than the one we ourselves currently face under very similar constraints, including evaluative criteria and reliability standards.

Where so, then conventional risk analysis gets its questions only half right by stopping short of the other questions to be asked beforehand. The conventional questions, “What could go wrong?” “How likely is that?” and “What are the consequences if that were to happen?” should be preceded by: “What’s working?” “What’s even better?” and “How can we get there?” Only then do we ask: “What could go wrong in trying to get there?” “How likely is that?” and “What are the consequences if that were to happen?”

II. New metric for ranking crisis scenarios 

Start with a rather well-known prediction of Martin Rees, the British Astronomer Royal, who assigned no better than a 50/50 chance that humanity survives the current century because of catastrophes of our own making. How might we evaluate and rank his prediction in terms of risk and uncertainty?

Turn to another famous prediction, that of U.S. President Woodrow Wilson (in his time an expert in several fields), who predicted in September 1919 with “absolute certainty” that there would be another world war if the US did not join the League of Nations. Assume a unit of measurement called the Wilson. It is equal to the confidence today’s experts have that Woodrow Wilson did foresee the start of World War II.

Obviously, “the start of World War II” is inexact. Wilson did not predict the rise of Hitler, the Shoah, or carnage on the Eastern Front. But crisis scenarios for financial cascades, global cyber-attacks, and fast-spreading pandemics of as-yet unknown viruses lack comparable specificity by way of risk and uncertainty.

The question is this: How confident are experts in their crisis scenarios when that confidence is measured out in Wilsons? When it comes to nuclear terrorism, are the experts, say, 30 times more confident that such terrorism will happen than they are that Woodrow Wilson foresaw World War II? For that matter, what would be the consensus view of specialists when it comes to denominating other disaster scenarios into fractions or multiples of Wilsons?
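As a minimal sketch of the arithmetic involved (every number below is invented for illustration), denominating a crisis scenario in Wilsons is just the ratio of the experts’ confidence in that scenario to the baseline confidence that Wilson foresaw World War II.

```python
# Invented numbers only: a baseline confidence (the "Wilson") and some
# hypothetical expert confidences in crisis scenarios, re-denominated.
wilson = 0.02  # assumed expert confidence that Wilson foresaw World War II

scenarios = {
    "act of nuclear terrorism this century": 0.60,
    "global financial cascade":              0.30,
    "fast-spreading engineered pandemic":    0.10,
}

for name, confidence in scenarios.items():
    print(f"{name}: {confidence / wilson:.0f} Wilsons")
```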

The temptation is to dismiss outright the idea that Woodrow Wilson foresaw the future. Were that dismissal the scientific consensus, however, it would be quite significant for our purposes: here at least is one scenario that is just-not-possible-at-all. No risk or uncertainty of being wrong here! To render any such conclusion means, however, that the criteria used for concluding so apply to other crisis scenarios as well.

In short, we’re back to baseline confidence measures and the dray work of developing multiple ways of triangulating on and estimating specialist confidence, scenario by scenario, in the face of difficulties and inexperience over what and about which we know and do not know.

Several key points, though, become clearer at this point. To ask how confident specialists are about nuclear terrorism specifically quickly becomes a question of just what is meant by “an act of nuclear terrorism.” What, indeed, are the pertinent with-respect-to scenarios?

This devil-in-the-details leads to a second half of our thought experiment. Assume now we face a specific crisis scenario. It could be that act of nuclear terrorism, or that computer glitch sending global markets into free-fall or that bioengineered pathogen destroying near and far.

Assume a visualization of the widening scenario is simulated and presented so as to pressure decisionmakers to prevent that scenario from happening, once they see how catastrophe unfolds and metastasizes.

Assume also a running tally in the visualization shows the estimated monetary amount of the disaster’s costs—lives, property, whatever—burgeoning into the millions, then billions, now trillions. The tally in quick order reinforces how imperative it is to take urgent preventive action in the midst of all this interconnectivity (evaluative criterion #3 above).

But hold on. Assume the visualization and tally remain the same, but the simulation’s goal now is to estimate the cost of a catastrophe that can’t or won’t be prevented. The tally then becomes an unofficial price tag of the emergency prevention and management system put into place after this disaster, so that a like calamity “will never happen again” (the precluded event standard of reliability above). The commonplace here is that, sadly, it takes a disaster to bring about far better and more comprehensive disaster prevention and management afterward.

The temptation with this part of the thought experiment is to assert that, absent outright prevention, a world won’t be left from which to mount an effective crisis management infrastructure later on. That, though, surely depends on the specific catastrophe and the extenuations of implementing an emergency response infrastructure that its losses trigger. Again: The devil is in the details of the with-respect-to scenarios.

Note, though, just how difficult it is for anyone, subject matter experts let alone others, to come up with plausible details about the crisis response structure to be in place after the losses incurred. To do that requires deep knowledge and realism—far more, in other words, than the much-touted “imagination” on its own.

In short, we are asked to treat possible crisis scenarios seriously until proven otherwise, even when those offering the scenarios are unable to specify what it would take to disprove the scenarios or prevent their recurrence. Or to put the point more positively, what deserves ranking, where ranking is possible, are those crisis scenarios detailed enough to be triangulated upon and confirmed.

III. New metric for estimating societal risk acceptance 

It is generally understood that “acceptable-risk” standards, based on past failure frequencies and commitments of “never again,” can be fleeting and ephemeral. More, the retrospective orientation to letting past (in)frequency of failures set the standard has led to complacency and the very accident to be forestalled, as in: “Well, it hasn’t happened in the past, so what’s the problem now…”

It’s worth asking, what can be offered by way of a prospective orientation—“we are no more reliable than the next failure ahead”—to identifying standards of acceptable/unacceptable societal risk. What does “societal risk acceptance” look like if instead of being based on past frequencies, it is grounded in the expectation that all manner of major system accidents and failure lie in wait unless actively managed against?

I suggest the following thought experiment, the aim of which is to identify a proxy for “acceptable societal risk.” To telegraph ahead, the proxy proposed is the aggregate curve of the major real-time control room risks of society’s key critical infrastructures.

–Assume: that society has identified critical infrastructures indispensable to its survival; that the key infrastructures have central control rooms for operating the entire systems; and that the respective control room operators have a set of chief risks that they must manage in order to maintain systemwide reliability, at least in real time. (Here high reliability is defined as the safe and continuous provision of the critical service, even during periods of high risk and uncertainty.)

While these are huge assumptions, their virtue lies in trying to operationalize the far less detailed premise of current approaches—most notably ALARP (“as low as reasonably practicable”)—that somehow “society sets acceptable and unacceptable risks,” leaving the somehow utterly without specifics.

Under the precluded-event standard of reliability (i.e., the event or a set of conditions to be prevented must never happen, given the society-wide dread associated with system failure), our research found that control operators need to be able to maneuver across four performance modes so as to maintain normal operations. Each performance mode was found to have its own chief risk.

The four modes range from anticipatory exploration of options (just in case) when operations are routine and many management strategies and options are available, to a real-time improvisation of options and strategies (just in time) when task conditions are more volatile. Control room professionals and their support staff may have to operate temporarily in a high-risk mode (just for now) when system volatility is high and options few. They may also be able, in emergencies when options have dwindled, to impose onto their service users a single emergency action scenario (just this way) in order to stabilize the situation.

The chief risk in just-in-case performance is that professionals are not paying attention and become complacent—reliability professionals have let their guard down and ceased to be vigilant, e.g., to sudden changes in system volatility (think of system volatility as the degree to which the task environment is unpredictable and/or uncontrollable). As for just-in-time performance, the risk is misjudgment by the operators with so many balls in the air to think about at one time. The great risk in just-this-way performance is that not everyone who must comply does so.

Last, just-for-now performance is the most unstable performance mode of the four and the one managers want most to avoid or exit as soon as they can. Here the risk of “just keep doing that right now!” is tunneling into a course of action without escape options. What you feel compelled to do now may well increase the risks in the next step or steps ahead (in effect, options and volatility are no longer independent).

Note that the commonplace admonitions for being reliable—don’t get complacent; avoid overconfidence; once you’ve backed yourself into a corner, quick fixes work only just for now, if that; and don’t expect everyone to comply with command and control—all recognize these chief performance mode risks on time-critical jobs.
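A minimal sketch of the typology as read from the description above: the performance mode, and with it the chief risk, follows from how volatile the task environment is and how many options operators still have. The two-by-two reading (high/low volatility, many/few options) is an illustrative simplification of that description, not an operationalization from the research.

```python
# Illustrative two-by-two reading of the four performance modes and their
# chief risks, as described in the text. "high"/"low" volatility and
# "many"/"few" options are simplifications for the sketch only.

def performance_mode(volatility, options):
    """Return (mode, chief risk) given volatility ('low'/'high') and options ('many'/'few')."""
    if volatility == "low" and options == "many":
        return "just-in-case", "complacency: operators let their guard down"
    if volatility == "high" and options == "many":
        return "just-in-time", "misjudgment: too many balls in the air at once"
    if volatility == "high" and options == "few":
        return "just-for-now", "tunneling into a course of action without escape options"
    # Few options, with volatility forced down by an imposed single scenario:
    return "just-this-way", "non-compliance: not everyone who must comply does so"

for v in ("low", "high"):
    for o in ("many", "few"):
        mode, chief_risk = performance_mode(v, o)
        print(f"volatility={v:<4} options={o:<4} -> {mode:<13} chief risk: {chief_risk}")
```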

–Step back now and further assume that estimates have been computed by control room operators in consultation with subject matter experts for the risks of complacency, misjudgment, non-compliance and closing off alternatives, within the infrastructure concerned. Such is then done for (a stratified sample of) society’s key infrastructures with control rooms.

There is no reason to believe the estimates of any one of the four key risks are the same for the same performance mode across all infrastructures during their respective normal operations. Different precluded-event standards are operationalized very differently in terms of the thresholds under which the infrastructures are not to operate. Complacency or misjudgment could empirically be more of a problem in some control rooms than in others.

Assume the performance-mode risk estimates (e.g., a stratified/weighted sample of them) have been rank ordered, highest to lowest, for these infrastructures operating to a precluded-event standard by their respective control rooms. A plot of points measured in terms of their respective Pf and Cf coordinates is generated in the form of a downward sloping function (e.g., logarithmic or regression). This function reflects the revealed allocation of acceptable societal risks at the time of calculation for the critical infrastructure services of interest in really-existing normal operations to preclude their respective dreadful events from happening.

The downward sloping function would, by definition, be a prospectively oriented standard of acceptable risk for society’s (sampled) critical infrastructures operating to the precluded-event standard by their control rooms. It is prospective because the unit of analysis isn’t the risk of system failure—again, typically calculated retrospectively on the basis of the past record, if any—but rather the current risks of real-time control operators failing in systemwide management, now and in their next operational steps ahead. Note the two-dimensionality of the prospective “next steps ahead”: It refers not only to the future ahead but also the future that has to be made for the present.
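Here is a minimal sketch of what generating such a function could look like, with every (Pf, Cf) point invented for illustration: the sampled performance-mode risk estimates are fit with an ordinary least-squares line in log-log space, and the negative slope is the downward-sloping acceptance function. The scatter of points around the fitted line previews the bandwidth discussed next.

```python
# Invented (Pf, Cf) pairs standing in for sampled control-room risks: higher
# probabilities are tolerated only where consequences are lower, hence the
# downward slope. The fit is ordinary least squares on log-log axes.
import math

points = [(1e-2, 5.0), (3e-3, 20.0), (1e-3, 80.0), (2e-4, 400.0), (5e-5, 1500.0)]

xs = [math.log10(pf) for pf, _ in points]
ys = [math.log10(cf) for _, cf in points]
n = len(points)

# Least squares in log-log space: log10(Cf) = a + b * log10(Pf).
x_bar, y_bar = sum(xs) / n, sum(ys) / n
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)
a = y_bar - b * x_bar

print(f"fitted slope (negative = downward sloping): {b:.2f}")
print(f"fitted intercept: {a:.2f}")
```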

–Even though all this is difficult to detail, let alone operationalize—but less so than the conventional ALARP!—three implications are immediate.

First, because control rooms manage latent risks (uncertainties with respect to probabilities or consequences of system failure) as well as manifest risks (with known Pf and Cf), any such downward-sloping function will necessarily have a bandwidth around it. That bandwidth, however, is not one that can be chalked up to “differences in societal values and politics.” Rather, the bandwidth reflects more the control room uncertainties (often technical and procedural, but related also to unstudied or unstudiable conditions).

It is true that some real-time uncertainties to be managed are linked directly to societal values and politics—think here of where those new or revised compliance regulations that followed from the last disaster have their greatest real-time impacts. Even then, the challenge is to show how the application, at this time and for this case, of any compliance procedure follows from said societal values. That is no easy task, because the analysis would have to drive down to the case or event level and not just up to the policy or regulatory level, where societal values are (or so it is said) easier to identify.

A related implication is also noteworthy. The bandwidth around a societal risk acceptance function as defined above varies because not every critical infrastructure manages to a precluded-event standard. Other standards (and associated evaluative criteria) can be managed to. Even so, note how remote this acknowledgement is from any argument that societal values determine directly (or even primarily) the operative standards managed to.

An example is helpful. A primary reason why critical infrastructures manage to an avoided-events standard today—these events should be avoided, albeit they cannot always be avoided in practice—is that their inter-infrastructural connectivity does not allow individual control rooms to preclude failures or disruptions in the other infrastructures upon which they depend or which depend on them. It is better to say that in these interconnected cases the shift from one reliability standard (precluded events) to another (avoided events) reveals societal preferences for interconnected critical infrastructures before it demonstrates any first-order derivation from more generalized or abstracted “societal values” per se.

Third, a very practical implication follows. It is likely that policy and regulatory leaders who do not understand the uniquely prospective orientation of reliability professionals will not only confuse their own values and views about the future for those of control room reliability professionals, but also make mistakes precisely because they do not appreciate that distinctive orientation.[1]

A last point when it comes to major risk and uncertainty in policy and management. In case it needs saying, the risk and uncertainty discussed above (so too the standards, evaluative criteria, and “systems”) are socially constructed and historicized. Their expression is very much of a time and of a place.

That said, acknowledging the historical, social, cultural, economic… basis of our knowledge about the complex we have been summarizing as “risk and uncertainty” has rarely gone far enough when it comes to the policy and management discussed above.

For there is a corollary to social construction and historicism: humans can only know, really know, that which they create. (Such is the insight of St. Augustine for philosophy, Giambattista Vico for history, Roy Bhaskar for science….) Humans know mathematics in a way they cannot know the universe, because the former is a thoroughly human creation about which more and more can be made known. Its uncertainties are socially constructed in a way that, for lack of a better word, our “unknowledge” about the universe is not.

This corollary means that accepting “risk, uncertainty and allied notions are socially constructed concepts easily historicized” is only a start; that acceptance needs to be pushed further.

What is missing are the details and specifics of the connections we make among risk, uncertainty and associated terms, and the meanings we draw out of those connections, often under conditions of surprise.

Our creations are always surprising us, and we seek to explain these occurrences by means of analogies that extend the range of what we call knowledge. That which we have created by way of risk and uncertainty, and continue to create, has become very complex. In fact, so complex as to continually provoke more complexity-as-knowledge and, with it, more action-as-complexity.


[1] What are the specific, direct relationships between political leaders and infrastructure control operators? At first pass, leaders would seem to be all about just-this-way command and control in emergencies. But we know of infrastructures whose reliability professionals determine the emergency declarations, since they, not political outsiders, are the ones best informed in real time. Indeed, a big issue is ensuring that “politics stays out of the control room” as much as possible. We found leaders to be important in the negative-liberty sense of staying out of the way of control room operators working under just-in-time and just-for-now performance demands. As for just-in-case performance during times of low system volatility, leaders lead best by ensuring reliability professionals are able to build up their inventory of resources to be used in a crisis. In short, reliability professionals have more performance modes than leaders realize, we believe. Worse, what is a “crisis” to control operators is not necessarily known to or regarded as such by those political leaders whose policies reduce operator options, increase their task volatility, and confine their maneuverability to prolonged just-for-now performance only, among other real-time inflictions.

Principal sources. This blog entry consolidates, edits and updates earlier blogs: “A new standard for societal risk acceptance,” “Easily-missed points on risks with respect to failure scenarios and their major implications,” “Risk criteria with respect to asset versus system scenarios,” “Half-way risk,” “With respect to what?,” and “Yes, ‘risk and uncertainty’ are socially constructed and historicized. Now what? The missing corollary and 3 examples”

Playing it safe, utopia

The expression “playing it safe” is often used pejoratively in the US. Safety-first, Teddy Roosevelt said, will kill America. Political scientists David Edelstein and Ronald Krebs counseled a greater pragmatism in U.S. foreign policy in like manner:

…pragmatism calls for a more experimental approach to foreign policy. Creativity emerges only from an organizational and political environment that eschews rigid strategy and tolerates failure. Successful organizations adapt fluidly to changing circumstances, create cultures that permit experimentation, and learn from their errors. The first rule of foreign policy should remain “Do no harm,” but much international harm can come from playing it safe. The United States must cultivate a bureaucratic and political climate that is forgiving of small failures. Only in that atmosphere can the country’s foreign-policy makers go after the big wins—and leave strategizing behind. (my bolding; accessed online on September 19 2010 at https://www.foreignaffairs.com/articles/2015-10-20/delusions-grand-strategy)

Our research on society’s critical infrastructures, however, suggests there should be no-go areas in policy, including foreign policy, precisely because what counts as a “small failure,” or as a “forgiving” climate, cannot be predetermined or defined in a foreign affairs world of high unpredictability and complexity.

–Pause and consider what harm is created by “playing it safe” and under what conditions. “Innovate or evaporate” sums up one set of conditions. If we don’t change, we die, as in Lampedusa’s “If we want things to stay as they are, things will have to change.” There is no alternative, Mrs Thatcher repeated.

There is also the problem with that word, “playing.” It’s as if people are not serious because they are just playing around, when they could/should go further. It’s as if you’re standing still by playing safe in the same way that “maintaining the status quo” has often been (wrongly) equated to a do-nothing option. Here playing it safe borders on indifference, no longer mustering anything more than going through the motions. (And who wouldn’t seek the shelter of “playing it safe” after being continually harangued to “take control of foreign affairs”?)

Then, there is the sense in which “playing it safe” is impossible anyway. People continually deviate from the prescriptions of probability and expected utility. Or better yet: Prescriptions of probability and expected utility continually deviate from people’s actual behavior. Many people play-act safety, not because they are burned-out but for the opposite reason: They’re going to do otherwise, period.

–But whatever the case at hand, the more immediate question, empirically and logically, is: Under what conditions is not playing it safe the equivalent of running into danger?

One example will have to suffice. Say you are on one of the upper floors of a skyscraper, looking out on the morning. That is Reality 1: You are the observing subject looking out at reality. After a point, you realize that spot in the distance is actually a plane headed toward you, here in the World Trade Center. That is Reality 2: You become the object of reality, in the grip of the real, and no longer the observer.

There is, however, Reality 3. This is the reality of the air traffic controllers during 9/11. Neither the observers of the first reality nor the objects of the second, these professionals achieved the unprecedented without incident that day. They were instructed to land all commercial and general aviation aircraft in the United States, some 4,500 aircraft, and did so. They played it safe.

Without overdrawing the point, so too do we demand that our professionals “land” water, electricity, transportation, telecommunications, and many other critical services every day without major incident. If that is also playing it safe in high-risk situations, I’ll take it any day.

–If so, then Reality 3 is the domain of really-existing utopia. I’ve had little good to say about macro-designers and their utopian visions in this blog. But my critique of macro-design is not a critique of utopianism.

In fact, the search for utopia and the search for the reliability I’ve been talking about have been tied together for a very long time. The sociologist Zygmunt Bauman wrote: “To put it in a nutshell, we dream of a reliable world, one we can trust. A secure world. ‘Utopia’ is the name which, courtesy of Sir Thomas More, has commonly been given to such dreams…”

These are the utopians I’ve been describing. They are the ones in the operational middle who see something new in translating the patterns they recognize and the scenarios they face, something critical that was not there before and could not be there without that real-time translation and improvisation. This is the utopianism of Reality 3; this is the only utopianism we now practice. Pity that we may be losing these professionals before most of us even knew they were there.