A different take on “traditional agriculture”: risk-averse v. reliability-seeking

–A risk-averse farmer keeps multiple varieties of crops, livestock and/or sites so that, if one fails, s/he has others to fall back on. The more crops, livestock and sites a farmer can muster and maintain, the greater the chances s/he won’t lose everything. Where possible, the risk-averse farmer avoids hazards whose probabilities and uncertainties cannot be managed so as to maintain a survival mix of crops, livestock and productive sites. The risk-averse farmer faces a carrying capacity that sets exogenous limits on the total crops and livestock produced.

–A reliability-seeking farmer keeps multiple varieties of crops, livestock and/or sites because any single resource—e.g., the soil that sustains the crop, site and livestock—is managed better if it provides multiple services. The more crops, livestock and sites a farmer can muster and maintain, the greater the chances s/he can meet peak demands made on his or her production system. The reliability-seeking farmer seeks to manage the probabilities and uncertainties of hazards that cannot be avoided so as to maintain a peak mix of crops, livestock and sites. The reliability-seeking farmer faces a carrying capacity whose endogenous limits are set by farmer skills for and experience with different operating scales and production phases.

–Farming behavior, no matter if labelled “traditional” or “subsistence,” that is developed around high technical competence and highly complex activities, that requires high levels of sustained performance, oversight and flexibility, that is continually in search of improvement, that maintains great pressures, incentives and expectations for continuous production, and that is predicated on maintaining peak (not minimum) livestock numbers in a highly reliable fashion without threatening the very limits of system survival is scarcely what one would call “risk-averse.”

Case-by-case analysis: realism, criteria, virtues

Realism

–I remember reading there were eight principal approaches to government-as-economic-policymaker—

  1. Government as an exogenous black box
  2. Government as a neutral extension or aggregation of private choice
  3. Government as a nonneutral decision-making or preference-aggregating process
  4. Government as an instrument of the powerful
  5. Government as an instrument with which to check the power of the powerful
  6. Government as the source of problems, if not of evil, in society
  7. Government as the source of progress
  8. Government as part of the necessary framework of the market.

—and thinking, “Really, there are four approaches only. . .”

In semiotic terms—a thing is defined by what it is not—there’s one approach centering on the dimension of neutrality and its opposite (2 & 3), another on the instrumentation of power and its counter (4 & 5), and a third on retrogression and progress (6 & 7). Numbers 1 and 8, while not mirror images, represent a sharp contrast between the known market and the unknown black box.

This semiotic reading underscores that approaches to government economic policymaking need not have been determined deliberately by policymakers. Structurally, to have one approach entails its opposite, if simply by stating what the former is not.

–Similar semiotics are found at work in the recommended steps to undertaking a policy analysis: define the problem, assemble the evidence, identify the alternatives, agree on the methods and criteria to evaluate the alternatives, evaluate the alternatives in light of their projected consequences, decide on your recommendation, and then communicate it in a way that is understood and actionable.

The steps are thus also coupled—problems and solutions, like alternatives and consequences, are reciprocally defined. But what is of interest here is how a stepwise approach entails its own frustration. Not only does taking a step imply that the step can be frustrated, but taking that step might frustrate subsequent steps.

–It’s this marked opposition—e.g., define/not define, and so on for other steps—that gives the stepwise approach its realism.      

The more steps and time in addressing the issues, the more realism conveyed by that addressing. The more time the steps take, the more difficult it all is, the more real it all becomes. Too many activities also reduce the conditional probability that any of this activity will reach a conclusion. In short, the more real these issues are, the less chance these issues have of being real in any other way. It becomes its own case, to be evaluated in its own right.[1]

Criteria

The traditional ex ante (before the event) criteria for a policy proposal are economic efficiency or cost-effectiveness, political and administrative feasibility, equity and legality, among others. When it comes to ex post (after the event) evaluation of actual policy performance, the criteria narrow down to variations of: Did policy implementation match the mandated goals of the policy? Complex policy and management issues are complex, moreover, because we must expect that the goals will change, even during implementation.

Or to put that point differently, there are always different ex post criteria to evaluate any complex policy, five important ones being:

  • in terms of whether its implementation achieved its stated objectives;
  • against some ideal, which the policy’s objectives may or may not match;
  • against the implementation record of like policies;
  • in terms of what would have happened had not the policy been in effect (the “counterfactual”); and
  • in terms of whether savings could have been realized if the policy had been more cost-effectively undertaken.

The very process of identifying which of these is or are the most important is often what is meant by “determining the weight of the evidence” or “deciding the case on its own merits.” Such is why complex policy analysis is a form of case-by-case analysis.

Virtues

–An earlier blog sketched some of the practical advantages of the case approach. Let me now provide more flesh and bone (in no order of priority):

  • You get to see and show how theory-based taxonomies and conceptual frameworks go only so far but rarely far enough when applied in the field;
  • You get to see not just politics but power at work—which is key for those of us loath to talk abstractly about a bracketed-off [Power];
  • You get to see that policy and management are more complicated than politics, dollars and jerks. You get to see how easy it is to mistake the noise and mess for the “intentions” of those involved. Other explanations—bureaucrats were mindlessly following rules—also turn out to be more complicated on closer examination;
  • The case level helps you differentiate and see some things afresh, as if for the first time, and about as close to “objectivity” as you get;
  • If your case study is across time, you’ll observe not just “normal” periods but also disruptions and more. (Here your career, and not just a specific task or job, is “the case.”) When it comes to policy and management, you’ll be able to map out the different positions and standpoints taken with respect to the efficacy of macro-design, micro-operations, system-wide pattern recognition and anticipation, and more localized contingency scenarios; and
  • It will come as no surprise that actual practice, actual behavior and actual implementation in the individual case study differ from the principles, policies and plans said to govern them. This is so unexceptional that when things work as planned, that must be a surprise worthy of its own research and study.

–If you were to summarize the six bullets: focusing at the case level brings a frankness and seriousness to your analysis and advice—what the ancients highly prized as parrhesia, or frank counsel—about complications of power, context, implementation, and macro-design.

Things are more complex than we thought and because they are complex, they can be recast and seen in a new light. This is, I believe, the great virtue of the case approach: Seeing something extremely difficult in a fresh and productive light.

An example. I attended a presentation on an ecosystem restoration project in Montana. One of the project leaders described what was for him the key contribution of ecosystem management: He could now see how his forest acres in the valley fit into the wider landscape. The approach gave him a way to integrate the small and large scale, with cross-scale implications both ways.

In effect, his management perspective recoupled his acres to the landscape across scale and across time. Not only could the project leader stand in his woodlot and see how it fit in with the larger scale of the ecosystem and landscape, he was able to plan at the smaller scale for the longer term. He was able to expect a future for the forest. So too for the ecosystem manager standing at the ridge overlooking the valley, as she looks down and is able to plan at the larger scale for the shorter term. She can now see what the next steps ahead are when it comes to managing the entire ecosystem.

“Think globally, act locally” is recast as, “Think long term from the small scale, act real time from the large scale.” In this way, recoupling at the case level ends up recasting what had hitherto been an obviously coupled world, but shorn of these crucial details and specifics.

[1] An example: Robert Burton, in his well-regarded The Anatomy of Melancholy, first published in 1621, writes,

If it were possible, I would have such priests as should imitate Christ, charitable lawyers should love their neighbours as themselves, temperate and modest physicians, politicians contemn the world, philosophers should know themselves, noblemen live honestly, tradesmen leave lying and cozening, magistrates corruption, &c., but this is impossible, I must get such as I may.

The realism of that “impossible” emerges through the long list of wants that cannot be had, which, when you re-read the passage, is less about reality than about “such as I may,” Burton’s own life-long depression.

Lifecycle modeling of species

–Say you are involved in modeling the lifecycle of a listed species. You and your colleagues rightly start out ambitious by aiming to develop and then integrate sub-models for species reproduction, period-to-period, region-based species survival, movements between regions, and juvenile/adult mortality due to exogenous factors, such as human-made disaster.

It doesn’t take long to confirm what you and your colleagues suspected anyway: not only do pertinent data not exist, but modeling uncertainties and errors work against integrating current sub-models into a comprehensive lifecycle model (LCM).
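
To see why in miniature, consider a back-of-envelope sketch (all numbers invented, drawn from no actual LCM report) of how relative errors compound when sub-models are chained:

```python
# Hypothetical per-sub-model relative errors (coefficients of variation).
# All numbers are invented for illustration only.
stage_cv = {
    "reproduction": 0.20,
    "regional_survival": 0.25,
    "inter_region_movement": 0.30,
    "exogenous_mortality": 0.35,
}

combined = 1.0
for cv in stage_cv.values():
    combined *= (1.0 + cv)   # worst-case multiplicative compounding of errors
combined -= 1.0

print(f"compounded relative error: {combined:.0%}")  # 163%, vs at most 35% per stage
```

Even with each sub-model individually respectable, the integrated estimate can end up too uncertain to support management conclusions.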

Thereafter, with time and funding, you and your colleagues develop much-reduced versions, called LCM1, LCM2 and now LCM3, each bringing to light further refinements and significant methodological and data issues. You embark on developing LCM4 in the hope that the research team—again, funding permitting—is moving closer to identifying management interventions for the species.

The many technical reports (now approaching 50 in number) produced during the decade of research track the refinements, improvements, insights and difficulties in modeling species reproduction, movement and survival rates. In the view of many outside experts, this research has advanced the peer-reviewed literature on lifecycle models.

–Unfortunately, for a variety of reasons, none of the reports identifies modeling and data uncertainties in a way that allows them to be contrasted with the uncertainties and errors made in the existing comprehensive model for managing said species.

What “comprehensive model,” you ask? Didn’t I say there was no comprehensive lifecycle model? I forgot to tell you that, during all the years of the modeling research, real-time deliberations of interagency staff and scientists continued with really-existing decisions, period by period, over the management of said species.

From time to time the consequences of the management actions find their way into a technical report, but even here modeling uncertainties hold center-stage: “Though it is tempting to interpret declines in estimated [mortality] as evidence of management success, models of population dynamics are required to disentangle. . .”

–You’d think that the burden of proof would be on the modelers to demonstrate that reliance on lifecycle models leads to better results than the next best alternative of current interagency deliberations of scientists and support staff. . .

But, not to worry: The judge who mandated the research in the first place asserted way back when: “All experts agree that application of a lifecycle model is the accepted method for evaluating the effects of an action upon a population’s growth rate.”

This means all we need do is assume management isn’t improving faster than the modeling. And what could make more sense in reality than doing what is so needed in theory?

New benchmark metrics for major risk and uncertainty (longer read)

Starting point for policy and management with respect to risk and uncertainty. The methodological demand is always: First differentiate! Differentiation matters especially with respect to risk and uncertainty. There is no such thing as risk or uncertainty on its own; it is always risk or uncertainty with respect to something.

The language of risk and uncertainty is now so naturalized it always seems the obvious point of departure, like filing alphabetically or chronologically: “The first thing we have to do is assess the risks of flooding here…” No. The first thing you do is detail the with-respect-to scenarios of interest.

To start with, you identify the boundaries of the flood system as it is actually managed and then the standards of reliability to which it is being managed (namely, which events must be precluded or avoided by way of management), from which you derive the specific risks to be managed so as to meet the standard(s). The risks follow from the standard to be met for the system as bounded for management in real time.
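
A minimal sketch of that ordering, with hypothetical names and entries of my own (nothing here comes from an actual flood assessment): the scenario is detailed first, and only then are specific risks derived from it.

```python
from dataclasses import dataclass, field

@dataclass
class WithRespectToScenario:
    system_boundary: str               # the system as it is actually managed
    reliability_standard: str          # e.g., precluded-event vs. avoided-event
    derived_risks: list[str] = field(default_factory=list)  # follow from the above

flood = WithRespectToScenario(
    system_boundary="the flood system as managed in real time",
    reliability_standard="preclude flooding of the protected areas",
)
# Only with boundary and standard fixed do specific risks follow:
flood.derived_risks += ["levee overtopping during storm surge",
                        "seepage-induced levee breach"]
print(flood)
```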

Why is this important? It means that benchmarks or metrics for risk and uncertainty are all about the details in the with-respect-to scenarios.

An example. Focus on an island in the western California Delta—for example, Sherman Island—and consider criteria that engineers rely on for establishing priorities with respect to reducing levee fragility there (the island’s encircling levees are needed because its productive areas are considerably below water level):

  • Criterion 1. Levee fragility priority can be set in terms of the weakest stretch of levee around the island, i.e., the stretch of levee that has the highest probability of failure (Pf). This has obvious implications for collocated elements from different infrastructures, e.g., a very high levee Pf should counsel against plans to place, say, a huge chemical tank facility next to it. (You’d assume common sense would commend this as well.)
  • Criterion 2. Levee fragility priority can be set in terms of the stretch with the highest loss of life (and/or other assets) arising from levee failure. If the levee breaches where most island residents live, then there is less time for evacuation. Clearly, consequences of failure (Cf) are important here, and this criterion is about the levee stretch that has the greatest risk of failure, not just probability of failure. (Risk here is the product of Pf times Cf.)

Sherman Island’s weakest levee stretch, at the time of our research, was said to be on the southwest part of the island; the stretch with the greatest loss of life appeared to be on the eastern and southeast side with more residences. Other factors held constant, and from the perspective of Criterion 2, it is in fact better that the weakest stretch of levee (according to Criterion 1) is on the other side of the island, so as to ensure more time for evacuation.
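
A minimal sketch of the two criteria at work, with invented numbers (not actual Sherman Island data):

```python
# Hypothetical levee stretches: Pf = probability of failure,
# Cf = consequences of failure (say, lives at risk). Numbers invented.
stretches = {
    "southwest": {"Pf": 0.30, "Cf": 20},   # weakest stretch, few residences
    "southeast": {"Pf": 0.10, "Cf": 400},  # stronger stretch, most residences
    "north":     {"Pf": 0.05, "Cf": 50},
}

weakest  = max(stretches, key=lambda s: stretches[s]["Pf"])                       # Criterion 1
riskiest = max(stretches, key=lambda s: stretches[s]["Pf"] * stretches[s]["Cf"])  # Criterion 2

print(weakest, riskiest)  # southwest southeast: the two criteria pick different stretches
```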

–A third criterion, in contrast, reflects the extent to which the levee infrastructure of the island is part and parcel of a wider interconnected critical infrastructure system (ICIS):

  • Criterion 3. Levee fragility priority can be set in terms of the stretch that poses the greatest risk to the entailed ICIS. ICIS risk of failure is not the same as risk of levee failure only, as stretches of Sherman Island levees are in fact not just elements in the levee system there but also elements in other critical infrastructures. With respect to Sherman Island, there is the levee stretch with Hwy 160 on top; there are also other stretches serving as the waterside banks of the deepwater shipping channels; another stretch serves to protect a large wetland berm (as fishing and bird habitat). If those stretches of levee fail, so too by definition do elements fail in the deepwater shipping channel, Hwy 160 or the Delta’s threatened habitat.

Criterion 3 asks: What is the effect on the road system or shipping system or wetlands ecosystem when that shared ICIS element on Sherman Island fails? If a stretch of Hwy 160 fails, road traffic in the Delta would have to be detoured; if a stretch of the deepwater shipping channel fails, shipping traffic would have to be rerouted to other ports; and so on. In some cases the service cannot continue because there is no default option, e.g., the Sherman Island wetlands berm in terms of its habitat and fish can’t be “rerouted” were the protective levee to fail.

Which infrastructure system sharing one or more ICIS elements on Sherman Island would be affected the most, in terms of an increased probability of failing as a system, were such Sherman Island levee stretches to fail? The answer: A levee breach anywhere on Sherman Island would increase the probability of closing the key pumps for the State Water Project. That is, the Pf of the state and federal water projects would increase were Sherman Island to flood, because saltwater would be pulled further up from the San Francisco Bay into the freshwater Delta.

–In sum, the three with-respect-to risk assessment criteria—others are possible—differ appreciably as to where risk analysts focus attention in terms of levee fragility: the weakest stretch (Pf) may not be the same stretch whose failure would have the greatest loss of life and property (Cf), while any stretch that failed would pose the greatest ICIS risk (namely, the probability that an ICIS element failing increases the probability of failure of one or more of the constituent systems sharing that element).

You would expect that calls for more and more “inter-organizational coordination” would have to be prioritized in light of these criteria distinctions. You’d be wrong. Criterion 3 was altogether outside the conventional remit of risk assessment and management up to and at the time of the research.

Broader methodological implications of risk and uncertainty with-respect-to scenarios. Before proceeding to new metrics based in benchmarks for such risks, uncertainties and criteria, it is important to tease out what we mean and imply by “with respect to” in more general methodological terms:

  1. If you define risk of failure as the product of the probability of failure (Pf) times the consequences of failure (Cf), then Pf and Cf are NOT independent of each other, as conventional risk analysis would have it.

Both are connected indirectly by the “intervening variable” of their failure scenario. It’s Pf and Cf with respect to the same failure scenario. It’s the failure scenario which details the operative (1) reliability standard (are you seeking to preclude specific events or to avoid them where possible; are some events inevitable or compensable after the fact); (2) evaluative criteria (are you managing Pf [probability] or both Pf and Cf [risk]); and (3) the system being managed (are you managing, e.g., within one infrastructure or across different infrastructures).

Accordingly, the more granular the failure scenario (the greater the details about the above), the more likely it is that Pfs and Cfs are directly interconnected. In the most obvious case of interinfrastructural cascades, one consequence of infrastructure 1 failing (Cf1) may be to increase infrastructure 2’s probability of failure (Pf2).
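
A toy calculation of that cascade (all probabilities invented):

```python
# Hypothetical two-infrastructure cascade: infrastructure 1's failure (a Cf1)
# raises infrastructure 2's probability of failure (Pf2). Numbers invented.
pf1 = 0.02              # P(infrastructure 1 fails)
pf2_if_1_holds = 0.01   # P(infrastructure 2 fails | 1 holds)
pf2_if_1_fails = 0.40   # P(infrastructure 2 fails | 1 fails), via a shared element

# Unconditional Pf2 once the coupling is made explicit in the failure scenario:
pf2 = pf1 * pf2_if_1_fails + (1 - pf1) * pf2_if_1_holds
print(f"Pf2 with the cascade detailed: {pf2:.4f}")  # 0.0178, nearly double 0.01
```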

This is why a risk estimate must never be confused with a prediction, i.e., “if the risk is left unattended, failure is a matter of time.” Even were Pf and Cf not interconnected, the efficacy of prediction depends on how detailed the with-respect-to scenario is. The function of the failure scenario is to identify and detail (if not isolate) the conditions of cause and effect upon which prediction is or is not made possible. Without the scenario, you simply cannot assume more uncertainty means more risk; it may mean only more uncertainty over estimated risk in light of the with-respect-to scenario. You will note that many “large system failure scenarios,” a.k.a. crisis narratives, are devoid of just such detail when it comes to the operative reliability standards, evaluative criteria and (sub)systems to be managed.

  2. Identifying risk(s) in the absence of first defining the operational system and the reliability standard(s) being managed to leaves no stopping rule for possible failure scenarios and types of risks/uncertainties.

Without defining these initial conditions, all manner of elements and factors end up posing risks and uncertainties, e.g.

…different assets; multiple lines of business; system capacity, controls and marketing factors; in terms of the risks’ time-dependence versus independence; in terms of the risks associated with emergency work as distinct from planned work; investment risks versus operational ones; risks with respect not only to system safety and reliability, but also organizationally in terms of financial risk and in terms of risks of regulatory non-compliance….ad infinitum

At some point, it becomes an open question how managing all these and more risks and uncertainties contributes to the infrastructure’s control room operating the system reliably in real time. Conventional root cause analysis of infrastructure failure becomes highly vexed in the absence of a specified failure scenario. (For that matter, would you talk about the wetness of water by analyzing “H2O” only in terms of its oxygen and hydrogen atoms?)

In fact, the lack of a stopping rule for failure scenarios to be worried about represents a hazard or is its own failure scenario, when it discourages (further) thinking through and acting on failure scenarios about which more is already known and can be managed. When we asked infrastructure interviewees what were the “nightmares that keep them awake at night,” they identified not only measurable risks along with nonmeasurable uncertainties with respect to specific failure scenarios but also the fact that these scenarios seemed part of a limitless set of possibilities for what could go dangerously wrong.

What do these considerations add up to for the purposes of identifying new, more appropriate benchmark metrics for large system risk and uncertainty?

Most obviously, the probabilities and consequences (Pf and Cf) of large system failure can be underestimated. But this is not only because: (1) the measured estimates of Pf do not adequately address important nonmeasurable uncertainties (i.e., where either Pf or Cf cannot be measured in the time required) and (2) there are so many more failure modes than the conventional scenarios (e.g., earthquake or flood) assume.

It is also because—and importantly so, as we just saw—the failure scenarios themselves have not been specific enough with respect to the boundaries of the system being managed and the reliability standard(s) that govern what is taken to be relevant risk and uncertainty.

Second, the infrastructure’s already-existing risk mitigation programs and controls become a priority source of indicators and metrics reflecting how seriously catastrophic failure scenarios are treated by infrastructure managers. The existing controls and mitigations may provide the only real evidence, outside the real-time management of the infrastructure control room (if present), of what currently works well with respect to improving system reliability and safety when pegged to catastrophic system failure.

To put it another way, the fact that risk is not calculated through formal risk analysis and management protocols must not be taken to mean risk is not formally appraised and evaluated by other means, most prominently (1) through the skills in systemwide pattern recognition and localized scenario formulation of real-time control room operators and (2) through relevant evaluation of risk mitigation programs and existing risk controls.

Against this background and in comparison to conventional risk analysis today, at least three new benchmark metrics for major risk and uncertainty can be identified by virtue of their different with-respect-to failure scenarios.

I. New risk benchmark 

When control operators and their managers in large critical infrastructures know that some events must never happen—the nuclear reactor must not lose containment, the urban water supply must not be contaminated by cryptosporidium, the electricity grid must not separate and island—and we know that they know because they behave accordingly—then better practices emerge for ensuring just that. (Again, this is why we look to evaluating existing mitigation programs and controls, and not just in the infrastructure concerned but in like infrastructures.)

Mandates to reliably preclude certain events put enormous pressure on operators to focus on and adopt practices that are actually working to meet the mandates (including the appropriate evaluative criteria for measuring how effectively the mandates have been met). Where better practices have emerged, you know that others too face political, economic and social constraints and nonetheless have jumped a bar higher than the one you yourselves currently face under very similar constraints, including evaluative criteria and reliability standards.

Where so, conventional risk analysis gets its questions only half right by stopping short of the other questions to be asked beforehand. The conventional questions, “What could go wrong?” “How likely is that?” and “What are the consequences if that were to happen?” should be preceded by: “What’s working?” “What’s even better?” “How can we get there?” Only then do we ask: “What could go wrong in trying to get there?” “How likely is that?” and “What are the consequences if that were to happen?”

(BTW, which would you prefer to start with in highly uncertain conditions: conventional risk analysis or high reliability management? The Maginot Line or the electricity grid enabling you to read this question?)

II. New metric for ranking crisis scenarios 

Start with a rather well-known prediction of Martin Rees, the British cosmologist and Astronomer Royal, who assigned no better than a 50/50 chance that humanity survives the current century, given catastrophes of our own making. How might we evaluate and rank his prediction in terms of risk and uncertainty?

Turn to another famous prediction, that of U.S. President Woodrow Wilson (in his time an expert in several fields), who predicted in September 1919 with “absolute certainty” that there would be another world war if the US did not join the League of Nations. Assume a unit of measurement called the Wilson. It is equal to the confidence today’s experts have that Woodrow Wilson did foresee the start of World War II.

Obviously, “the start of World War II” is inexact. Wilson did not predict the rise of Hitler, the Shoah, or carnage on the Eastern Front. But crisis scenarios for financial cascades, global cyber-attacks, and fast-spreading pandemics of as-yet unknown viruses lack comparable specificity by way of risk and uncertainty.

The question is this: How confident are experts in their crisis scenarios when that confidence is measured out in Wilsons? When it comes to nuclear terrorism, are the experts, say, 30 times more confident that such terrorism will happen than they are that Woodrow Wilson foresaw World War II? For that matter, what would be the consensus view of specialists when it comes to denominating other disaster scenarios into fractions or multiples of Wilsons?
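
As a toy illustration only (both confidence figures are invented, not survey results), the arithmetic of the unit is simple:

```python
# Denominating expert confidence in a crisis scenario in "Wilsons".
confidence_in_wilson = 0.02      # hypothetical: confidence Wilson foresaw WWII
confidence_in_scenario = 0.60    # hypothetical: confidence in, say, nuclear terrorism

wilsons = confidence_in_scenario / confidence_in_wilson
print(f"{wilsons:.0f} Wilsons")  # 30 Wilsons: 30x the Wilson benchmark
```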

The temptation is to dismiss outright that Woodrow Wilson foresaw the future. Were that dismissal the scientific consensus, however, it would be quite significant for our purposes: Here at least is one scenario that is just-not-possible-at-all. No risk or uncertainty of being wrong here! To render any such conclusion means, however, that the criteria used for concluding so must apply to other crisis scenarios as well.

In short, we’re back to baseline confidence measures and the dray work of developing multiple ways of triangulating on and estimating specialist confidence, scenario by scenario, in the face of difficulties and inexperience over what we know and do not know.

Several key points, though, become clearer at this point. To ask how confident specialists are about nuclear terrorism specifically quickly becomes a question of just what is meant by “an act of nuclear terrorism.” What, indeed, are the pertinent with-respect-to scenarios?

This devil-in-the-details leads to the second half of our thought experiment. Assume now we face a specific crisis scenario. It could be that act of nuclear terrorism, or that computer glitch sending global markets into free-fall, or that bioengineered pathogen destroying near and far.

Assume a visualization of the widening scenario is simulated and presented so as to pressure decisionmakers to prevent that scenario from happening, once they see how catastrophe unfolds and metastasizes.

Assume also a running tally in the visualization shows the estimated monetary amount of the disaster’s costs—lives, property, whatever—burgeoning into the millions, then billions, now trillions. The tally in quick order reinforces how imperative it is to take urgent preventive action in the midst of all this interconnectivity (evaluative criterion #3 above).

But hold on. Assume the visualization and tally remain the same, but the simulation’s goal now is to estimate the cost of a catastrophe that can’t or won’t be prevented. The tally then becomes an unofficial price tag of the emergency prevention and management system put into place after this disaster, so that a like calamity “will never happen again” (the precluded event standard of reliability above). The commonplace here is that, sadly, it takes a disaster to bring about far better and more comprehensive disaster prevention and management afterward.

The temptation with this part of the thought experiment is to assert that, absent outright prevention, a world won’t be left from which to mount an effective crisis management infrastructure later on. That, though, surely depends on the specific catastrophe and the extenuations of implementing an emergency response infrastructure that its losses trigger. Again: The devil is in the details of the with-respect-to scenarios.

Note, though, just how difficult it is for anyone, subject matter experts let alone others, to come up with plausible details about the crisis response structure to be in place after the losses incurred. To do that requires deep knowledge and realism—far more, in other words, than the much-touted “imagination” on its own.

In short, we are asked to treat possible crisis scenarios seriously until proven otherwise, even when those offering the scenarios are unable to specify what it would take to disprove the scenarios or prevent their recurrence. Or to put the point more positively, what deserves ranking, where ranking is possible, are those crisis scenarios of sufficient detail to be triangulated upon and confirmed.

III. New metric for estimating societal risk acceptance 

It is generally understood that “acceptable-risk” standards, based on past failure frequencies and commitments of “never again,” can be fleeting and ephemeral. More, the retrospective orientation to letting past (in)frequency of failures set the standard has led to complacency and the very accident to be forestalled, as in: “Well, it hasn’t happened in the past, so what’s the problem now…”

It’s worth asking what can be offered by way of a prospective orientation—“we are no more reliable than the next failure ahead”—to identifying standards of acceptable/unacceptable societal risk. What does “societal risk acceptance” look like if, instead of being based on past frequencies, it is grounded in the expectation that all manner of major system accidents and failures lie in wait unless actively managed against?

I suggest the following thought experiment, the aim of which is to identify a proxy for “acceptable societal risk.” To telegraph ahead, the proxy proposed is the aggregate curve of the major real-time control room risks of society’s key critical infrastructures.

–Assume: that society has identified critical infrastructures indispensable to its survival; that the key infrastructures have central control rooms for operating the entire systems; and that the respective control room operators have a set of chief risks that they must manage in order to maintain systemwide reliability, at least in real time. (Here high reliability is defined as the safe and continuous provision of the critical service, even during periods of high risk and uncertainty.)

While these are huge assumptions, their virtue lies in trying to operationalize the far less detailed premise of current approaches—most notably ALARP (“as low as reasonably practicable”)—that somehow “society sets acceptable and unacceptable risks,” leaving the somehow utterly without specifics.

Under the precluded-event standard of reliability (i.e., the event or a set of conditions to be prevented must never happen, given the society-wide dread associated with system failure), our research found that control operators need to be able to maneuver across four performance modes so as to maintain normal operations. Each performance mode was found to have its own chief risk.

The four modes range from anticipatory exploration of options (just in case) when operations are routine and many management strategies and options are available, to a real-time improvisation of options and strategies (just in time) when task conditions are more volatile. Control room professionals and their support staff may have to operate temporarily in a high-risk mode (just for now) when system volatility is high and options few. They may also be able, in emergencies when options have dwindled, to impose onto their service users a single emergency action scenario (just this way) in order to stabilize the situation.

The chief risk in just-in-case performance is that professionals are not paying attention and become complacent—reliability professionals have let their guard down and ceased to be vigilant, e.g., to sudden changes in system volatility (think of system volatility as the degree to which the task environment is unpredictable and/or uncontrollable). As for just-in-time performance, the risk is misjudgment by the operators with so many balls in the air to think about at one time. The great risk in just-this-way performance is that not everyone who must comply does so.

Last, just-for-now performance is the most unstable performance mode of the four and the one managers want most to avoid or exit as soon as they can. Here the risk of “just keep doing that right now!” is tunneling into a course of action without escape options. What you feel compelled to do now may well increase the risks in the next step or steps ahead (in effect, options and volatility are no longer independent).

Note that the commonplace admonitions for being reliable—don’t get complacent; avoid overconfidence; once you’ve backed yourself into a corner, quick fixes work only just for now, if that; and don’t expect everyone to comply with command and control—all recognize these chief performance mode risks on time-critical jobs.
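
Restated as a simple lookup (the condition summaries paraphrase the description above; they are my reading, not a table from the source research):

```python
# The four performance modes, their task conditions, and their chief risks.
performance_modes = {
    "just in case":  {"conditions": "routine operations, many options",
                      "chief_risk": "complacency; vigilance lapses"},
    "just in time":  {"conditions": "volatile conditions, many options",
                      "chief_risk": "operator misjudgment (many balls in the air)"},
    "just for now":  {"conditions": "high volatility, few options",
                      "chief_risk": "tunneling into a course without escape options"},
    "just this way": {"conditions": "emergency; single imposed action scenario",
                      "chief_risk": "non-compliance by those who must comply"},
}

for mode, attrs in performance_modes.items():
    print(f"{mode:>13}: {attrs['chief_risk']}")
```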

–Step back now and further assume that estimates have been computed by control room operators in consultation with subject matter experts for the risks of complacency, misjudgment, non-compliance and closing off alternatives, within the infrastructure concerned. Such is then done for (a stratified sample of) society’s key infrastructures with control rooms.

There is no reason to believe the estimates of any one of the four key risks are the same for the same performance mode across all infrastructures during their respective normal operations. Different precluded-event standards are operationalized very differently in terms of the thresholds under which the systems are not to operate. Complacency or misjudgment could empirically be more of a problem in some control rooms than others.

Assume the performance-mode risk estimates (e.g., a stratified/weighted sample of them) have been rank ordered, highest to lowest, for these infrastructures operating to a precluded-event standard by their respective control rooms. A plot of points measured in terms of their respective Pf and Cf coordinates is generated in the form of a downward sloping function (e.g., logarithmic or regression). This function reflects the revealed allocation of acceptable societal risks at the time of calculation for the critical infrastructure services of interest in really-existing normal operations to preclude their respective dreadful events from happening.

The downward sloping function would, by definition, be a prospectively oriented standard of acceptable risk for society’s (sampled) critical infrastructures operating to the precluded-event standard by their control rooms. It is prospective because the unit of analysis isn’t the risk of system failure—again, typically calculated retrospectively on the basis of the past record, if any—but rather the current risks of real-time control operators failing in systemwide management, now and in their next operational steps ahead. Note the two-dimensionality of the prospective “next steps ahead”: It refers not only to the future ahead but also the future that has to be made for the present.
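
A minimal sketch of how such a function might be fit, with invented (Pf, Cf) coordinates standing in for the sampled performance-mode risk estimates:

```python
import numpy as np

# Hypothetical (Pf, Cf) points, one per sampled control-room risk estimate.
pf = np.array([1e-5, 1e-4, 1e-3, 1e-2])
cf = np.array([5e4, 8e3, 1.2e3, 9e1])  # consequences in, say, dollars

# Downward-sloping log-log regression through the ranked points.
slope, intercept = np.polyfit(np.log10(pf), np.log10(cf), 1)
print(f"log10(Cf) = {slope:.2f} * log10(Pf) + {intercept:.2f}")
# The fitted line is the prospective, revealed standard of acceptable societal
# risk for the sampled infrastructures; the scatter around it anticipates the
# bandwidth discussed below.
```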

–Even though all this is difficult to detail, let alone operationalize—but less so than the conventional ALARP!—three implications are immediate.

First, because control rooms manage latent risks (uncertainties with respect to probabilities or consequences of system failure) as well as manifest risks (with known Pf and Cf), any such downward-sloping function will necessarily have a bandwidth around it. That bandwidth, however, is not one that can be chalked up to “differences in societal values and politics.” Rather, the bandwidth reflects more the control room uncertainties (often technical and procedural, but related also to unstudied or unstudiable conditions).

It is true that some real-time uncertainties to be managed are linked directly to societal values and politics—think here of where those new or revised compliance regulations that followed from the last disaster have their greatest real-time impacts. Even then, the challenge is to show how the application, at this time and for this case, of any compliance procedure follows from said societal values. That is no easy task, because analysis would have to drive down to the case or event level and not just up to the policy or regulatory level where societal values are (or so it is said) easier to identify.

A related implication is also noteworthy. The bandwidth around a societal risk acceptance function as defined above varies because not every critical infrastructure manages to a precluded-event standard. Other standards (and associated evaluative criteria) can be managed to. Even so, note how remote this acknowledgement is from any argument that societal values determine directly (or even primarily) the operative standards managed to.

An example is helpful. A primary reason why critical infrastructures manage to an avoided-events standard today—these events should be avoided, albeit they cannot always be in practice—is because their inter-infrastructural connectivity does not allow individual control rooms to preclude failures or disruptions in the other infrastructures upon which they depend or which depend on them. It is better to say that in these interconnected cases the shift from one (precluded-event) to another (avoided-event) reliability standard reveals societal preferences for interconnected critical infrastructures before it demonstrates any first-order derivation from more generalized or abstracted “societal values” per se.

Third, a very practical implication follows. It is likely that policy and regulatory leaders who do not understand the uniquely prospective orientation of reliability professionals are apt not only to confuse their own values and views about the future with those of control room reliability professionals, but also to make mistakes because they don’t appreciate these professionals’ distinctive orientation.[1]

A last point when it comes to major risk and uncertainty in policy and management. In case it needs saying, the risk and uncertainty discussed above–so too the standards, evaluative criteria, and “systems”–are socially constructed and historicized. Their expression is very much of a time and of a place.

That said, acknowledging the historical, social, cultural, economic…basis of our knowledge about the complex we have been summarizing as “risk and uncertainty” has rarely gone far enough when it comes to the policy and management discussed above.

For there is the corollary of social construction and historicism: Humans can only know—really know—that which they create. (Such is the insight of St. Augustine for philosophy, Giambattista Vico for history, Roy Bhaskar for science….) Humans know mathematics in a way they cannot know the universe, because the former is a thoroughly human creation about which more and more can be made known. Its uncertainties are socially constructed in a way that, for lack of a better word, “unknowledge” about the universe is not.

This corollary means that to accept that “Risk, uncertainty and allied notions are socially constructed concepts easily historicized” needs to be pushed further.

What is missing are the details and specifics of the connections among risk, uncertainty and associated terms that we make and the meanings we draw out for these connections, often under conditions of surprise.

Our creations are always surprising us and we seek to explain these occurrences by means of analogies that extend the range of what we call knowledge. That which we have created by way of risk and uncertainty—and continue to create—has become very complex. In fact: so complex as to continually provoke more complexity-as-knowledge and with it more action-as-complexity.


[1] What are the specific direct relationships between political leaders and infrastructure control operators? At first pass, leaders would seem to be all about just-this-way command and control in emergencies. But we know of infrastructure reliability professionals who determine emergency declarations, as they are the best informed in real time, not political outsiders. Indeed, a big issue is ensuring “politics stays out of the control room” as much as possible. We found leaders to be important in the negative-liberty sense of staying out of the way of control room operators working under just-in-time and just-for-now performance demands. As for just-in-case performance during times of low system volatility, leaders lead best by ensuring reliability professionals are able to build up their inventory of resources to be used in a crisis. In short, reliability professionals have more performance modes than leaders realize, we believe.

Worse, what is a “crisis” to control operators is not necessarily known to or regarded by those political leaders whose policies reduce operator options, increase their task volatility, and reduce their maneuverability to prolonged just-for-now performance only, among other real-time inflictions. To put the point more generally, when it comes to crisis management, the conventional literature on leadership is either top down (leaders direct) or bottom up (self-organizing). We add a third category: control rooms, and not just incident command centers during the emergency but already-existing infrastructure control rooms that continue to operate during the emergency. Adding the third is to insist on the preexisting nature of management in which crises that would have happened did not, because of ongoing reliability operations.

Principal sources. This blog entry consolidates, edits and updates earlier blogs: “A new standard for societal risk acceptance,” “Easily-missed points on risks with respect to failure scenarios and their major implications,” “Risk criteria with respect to asset versus system scenarios,” “Half-way risk,” “With respect to what?,” and “Yes, ‘risk and uncertainty’ are socially constructed and historicized. Now what? The missing corollary and 3 examples”

Playing it safe, utopia

The expression “playing it safe” is often used pejoratively in the US. Safety-first, Teddy Roosevelt said, will kill America. Political scientists David Edelstein and Ronald Krebs counseled a greater pragmatism in U.S. foreign policy in like manner:

…pragmatism calls for a more experimental approach to foreign policy. Creativity emerges only from an organizational and political environment that eschews rigid strategy and tolerates failure. Successful organizations adapt fluidly to changing circumstances, create cultures that permit experimentation, and learn from their errors. The first rule of foreign policy should remain “Do no harm,” but much international harm can come from playing it safe. The United States must cultivate a bureaucratic and political climate that is forgiving of small failures. Only in that atmosphere can the country’s foreign-policy makers go after the big wins—and leave strategizing behind. (my bolding; accessed online on September 19 2010 at https://www.foreignaffairs.com/articles/2015-10-20/delusions-grand-strategy)

Our research on society’s critical infrastructures, however, suggests there should be no-go areas in policy, including foreign policy, precisely because what constitutes a “small failure” and what counts as “forgiving” cannot be predetermined or defined in a foreign affairs world of high unpredictability and complexity.

–Pause and consider what harm is created by “playing it safe” and under what conditions. “Innovate or evaporate” sums up one set of conditions. If we don’t change, we die, as in Lampedusa’s “If we want things to stay as they are, things will have to change.” There is no alternative, Mrs Thatcher repeated.

There is also the problem with that word, “playing.” It’s as if people are not serious because they are just playing around, when they could/should go further. It’s as if you’re standing still by playing safe in the same way that “maintaining the status quo” has often been (wrongly) equated to a do-nothing option. Here playing it safe borders on indifference, no longer mustering anything more than going through the motions. (And who wouldn’t seek the shelter of “playing it safe” after being continually harangued to “take control of foreign affairs”?)

Then, there is the sense in which “playing it safe” is impossible anyway. People continually deviate from the prescriptions of probability and expected utility. Or better yet: Prescriptions of probability and expected utility continually deviate from people’s actual behavior. Many people play-act safety, not because they are burned-out but for the opposite reason: They’re going to do otherwise, period.

–But whatever the case at hand, the more immediate question, empirically and logically, is: Under what conditions is not playing it safe the equivalent of running into danger?

One example will have to suffice. Say you are on one of the upper floors of a skyscraper, looking out on the morning. That is Reality 1: You are the observing subject looking out at reality. After a point, you realize that spot in the distance is actually a plane headed toward you, here in the World Trade Center. That is Reality 2: You become the object of reality, in the grip of the real, and no longer the observer.

There is, however, Reality 3. This is that of the air traffic controllers on 9/11. Neither the observers of the first reality nor the objects of the second, these professionals achieved the unprecedented without incident that day. They were instructed to land all commercial and general aviation aircraft in the United States—some 4,500 aircraft—and did so. They played it safe.

Without overdrawing the point, so too do we demand our professionals land those water, electricity, transportation, telecommunications, and many more critical services every day without major incident. If that is also playing it safe in high-risk situations, I’ll take it any day.

–If so, then Reality 3 is the domain of really-existing utopia. I’ve had little good to say about macro-designers and their utopian visions in this blog. But my critique of macro-design is not a critique of utopianism.

In fact, the search for utopia and the search for the reliability I’ve been talking about have been tied together for a very long time. Sociologist Zygmunt Bauman wrote: “To put it in a nutshell, we dream of a reliable world, one we can trust. A secure world. ‘Utopia’ is the name which, courtesy of Sir Thomas More, has commonly been given to such dreams…”

These are the utopians I’ve been describing. They are the ones in the operational middle who see something new in translating the patterns they see and the scenarios they face, something critical that was not there before nor could be there without that real-time translation and improvisation. This is the utopianism of Reality 3; this is the only utopianism we now practice. Pity, we may be losing these professionals before most of us even knew they were there.

New environmental narratives for their End Times (longer read)

The end-time crisis of next-ism. Many ecologists and environmentalists I’m familiar with insist that more things can go straight-out, hair-raisingly wrong than can go right. It is easier to mismanage an ecosystem than it is to manage it. Ecosystem collapse is more certain than ecosystem sustainability; negative externalities are to be expected, positive ones not. Closing time in the gardens of the West is right now. . .always now. What happens next is even worse.

Nature on its own is too complex to control, but our mismanagement of nature unleashes forces we ourselves cannot control. Probabilities of large system failure and cascades are primed to flip to 1.0 in no time flat.

If livestock numbers are rising, it must be because of “a tragedy of the commons.” If resource extraction is going up, it must be because of “globalization.” If trees are being cut, it must be “deforestation,” however many trees are being planted elsewhere. If the Sahel moves, it must be because of “desertification” or “worsening global climate change.”

They insist that the majority of people don’t see how bad things are, only they do; that their minority has no power, only the others have; and that it’s never too late for serious talking, but it’s too late to expect the majority to give them a serious hearing.

We must manage the planet’s resources better, but no one can expect technology to help us do so. Economic growth is never a sufficient condition for improving the environment, while economic growth’s irreversible impacts on the environment are always a sufficient condition for precaution. Except, however, when failure is not an option! So much is uncertain that anything is possible, and “thus” everything must be at risk. Whatever humans touch they make worse, this being Barry Commoner’s Third Law of Ecology.

What worked yesterday—the very thing we thought we knew and depended on—bites back today, with tomorrow looking very much the worse. So much of the once-good turns out to be the now-so-bad. “The dark side of human nature has no noticeable circumference,” writes poet Sebastian Barker. Indeed, what follows good and bad is more bad.

Let’s call my colleagues’ standpoint next-ism.

This—realism, free-floating anxiety, existential panic, dog-whistle alarmism—describes a world clearly not made to my colleagues’ specification. There is not the slightest intimation or whiff of possibility that the decades of environmental advances since the 1960s have been a noble experiment. Nor is there acknowledgement it’s no surprise that if you spread environmental practice worldwide—plant more trees and such!—you spread tree diseases worldwide, or such.

Nor is there a scintilla of recognition that their exhortations to get us to do the right thing by way of the environment pale and wither before the historical record of really-existing humans with real problems in real time who do not follow all orders given them, even in the most authoritarian of regimes (as we now know for communist East Germany and China).

Nor is there a scintilla of recognition that the major feature of their disaster scenarios isn’t the disasters but the massive lack of attention to the multiple ways necessary to triangulate on and increase our confidence that these disasters can happen, given they are so grave and concerning in the first place. But there’s no time for that!, next-ism asserts.

Where does Next-Is-Worse leave us?

For me, it’s easier to understand why “the environmental movement” is blamed for failing to stop or otherwise mitigate anthropogenic climate change, species extinction, and/or biodiversity loss. Corporate and economic interests clearly can and do brainwash us into believing things are better than they are environmentally. We don’t need any more evidence to substantiate that! But one can scarcely credit the same interests for having brainwashed my colleagues into next-ism.

***

So, what to do? Instead of rushing to the counter-evidence, let’s assume that next-ism is warranted. The question then becomes: Even if true, can we push next-ism further by way of specifics?

I think we can. For that matter, when you push next-ism toward its logical and empirical conclusions, it quickly begins to look less fatalistic.

–In this thought experiment, let’s agree that the Anthropocene’s rotten core is modernity—international capital, American consumerism, global urbanization—while in the same breath insisting all this is best described in the very terms of modernity: Anything and everything is at risk; all thinkable risks are premonitory; any can be catastrophic. Ensure this angst has no closure, nothing prevents the proliferation of worst-case scenarios, the dose makes the poison, how then to plan…

Oops, did I write, “plan”? Since when are epochs a planning horizon for anything? And here you thought policy and management in the Holocene were difficult!

Economists will have to give up discounting the future into present-value terms, as the future is Anthropocene SHOCKS. Engineers and ecologists dither with resilience and adaptive capacity, as if bouncing back or forward were optimal over a longue durée that by definition can’t be optimized. Our predictions—and we do insist on forecasting!—will be as effective as predicting the next poem from the poet’s body of work.

We’ll look back at relegating “progress” to the scare quotes of always-late capitalism as the easiest thing humans did in the Anthropocene. All the while, alarmism fills the vacuum left behind by lack of remedy-and-implementation at a level of granularity that global and planetary explications of cause-and-effect do not have.

–The problem is this: The long-term and the planetary are deployed so as to nail home the interconnectivity of it all. Everything is connected with everything else—without, however, acknowledging this must also mean nothing is completely reducible to anything else. Relations stop nowhere, as novelist Henry James put it, but they are nevertheless discrete relations. We can’t ignore irreducible particularity just as we cannot ignore interrelatedness. The burden of proof, however, is on the universalizing interconnectionists to detail why and how and in what forms this messy, vernacular particularity—and its allied notions of “case” and “context”—arises and endures.

Specifics matter more than ever, precisely because we are in the Anthropocene.

–The specifics I have researched entail a more granular focus in environmental scenarios on the real-time operations of human societies’ key critical infrastructures within a regional context–especially if your concerns are as environmental as those driving the Anthropocene:

  • Granular because risk and uncertainty are always with-respect-to specific failure or accident scenarios–and the devil is in the details of the scenarios;
  • Real-time operations because the measure of effectiveness is to manage effectively now and within the Anthropocene;
  • Operations of key infrastructures because the reliability and safety of these large socio-technical systems–think critical energy and water supplies–are not only vital to society, immediately, but are often based in ecosystem services mandated for restoration or sustainability; and
  • Within a regional context because global climate change modeling and other types of environmental modeling accept the region as the unit of analysis for near-term risk and uncertainty management. (High-resolution models using LIDAR data and other GIS approaches already exist that provide climate-related flooding and wildfire information useful for critical infrastructures over their nearer-term cycles, e.g., for investment and depreciation purposes; a sketch of the kind of asset-level screening this enables follows this list.)
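
By way of a hedged illustration only: everything below is invented (the flood grid, the assets, the screening threshold). A real analysis would read a LIDAR-derived raster through a GIS library rather than simulate one, but the shape of the exercise would be the same: look up the modeled hazard at each asset and flag what needs review within its investment horizon.

```python
# A minimal sketch, on invented data, of screening infrastructure assets
# against a high-resolution regional flood layer. In practice the raster
# would come from LIDAR-derived GIS data; a random grid stands in here.
import numpy as np

rng = np.random.default_rng(0)
flood_depth_m = rng.gamma(shape=1.5, scale=0.4, size=(100, 100))  # stand-in raster

# Hypothetical assets: name, (row, col) cell on the grid, and the asset's
# remaining investment/depreciation horizon in years.
assets = [
    ("substation_A",   (12, 40), 15),
    ("pump_station_B", (75, 20),  8),
    ("levee_gauge_C",  (50, 90), 30),
]

DEPTH_THRESHOLD_M = 0.5  # illustrative screening threshold, not a standard

for name, (row, col), horizon_yrs in assets:
    depth = flood_depth_m[row, col]
    status = "review siting/protection" if depth > DEPTH_THRESHOLD_M else "ok for now"
    print(f"{name}: modeled depth {depth:.2f} m, {horizon_yrs}-yr horizon -> {status}")
```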

***

But what are the specific environmental scenarios? Some environmental next-ists might say that the above is too little/too late or that the time left doesn’t allow for such fine points. Others would, I think, want to see what these alternative scenarios look like before concluding that going any further is a waste.

Certainly the above argues that any next-ist assumption that “all scenarios are more or less bad” when it comes to environmental futures is on the face of it anti-empirical and willfully so. Next-ists aren’t interested in the fact that the absence of a stopping rule for failure scenarios implies no stopping rule for positive scenarios about the future either. Yet is it any less plausible to argue that there are conditions under which, e.g., the more critical infrastructures are reliable, the more willing citizens are to pay taxes, where the added tax revenue can pay for a better legal system, and a better legal system helps keep the peace and infrastructures reliable?

If the challenge is to identify specifics—that more granular focus on real-time operations of societal institutions within a regional context now that we are in the Anthropocene—it pays to study those whose current jobs are to do just that.

Five groups who seek to do more than cope with the real-time Anthropocene are identified and sketched below.

1. One group is found in the control rooms and surrounding support staff of large critical infrastructures--the villains of the piece in many environmental crisis narratives. Yet these infrastructures, particularly water and energy, are based in ecosystem processes and services and many operate under the dual mandate of maintaining service reliability while at the same time safeguarding, if not actually restoring, associated ecosystems.

The more I studied control room operators, the more I learned they are far from the environment’s enemy. Turn to three neglected storylines based on the really-existing practices of reliability professionals in highly complex socio-technical systems:

Practice 1: Bring ecologists, biologists & renewable energy specialists directly onto the floor of the infrastructure control rooms. This is already being done, but not to the extent now possible. Why? Because if environmental specialists cannot now reliably advise on real-time infrastructure operations (whose services, like large water supplies, are founded upon ecosystem processes), why would we believe that those promising, say, a Sustainable 2035 will know how to do so before 2035, when the real-time details elude them now? If in the real world we are only as reliable as the next case ahead, why would doing well in that next case be any less important than the ad seriatim catastrophes ahead?

Practice 2: Redefine system boundaries. Wetlands have been an iconic ecosystem in ecologists’ stories. Yet wetlands serve as “ecoinfrastructures” in other large-system definitions. Wetlands that moderate the effects of wind and waves on adjacent levee structures are part of the levee system definition, just as the levees provide an ecosystem service by protecting those wetlands in other adverse events.

In a storm, a single stretch of road may become an essential part of repair access for electricity lines as well as the means of access for levee floodfighting crews. In this case, the stretch of roadway becomes part of the emergency response of two infrastructures. A roadway with wildlands on one side and electricity distribution lines on the other can serve as a firebreak in the emergency response to an approaching wildland fire.

From this standpoint, it need not be agricultural versus urban versus environmental. From one perspective, it looks like three separate systems in competition with each other: a forest next to grazing land next to arable fields, no one of which can expand without loss to the other. From a perspective that treats them as subsystems to one ecosystem, the grazing land serves as a firebreak between the forest and arable holdings.

So too the California Delta can be seen not just as its own system but also as a buffer against encroaching urbanization from the east (Sacramento and Stockton) and west (San Francisco Bay Area), much as agriculture in South Florida and the western Netherlands has buffered against urbanization moving into those regions’ “green” areas.

It follows that a key empirical issue is where extra investment would produce the greatest positive impact on the ecosystem and landscape: planting trees and greenscapes in Sacramento or Stockton (urban ecosystems); reducing chemical agriculture on Delta islands (agricultural ecosystem); and/or constructing more wetlands around Delta islands (the environmental ecosystem).

Practice 3: Act on the full implications of the infrastructure control room as a key institutional & organizational formation for ensuring the high reliability mandate of improved ecosystem services and processes. Control rooms in large critical infrastructures are one of the few institutional formations that have evolved over time and across multiple contexts to promote high reliability repetitively in the management of complex socio-technical systems.

The implications are considerable. We keep hearing that global problems must have global solutions. If true, those solutions will never be highly reliable at that scale. There is, for example, no global water infrastructure nor a cadre of its real-time managers in the foreseeable future.

All of which explains why the shift away from global climate change models to regional ones is so significant. (We’ve embarked on doing so in California.) When it comes to collaborating, it is far more plausible to imagine water and energy control rooms coordinating at the regional level than globally.

2. If the key point holds—our models and narratives must become more granular with respect to time and scale for the systems involved—then we also have a way of recasting the debate in ecosystem management and restoration. In so doing, we identify another source of future environmental narratives--and one more fitting with global and regional complexity.

Two ideal types, the carvers and the molders, dominate narratives about ecosystem conservation and restoration. As idealized, carvers see their task as releasing the true ecosystem from the surplusage around it. Chip away overpopulation, chisel off the built environment, get rid of the non-native species and eliminate pollution—only then does the ecosystem as it was meant to be have a chance of being disclosed and sustained. In the carving orientation, the ecosystem manager or restorer assumes the landscape has within it the good form and function created for it as nature, not by us.

The second ideal type comprises ecosystem managers and restorers who see themselves essentially as molders of clay (sometimes literally). They mold the landscape by trying to press onto it contemporary versions of complexities it once had. Here there is no prospect of repristinating nature. Ecosystems have to be designed and maintained, albeit with a complexity that may look little like the pre-disturbance or pre-settlement states. (Indeed, the grievance that ecosystems are continually degraded signals landscapes are moldable.)

–Now comes the important part. Unsurprisingly, really-existing ecosystem managers and restorers have fallen somewhere between these two textbook orientations—they’re ideal types, after all—making do with what’s at hand and with what is possible. What is clearer now, however, is that this good-enough improviser is itself a third ideal type for ecosystem management and restoration.

In effect, a newly credentialed environmental professional starts with the expectation that the “ecosystem” or “risk” or “tradeoffs” are out there to be identified, only to realize in the field that each has to be specified in far more detail (risks with respect to what failure scenario? under what conditions does your solution hold? just what is it a case of that everyone is worrying about?). The environmental professional gradually recognizes that his or her challenges arise because what is out there depends on how “it” can be defined or managed or improvised in the first place by really-existing human beings in the really-existing organizations and systems they find themselves in.

Improvisation for its part has its own idealized and practical benchmarks and practices. You see this, most prominently, where cities are discussed as “urban ecosystems.” Cities are highly differentiated systems with their own improvised sets of species and processes that have in some cases considerably more biodiversity than commonly supposed.

From this perspective, not only will there be multiple benchmarks (actual improvisation inevitably falls short of ideal improvisation), but the scenarios of success or failure (more accurately, effectiveness) will also be with respect to different real-time uncertainties than those that perplex carver and molder. We should expect that from this crucible of granularity will come new, more case-specific environmental narratives.

–What might the case-specific narratives look like and why would they matter? The widely identified pollution in China has been attributed in significant part to its coal-powered electricity plants and other hazardous facilities. That may be true as far as it goes, but here the point needs to be pushed further.

I, for one, want to know more about the real-time conditions under which middle-level operators and managers in China are operating these large-scale infrastructures. Are the reliability professionals not there, or are they there but operating under ever more prolonged “just-for-now” conditions, waiting for more options and better practices? We need to hear from Chinese scholars researching regional high reliability infrastructures (including China’s massively significant high-speed rail system).

3. Environmental policymakers and academics have always been a heterogeneous group, and they too are a source of new environmental narratives. One example will have to suffice, this one returning us to the importance of the region as the unit of environmental analysis and action.

An influential policy and management arena in the U.S. and abroad revolves around “environmental governance.” Here I focus on an early formulation from the research literature. Delmas and Young (2009) present a simplified schematic for understanding environmental governance in terms of multi-level interactions (local, regional, national, international) among three main “actors” (public sector, private sector, and civil society).

Delmas and Young plot some interventions in Figure 1, drawing from the case studies and associated literature reviews in their edited volume, Governance for the Environment: New Perspectives. For our purposes, note the environmental arenas where multiple spheres overlap, particularly those related to what has been called eco-labelling, placed at the center of Figure 1 (the shared area of the three intersecting sectors).

One chapter in the volume (Auld et al. 2009) gives considerable attention to eco-labelling interventions in terms of third-party certification schemes that ensure goods and services are sustainably sourced. For example, we have programs that certify the produce is organically grown, the coffee is fair-trade, and the timber comes from sustainably managed forests. Such certification programs typically work on two fronts: encouraging consumers to buy certified products while discouraging them from purchasing non-certified products or services.

Now the new environmental narrative: Recast the role of eco-labelling. A major, persisting concern in the California Delta is the reliability and safety of the levee (dike) system protecting island agricultural activities there.

Imagine a third-party program (i.e., some organization different from the vested interests of the US Army Corps of Engineers, California Department of Water Resources, and Delta-based reclamation districts) that certified whether or not any given Delta agricultural land (broadly writ to include livestock, aquaculture and non-traditional crops) was protected by levees that met a standard of high reliability in design and maintenance. Imagine consumers would be encouraged to buy “levee-certified” goods and services and discouraged from buying those that were not so certified. Imagine, in short, the same infrastructure element—the levee—but now having a different function than “keeping water out” only.

The wider buying public in California and beyond would be encouraged to purchase only those goods and services from adjacent county entities that had supported levee certification in and around the Delta water intake for the county (or with respect to any county in similar circumstances). In like fashion, the wider buying public would be discouraged from purchasing from those entities whose goods had been transported on the deepwater shipping channels passing through the Delta to Sacramento and Stockton, if those firms did not support levee improvements up to third-party certification standards along those shipping channels. In parallel, the wider buying public would be encouraged to buy agricultural products only from those Delta islands that had been levee certified and discouraged from buying from those that had not.

4. The fourth group from whom to expect new environmental narratives (not just regional but “big picture” ones) are practicing ecologists and environmentalists themselves. This group is more differentiated than it is given credit for in overarching terms such as “movement” and “discipline.” Indeed, that differentiation has been there from the beginning, and therein lies its own under-acknowledged environmental narrative.

The term “ecosystem” comes to us through A.G. Tansley’s 1935 article, “The Use and Abuse of Vegetational Concepts and Terms.” He has been criticized for his role in colonial British ecology, but here Tansley is salient for two different reasons.

–First, ecosystems for Tansley make no sense without taking humans and their interactions with the landscape into account. “We cannot confine ourselves to the so-called ‘natural’ entities and ignore the processes and expressions of vegetation now so abundantly provided us by the activities of man. Such a course is not scientifically sound, because scientific analysis must penetrate beneath the forms of the ‘natural’ entities, and it is not practically useful because ecology must be applied to conditions brought about by human activity,” he wrote.

This might seem to be pushing at an open door today, but Tansley deployed a discourse quite different from that of his contemporaries, the U.S. ecologists, with their just-so stories about “climax communities” evolving on their own—if and only if devoid of human beings mucking things up. Two commentators on Tansley’s work (Laura Cameron and John Forrester, 2017) argue that his “principal contributions were, in contradistinction to American ecology, to emphasize the systemic interrelations of human activity and botanical phenomena—he sees no real difference between those ecosystems which are natural and those which are ‘anthropogenic’ (nature ‘produced by man’, as he glossed in 1923).” “A well-defined localized human community is the kernel of an ecosystem,” Tansley reiterated in an address to the British Ecological Society in 1939.

–But Tansley is important to us for another reason. Not only was he a founder of the British Ecological Society (the precursor to ecological societies in many countries) and the Nature Conservancy, he was also a well-known and respected member of the British Psycho-Analytic Society, having been analyzed by Freud for nine months in 1922 and 1924. For Tansley, humans and their desires (“energy”) were and are never far away from ecosystems in the profoundest way.

Whatever the reader thinks of Tansley’s dated terminology, we see many ecologists today still take human desires as nothing but The Enemy. Such, I’d like to think, would have appalled a Tansley who took desire and ecosystem to be inseparable. He’d be the last person, I suspect, surprised or shocked that large critical infrastructures, created to satisfy desires and wants, have environmental impacts, bad and good.

–Where are the new environmental narratives in this? A major one, I believe, is sketched below.

The wider stream of thinking in ecology—”a well-defined localized human community is the kernel of an ecosystem”—suggests that human-dominated landscapes may well provide the only experience most living humans have of anything like the “presettlement template.” The antinomy (settlement v. presettlement) is no antinomy in these cases: The human world provides some of the most enduring examples of “repristinated nature” we have ever had.

How so? The poet and essayist Jane Hirshfield writes:

Hiddenness. . .is a sheltering enclosure – though one we stand sometimes outside of, at other times within. One of its homes is the Ryoan-ji rock garden in Kyoto: wherever in it a person stands, one of the fifteen rocks cannot be seen. The garden reminds us that something unknowable is always present in life, just beyond what can be perceived or comprehended – yet as real as any other rock amid the raked gravel.

What is being described is the inability of the observer to hold a stable focus on what is seen and unseen. For Hirshfield it is a rock garden in a city. For his part, the art critic John Berger writes of another landscape: “The scale is. . .of a kind which offers no possibility of any focal centre. This means that it does not lend itself to being looked at. That is, there is no place to look at from. It surrounds you but never faces you.”

Imagine here herders moving onto an empty, horizon-less plain; or night watchers looking up into the open, depthless annihilation beyond. But look they do and that look is there as much as what is seen without scale or end. “It should be like a river gorge with swans flying overhead; the river has no desire to retain the swan, yet the swan’s passage is traced out by its shadow without any omission,” or so Joseph Needham translates the Song scholar, Lin Jing-xi.

That very same experience–the emptied focus and scale when looking at the not-all-there—can be felt in different contexts of the human world. To assume or act otherwise is, I believe, to deny the narrative that something like “nature’s sublime” can be and is experienced in everyday life, and has always been.

5. Which leads to the fifth group to provide new environmental narratives—you, the reader. You get to cast and recast what to make of the exigencies and contingencies that befall you—and in radical ways when it comes to "environment."

Start with what many would consider unexceptional, a point made by Adam Phillips, the psychoanalyst and essayist: “Given the obvious contingency of much of our lives—we do not in any meaningful sense intend or choose our birth, our parents, our bodies, our language, our culture, our thoughts, our dreams…and so on—it might be worth considering not only our relationships to ourselves and our relationships to objects, but (as the third of the pair, so to speak) our relationship to accidents”.

Fair enough, were it not for Agnes Heller, the philosopher, concluding exactly the opposite and because of the same contingencies:

In choosing themselves, men and women choose exactly what they are, as they are. They choose their best talents as much as their physical handicaps, they choose their parents, their childhood, their country, their historical age. They choose their poverty if they happen to be born poor, and their riches if they happen to be born rich. They choose their accidental features. That which they are by accident they become by choice.

Putting it that radically, Heller stirs us to ask in what sense her point is also true. But in positioning yourself somewhere between Phillips and Heller, you too become expert in recasting contingencies, exigencies and their environments. What recast narratives are you handing down, now (the ancient Greeks call this paradidomena)? Or more bluntly: What’s next in these End Times is up to you.

Principal sources. This entry consolidates, edits and updates earlier blog entries: “Next-ism,” “To-do’s in the Anthropocene,” “Radical uncertainty and new environmental narratives,” “Eco-labelling recasted,” and “Nature.”

Yes, “risk and uncertainty” are socially constructed and historicized. Now what? The missing corollary and 3 examples

–We know the subjective is never far away from risk and uncertainty. Risk and uncertainty can also be historicized as formations of their time and place when you take the longer view. (Not only is your risk not mine; 19th century uncertainty looks very different from 21st century versions.)

That said, acknowledging the historical, social, cultural, economic…basis of our knowledge about risk and uncertainty has rarely gone far enough when it comes to policy and management.

–For, there is the corollary: Humans can only know—really know—that which they create. (Such is the insight of St. Augustine for philosophy, Giambattista Vico for history, Roy Bhaskar for science….) Humans know mathematics in a way they cannot know the universe, because the former is a thoroughly human creation about which more and more can be known. Its uncertainties are socially constructed in a way that, for lack of a better word, “unknowledge” about the universe is not.

This corollary means that accepting “risk and uncertainty are socially constructed concepts easily historicized” is not enough; the acceptance itself needs to be pushed further.

What is missing are the details of the connections among risk, uncertainty and associated terms that we make, and the meanings we draw out of these connections, often under conditions of surprise. (Lord Curzon was surprised watching soldiers bathing during WWI: ‘I never knew the working classes had such white skins!’) Our creations are always surprising us, and we seek to explain these occurrences by means of analogies that extend the range of what we call knowledge.

In case it needs saying, terms like “system,” “failure scenario,” “with respect to” and more discussed in the other blog entries on risk and uncertainty are also rooted in time and place. But to stop there, again, stops short of the wider point: That which we have created by way of risk and uncertainty—and continue to create—has become very complex. In fact: so complex as to continually provoke more thinking and more complexity-as-knowledge.

–Here by way of example are three such complexities about risk and uncertainty in policy and management that arise solely because of the connections made and meanings given by way of analogies:

• The focus on present risk and uncertainty in critical infrastructure—at the component or system levels—in an odd way volatilizes the infrastructure’s longer-term. It’s as if the preoccupation with current risk and uncertainty hammers the longer-term into current notions of risk and uncertainty as well.

This tenderizing of today’s meat we call “the longer-term” is not altogether unreasonable, of course: Longer terms are described as full of uncertainty and risk. But that is not the only way we frame “longer-terms.” Think Ben Franklin: “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do.” Or: Intelligent people find it no problem to use their minds to subvert reason. In either view (two among many), it’s our Bayesian brains, not their “uncertainties,” that are the problem.

• Second, the methodological computation of “risk numbers” or “risk scores,” say, by asset category or for different components transforms infrastructure risk and uncertainty into a singular event—if you will, a single display of what is an ensemble of the heterogeneous and contingent (qualitative and quantitative information, by these and not those subject matter experts…), here but not elsewhere, and at this point in time and not another.

Put this way, risk rankings look like performance pieces in the arts. Over the course of the day, the chief risk manager serves as the curator of installation artifacts called risk scores for this or that part of the critical infrastructure—for facilities here, for pipelines there, for compressors all over the place. Each ranking and each score is a one-off, akin to what Surrealists call frottage, producing impressions on a piece of paper by rubbing with a pencil or crayon—think: rubbing with a methodology—over an always uneven surface. (A toy computation after this list illustrates the collapse into a single score.)

(Or if you don’t like the art analogy, think of risk scores as akin to novel financial instruments, such as customized CDOs, each a one-off, so complex and heterogeneous they can’t be compared except in highly nominalized terms like price, i.e., only in terms of a “score”.)

• Third, each custom-made risk score or a ranking of scores ends up as an odd kind of abstraction. Risk is abstracted from real-time operations into risks associated with specific assets, components and processes that by definition do NOT add up to the infrastructure as a system operated and managed in real time. Once outside the precincts of real time and the system as system, the temporal and spatial are abstractly foreshortened and elongated—the system is spatially segmented into components and extended elastically as the focus of attention—for almost everything except that system as a system.

Since attention is a scarce resource, prolonged attention given to abstracted risks at the asset, component and process level diverts resources away from systemwide reliability and the distinct risks faced in real-time management there. This matters because that real-time management for systemwide reliability in its centralized control room is about the only place in the infrastructure we observed that didn’t lose sight of the infrastructure operations as an articulated whole.
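
To make the second of the three complexities concrete, here is a toy computation of my own devising (all names, scales and weights are invented; this is no utility’s actual methodology) showing how an ensemble of heterogeneous, contingent judgments gets collapsed into one comparable number:

```python
# A toy sketch of how a risk "score" collapses heterogeneous, contingent
# judgments into a single number. All inputs are invented for illustration.
from dataclasses import dataclass

@dataclass
class AssetRisk:
    name: str
    likelihood: int      # 1-5 ordinal judgment, by one panel of experts
    consequence: int     # 1-5 ordinal judgment, possibly by a different panel
    data_quality: float  # 0-1 weight for how qualitative/contingent the inputs were

def risk_score(a: AssetRisk) -> float:
    # The multiplication is the "rubbing with a methodology": one pass of a
    # formula over an uneven surface of judgments, yielding a single number.
    return a.likelihood * a.consequence * a.data_quality

portfolio = [
    AssetRisk("facility_12",          likelihood=4, consequence=3, data_quality=0.6),
    AssetRisk("pipeline_segment_7",   likelihood=2, consequence=5, data_quality=0.9),
    AssetRisk("compressor_station_2", likelihood=5, consequence=2, data_quality=0.4),
]

# The ranking reads cleanly precisely because everything that made each
# judgment a one-off (who scored it, when, on what evidence) has dropped out.
for a in sorted(portfolio, key=risk_score, reverse=True):
    print(f"{a.name}: score {risk_score(a):.1f}")
```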

A few things I learned from the Financial Times about regulation

Post-crisis banking re-regulation adds significant costs to the economy and thus reduces growth, while the pre-crisis light-touch regulation undermined the very financial infrastructure necessary for economic growth.

What were indicators of positive economic growth under light-touch regulation—rapid uptake in home mortgages—were indicators of regulatory failure later on. Indeed, mortgages were a relatively safe asset for banks to own, until they were the source of unimaginable losses.

Overregulation is nowhere better illustrated than in comparing Dodd-Frank’s nearly 2,000 pages of legislation to the fewer than 20 pages of the Depression-era Glass-Steagall Act—but under no circumstances are our regulators to repeat the 1930s! Whatever, those who lobby for simplifying regulation end up making it more complex.

It’s a bad thing for regulation to try to squeeze too much risk and complexity out of banking, especially when fresh risk reduction—less leverage, more capital reserves—is itself too risky a strategy. Regulation discourages risk taking and only with risk taking do we have innovation, except when too much innovation and risk taking are encouraged as in the deregulated finance sector up to the 2008 crisis.

New financial instruments, particularly derivatives, flowed to where they were not regulated, but regulated financial instruments always increase opportunities for perverse arbitrage and loopholes. If the last financial crisis showed anything it’s that we need systemic risk regulation and macro-prudential policies, but, to be sure, it’s individuals, like Alan Greenspan, who were to blame for things going terribly wrong.

Regulators must always have the best information, even when those regulated—the banks and investment firms—haven’t a clue as to their current real-time positions. Regulators will never have the capacity to know the real-time positions of actual firms, except in those cases where firms, like Lehman, insisted regulators did have the real-time information.

Global business and supply chains are great, except when the firms are too big to fail. Country defaults are horrible, except where they work through being regulated de jure as in Argentina or de facto as in Mexico.

Global markets are a permanent fact of life, but we must never suppose that the drive to regulate them for the better is just as permanent. Markets are best at price discovery, except where market efficiencies are realized because of lack of transparent discovery, as in unregulated dark pools.

In sum, what I’ve learned from the Financial Times is that capitalism is in crisis because of the shambolic understanding of regulation.

More on over-complexification

Two drivers of not-knowing, inexperience, and difficulty are often conflated—information overload and cognitive undercomprehension—and the conflation heightens the sense of complexity in policy and management.

–Think of information overload as the “right” information lost in the glut of information before us. Cognitive undercomprehension, in contrast, is our cognitive inability to recognize anything like “the right information.”

Overload means we would be high-performing analysts and managers if only we were to tease out the right information from all the noise obscuring it; undercomprehension means we are held to such high-performing standards we couldn’t possibly know the right information, even if it were visible before our very eyes. “I could do my job if only I had the right information” is not “No one could do the job I’m tasked with, whatever the information available.”

Over-complexification comes into play when remedies for one produce the other or complicate both.

–For example, making sense of the masses of Big Data requires algorithms no human beings on their own can comprehend. To that degree, what was information overload ceases to be that by triggering cognitive undercomprehension.

On the other hand, reducing high information overload can be associated with increasing cognitive comprehensibility, but here the costs of doing so may be too high. You reduce the complexity of your conceptual model in order to make it comprehensible, but in the process you’ve increased the chances your model is biased, i.e., that it differs from the correct one. “There are no 99 per cent probabilities in the real world. Very high and very low probabilities are artifices of models, and the probability that any model perfectly describes the world is much less than one,” underscores the UK economist John Kay.
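
Kay’s point admits a minimal numerical sketch (invented data, ordinary least-squares curve fitting): simplify the model for comprehensibility and you may buy bias, that is, a systematic gap between the model and the process actually generating the data.

```python
# A minimal sketch, on invented data, of comprehensibility bought at the
# price of bias: a straight line is easier to grasp than a cubic, but it
# systematically misses the curvature actually generating the data.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 200)
y = x**3 + rng.normal(scale=0.5, size=x.size)  # the "world" has real curvature

simple = np.polynomial.Polynomial.fit(x, y, deg=1)  # comprehensible model
richer = np.polynomial.Polynomial.fit(x, y, deg=3)  # closer to the process

for label, model in [("degree-1 (simple)", simple), ("degree-3 (richer)", richer)]:
    mse = np.mean((y - model(x)) ** 2)
    print(f"{label}: mean squared residual {mse:.2f}")
```

The degree-1 residuals stay large no matter how much data is added; that persistent gap is the bias, while the model itself remains, in Kay’s terms, an artifice.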

–There is also the problem of society demanding both information overload and cognitive undercomprehension at the same time, however unintentionally. A common enough observation is that when the task is to surveil, as it is for our regulators of record, they will always want more information, no matter how much they already have. In this way, the regulators suffer the double whammy of information overload and cognitive undercomprehension: They have ever more information for use but not the cognitive capacity and skill to make use of it.

–Many upshots follow on information overload and cognitive undercomprehension occurring together. Three deserve highlighting here.

First, at or beyond the limits of cognition, not only are prediction and forecasting difficult, so too is identifying counterfactual conditions, not least of which is what would happen if overload and undercomprehension were assuaged. We become very much like amateurs in all this.

Second, arguments presented to us as policy relevant solely because of their diamond-sharp clarity rarely get beyond the joke stage. The usual criticism—policy and politics have become an endless stream of stark media images competing for scarce attention—is true, but even so: The joke is that the stream of photo-clarity means more and more murk, both as information overload and cognitive undercomprehension.

The third implication follows on from the preceding two: Don’t give in to the temptation to simplify, even here as amateurs for a topic this complex.

When an experienced county emergency manager told a group of us, “Floods are complex events, they have many variables,” it wasn’t helpful to tell him, as some did, he’d be much better off first simplifying those events for the purpose of modeling and simulation. To assume he needed to understand the flooding better ignored that he was already managing the complexity there.

A much more effective starting point, it seemed to me then as now, would be to identify professionals who are themselves already managers of complex risks and uncertainties, such as this county emergency manager, and then ask how we can help them, if at all.

Related blog entries on over-complexifying: “Even if what you say is true as far as it goes, it doesn’t go far enough…,” and “Public Policy Analysis, c.1970 – c.2020: In Memoriam?”