Triangulating complexity for policy and management

I

–A key lesson to take away from a system being highly complex—i.e., having a high number of elements, different functions for those elements, and interconnections between elements and functions—is the centrality of triangulation in decisionmaking.

You need as many different approaches to analysis and strategy as you can muster—qualitative, quantitative, reductionistic, holistic, positivist, post-positivist, and more (computational, bargaining, judgment, inspiration. . .)—in the hope that you converge from diametrically different directions on a common factor to consider. Such triangulation is most successful when “whatever the direction from which you look at this issue, you get to this same point. . .” Familiar examples are the importance in development of women and of the middle class(es).

–Triangulation here is the use of multiple methods, databases, theories, disciplines and/or analysts to converge on what to do about the complex issue. The goal is for analysts to increase their confidence–and that of their policy audiences–that no matter what position they take, they are led to the same problem definition, alternative, recommendation, or other desideratum.

In doing so, the analyst accommodates unexpected changes in positions later on. If your analysis leads you to the same conclusion regardless of initial positions, however divergent, then the fact that you must adjust a position later on matters less, because you have sought to take utterly different views into account from the get-go.

–Everyone triangulates, ranging from the everyday cross-checking of sources to more formal use of varied methods, strategies and theories for convergence on a shared point of departure or conclusion. A popular form of triangulation is the use of multiple—the “tri” doesn’t mean three only—methods. Methodological triangulation figures prominently in applied fields (though not all!), e.g., practicing policy analysis, marketing, investigative journalism, and participatory rural appraisal, to name several.

–Triangulation is thought to be especially helpful in identifying and compensating for biases and limitations in any single approach. Obtaining a second (and third. . .) opinion, soliciting the input of the range of stakeholders, or ensuring you interview key informants with divergent backgrounds are three common examples. Detecting bias is fundamental, because reducing, correcting, and adjusting for bias is one thing analysts—better yet, human beings—can actually do.

Triangulating on a common point is in no way guaranteed a priori, just as canceling out biases—be they cognitive, statistical, cultural, other—cannot be assumed to have occurred as a result of triangulation. (Anyway, it remains an open question which biases are most important—material interests, cultural beliefs or built-in cognitive biases, among many other candidates.) To the extent that bias remains an open question for the case at hand, it must not be assumed that increasing one’s confidence automatically increases certainty, reduces complexity, or gets one closer to the truth of the matter.

–That said, failure to triangulate also provides useful information. When findings do not converge across multiple orthogonal metrics or measures (populations, landscapes, times and scales…), the search by the analysts becomes one of identifying specific, localized or idiographic factors at work. What you are studying may in reality be non-generalizable—that is, it may be a case in its own right—and failing to triangulate is one way to help confirm that.
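To fix ideas, here is a minimal sketch of what such convergence checking might look like. It is purely illustrative and assumes (my assumption, not anything above) that several independent methods each yield a numeric estimate of the same quantity:

```python
import statistics

def triangulate(estimates: dict[str, float], tolerance: float) -> str:
    """Compare estimates of one quantity produced by several independent methods.

    estimates: method name -> that method's estimate
    tolerance: maximum spread, relative to the median, still counted as convergence
    """
    values = list(estimates.values())
    center = statistics.median(values)
    spread = max(values) - min(values)
    if spread <= tolerance * abs(center):
        # Whatever direction you approach the issue from, you arrive at the same point.
        return f"converged near {center:.2f}: confidence increased (not certainty)"
    # Non-convergence is also information: look for localized, case-specific factors.
    return "diverged: search for specific, localized or idiographic factors at work"

# Illustrative use with three "orthogonal" methods:
print(triangulate({"survey": 41.0, "model": 43.5, "expert panel": 42.0}, tolerance=0.10))
```

Note that the convergent message deliberately stops at increased confidence: as argued above, convergence does not by itself reduce complexity or bring one closer to the truth of the matter.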

Triangulation is time-consuming and expensive. Limitations on time, money and other resources make it infeasible to employ multiple interviewers, multiple methods, and/or multiple databases as much as one would like. A second problem is that of inexpungible bias. No matter how many cross-checking questions are in the survey, they cannot correct for the fact that the interviewer is white, male, middle-class and asks questions in English only. On the other hand, while triangulation is time-consuming and expensive, taking positions in its absence can prove costly too.

II

–Much of the above should not be news. What is new, I think, is the importance of mixed methods and approaches for better homing in on bias.

To return to our starting point: The approaches in triangulation are chosen because they are, in a formal sense, orthogonal. The aim is not to select the “best” from each approach and then combine these elements into a composite that you think better fits or explains the case at hand.

Why? Because the arguments, policies and narratives for complex policy and management already come to us as composites. Current issue understandings have been overwritten, obscured, effaced and reassembled over time by myriad interventions. To my mind, a great virtue of triangulation is to make their “composite/palimpsest” nature clearer from the outset.

–Not only is triangulation not about assembling a “seamless” explanation from parts of the pre-existing frameworks or methods you value; it in fact asks you to undertake a kind of analysis that runs against the grain of assuming coherence and seamlessness. To triangulate is to ask what, if anything, has persisted or survived in the multiple interpretations and reinterpretations that the issue has undergone over time up to the point of analysis. As we just saw, finding fewer and fewer resemblances across the family portraits in a complex policy is a very important prompt for more and more case-by-case analysis.

–There is irony in this. It turns out that another way to build up the policy audiences’ confidence that you do know what you’re talking about—at least, compared to them—is to undertake a policy analysis as moments of further questioning and reflection: Just what is the problem or problems, just what constitutes the evidence we need to consider, just what are criteria to evaluate options, and so on. (Just what is “development,” if not case by case?) Increasing the sense of social construction of these matters turns out to lay the basis for triangulating as well—though it should go without saying there are no guarantees in this.

The New Normal is managing not just negative setbacks but positive ones as well

Setback management: handling a sudden or unanticipated check on organizational or institutional behavior that would likely lead to a worse mess unless managed. This means trying to pull the good mess out of one that could go bad by treating the setback as: a design probe, a test-bed for something better, an interruption from which the organization learns, or an obstacle to overcome so that the organization moves to a new stage of its life cycle.

–Setbacks—unanticipated, unwanted, and often sudden interruptions and checks on moving forward—are typically treated as negative and fairly common in complex policymaking, implementation, and operations.

Less discussed are positive setbacks. Best known is when a complex organization transitions from one stage of a life cycle to another by overcoming the obstacles characteristic of the stage in which the organization finds itself. Moving from implementation to management and operations is one such transition.

Other positive setbacks serve as a test bed for developing better practices, whatever the stage in which the complex organization finds itself. Some setbacks are better thought of as design probes for whether that organization is on the “right track,” or if not, what track it could/should be on. In yet other circumstances, setbacks serve to point managers in the direction of things about which they had been unaware but which still matter.

To summarize, setbacks can be positive in terms of both their degree of importance and the time horizon over which they are important.

–Not only can setbacks be positive in different ways, but to characterize them as positive means an organization or its managers are able to establish, in part, the expectations with respect to the setback events.

This means that changing expectations is key to managing setbacks. If you change present expectations about setbacks, you change the future of setbacks and their consequences. Easier said than done, however.

–There is more. While organizational or institutional setbacks unsettle what had been settled knowledge, what renders them positive is when they do so in ways that do not undermine the assumption of organizational continuity.

By way of example, did the 2008 financial crisis serve as a timely interruption to remind us how central regulators are to the continuity of the financial and credit systems? Did the crisis end up as a much-needed probe of how well the financial and credit institutions are keeping their sectors on track and under mandate? Was the 2008 crisis a test bed for more resilient or anticipatory strategies in credit lending and investing? Did the crisis in effect serve as an obstacle, whose surmounting has been necessary to promote the operational redesign of the financial and credit sectors in more reliable ways?

Note that the obviously mixed answers to any such questions do not necessarily reflect negative setbacks.

–With that as background, I now suggest that what is often called “the new normal” is much better described as a setback management that embraces the positive setbacks just mentioned.

If so, it seems to me three other “Normals” stand in the way of accepting a “new normal” as the management of setbacks, negative and positive:

• There is Normal Accidents Theory, which insists major accidents and system failures are an inevitable part of the tight coupling and complex interactivity of critical infrastructures. This, however, assumes that setbacks cannot be managed, that setbacks function primarily as precursors to disasters, and that operational redesigns cannot compensate for the effects of toxic design and technology.

• There is what the development scholar, Robert Chambers, calls “Normal Professionalism,” which points to a constellation of blind-spots to inter-unit cooperation. Blind-spots, however, are not just a source of weakness, but also of strength that comes with recognizing systemwide patterns and formulating localized contingency scenarios.

• There is also what sociologist Diane Vaughan identified as “Normalization of Deviance” in critical infrastructures. This social psychological phenomenon occurs, for our purposes, when anomalies that deviate from high reliability performance expectations are not interpreted as warning signs but become acceptable, routine and taken-for-granted aspects of performance for decisionmaking.

But Setbacks Are Normal; they are going on all the time in critical service provision and have to be operationally worked around and upon. Setbacks are ways we manage to take the world seriously when it comes to critical services that we cannot lose in real time.

Principal source: Part of this is an updated and revised section from my (2009), “Preventing Transboundary Crises: The Management and Regulation of Setbacks.” Review of Policy Research 26(4): 457-471.

Global Climate Sprawl

What should not have been so surprising: 
my error after error, recognized when appearing on the faces of others. 
                          Jane Hirshfield (from “I wanted to be surprised”)

–In her This Changes Everything: Capitalism vs. The Climate, Naomi Klein writes about the “painful reality behind the environmental movement’s catastrophic failure to effectively battle the economic interests behind our soaring [greenhouse] emissions”. She is not alone. Professor David Campbell writes: “The trillions which the developed countries have spent and plan to spend on mitigation have been and will be wasted. . .The failure of the collective brain of environmentalism to look this in the face will erode the goodwill which is its principal resource when its role in causing the immense waste becomes indisputable.”

To argue that the environmental movement—environmentalism writ large—has failed is a significant proposition, even if true only as far as it goes. After all, it was the environmental movement that helped articulate the crisis narratives for global climate change (GCC). To label this “failure” is to argue that climate change is occurring because the recommendations of the environmental movement have not been implemented.

This “conclusion” has the same ring of certainty that environmental movement recommendations have had. “The climate for the next several decades is set in concrete. . . .[T]here is nothing now to prevent those disastrous events,” an expert already told us a decade ago. Such certainty takes its force from being both determinism—“set in concrete”—and fatalism—disaster is unavoidable—at the same time.

–Surprise, in other words, has been exiled to another planet. This is not new. Go back to the 1990s to see “no surprise clauses” in habitat conservation plans. Here binding restrictions were sought that would leave the landowner or developer immune from further restrictions, should a threatened or endangered species be unexpectedly found on the property. But the unexpected is to be expected, notwithstanding no-surprise clauses. Why? Because to behave as if surprise can be eliminated is itself behavior that produces surprise.

–On the more positive side, then, to take such surprise seriously means, at a minimum, acknowledging and protecting those in and around the ecosystems of concern who, in managing already-existing surprises, also manage to improve ecosystem services and functions in the face of GCC.  That such efforts necessarily occur along case-by-case trajectories of fits and starts, some abandoned, others sustained for longer, is also to be acknowledged and understood.

–Which takes us back to that colossal waste of time and effort that Klein and like-minded believers see in the efforts to combat GCC.

“Waste” is ambiguous, though. It’s not just that we often differ over what is “waste.” We can actually agree that the waste associated with GCC has been colossal, but differ over what its epic proportions entail.

By way of illustration, I want to suggest GCC isn’t just a bad mess; it’s a spectacularly, can’t-keep-our-eyes-off-it, awful bad mess, and with implications not fully recognized.

–Let’s agree: GCC and its drivers are remaking a first-class Nature into a world-class garbage truck. But why stop there in our description? Consider what many others have to say about the stunningly profligate human nature involved. You see the sheer excess of it all in Philip Roth’s rant about human nature from American Pastoral:

You get them wrong before you meet them, while you’re anticipating meeting them; you get them wrong while you’re with them; and then you go home to tell somebody else about the meeting and you get them all wrong again. Since the same generally goes for them with you, the whole thing is really a dazzling illusion empty of all perception, an astonishing farce of misperception. And yet. . .It’s getting them wrong that is living, getting them wrong and wrong and wrong and then, on careful reconsideration, getting them wrong again. That’s how we know we’re alive: we’re wrong.

This mind-boggling rush and excess of getting it wrong again and again and again—note too the “so” in the epigraph’s first line from Jane Hirshfield—has been neatly captured by many others. The elder statesman in T.S. Eliot’s eponymous play admits,

The many many mistakes I have made
My whole life through, mistake upon mistake,
The mistaken attempts to correct mistakes
By methods which proved to be equally mistaken.

The missing comma between “many many”—no surprise, right?—says it all, in my view: At the limits of cognition, we cannot pause, with words and thoughts sprawling over each other and piling up against a puzzled unknowability. (That the wildly different Philip Roth and T.S. Eliot are together on this point indicates the very real mess this is.)

That word, sprawl, is like that word, waste: full of yeasty ambiguity. Here is Les Murray’s more magnanimous view from his “The Quality of Sprawl”:

Sprawl is the quality
of the man who cut down his Rolls-Royce
into a farm utility truck, and sprawl
is what the company lacked when it made repeated efforts
to buy the vehicle back and repair its image.
Sprawl is doing your farming by aeroplane, roughly,
or driving a hitchhiker that extra hundred miles home…

This extravagance and profligacy are not solely ornery contrarianism. “[W]aste is another name for generosity of not always being intent on our own advantage,” poet Robert Frost wrote.

To my mind, Global Climate Change is the hot mess—in both senses of the term—now sprawled all over the place and across time. GCC is inextricably, remorselessly part and parcel of “living way too expansively, generously.” If I had my druthers, I’d rename it “GCS”: Global Climate Sprawl.

Error and Safety

–A key virtue of operating within the shared comfort zone of team situation awareness in the infrastructure control room is knowing when it is an error to comply with a regulated task or technical protocol that, in the case at hand, would work against system reliability and safety. Correcting for errors is a key function of high reliability management in real time.

When operators are, however, pushed out of their comfort zone into unstudied conditions (say, by defective technology, policies or regulations), they find themselves unable to perform reliably there. Operators then perform under conditions where the identification of what is or is not “error” defaults, ironically, to whether or not compliance mandated by the regulator of record takes place. “Sticking to procedure” ends where there is no reliability, followed then by “operator error,” which sets into play a perverse cycle.

Ritualized calls arise for foolproof technology, systemwide redesign, policies or regulations to correct for the mistakes. The effort becomes one of trying to macro-design micro-errors away, as if there were no middle domain of reliability professionals in real time. Macro-micro leaps of faith are lethal to systemwide reliability, we have repeatedly seen; they are, however, a permanent feature of calls for more regulation and policy.

–One upshot of the perverse cycle is that it’s a mistake to think all errors are mistakes. What needs to be distinguished is whether the errors/mistakes occur within or outside the control operators’ comfort zone. Tracking and responding to the differences are invaluable.
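A minimal sketch of the distinction follows; the field names and triage rules are entirely my own illustration, not anything from the reliability literature:

```python
from dataclasses import dataclass

@dataclass
class ErrorEvent:
    """One logged error/mistake in control-room operations (illustrative fields)."""
    description: str
    within_comfort_zone: bool  # occurred inside studied, practiced conditions?
    procedure_followed: bool   # was the mandated protocol being complied with?

def triage(event: ErrorEvent) -> str:
    """Route an error by where it occurred, not by compliance alone."""
    if event.within_comfort_zone:
        # Inside the comfort zone, operators can and do correct errors in real time.
        return "correct in real time; track for patterns"
    if event.procedure_followed:
        # Outside the comfort zone, "complied but failed" points at defective
        # technology, policy or regulation rather than at the operator.
        return "escalate: review the procedure or technology, not the operator"
    return "investigate: unstudied conditions with no applicable procedure"
```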

Why? Because many complex infrastructures we study treat uncertainty with respect to different types of errors as useful information. As Paul Schulman puts it, uncertainty isn’t the lack of information; it is itself a kind of information about where the socio-technical system is in real time as a system:

In nuclear power plants, commercial aviation (including air traffic control systems), as well as other critical infrastructures, a distinctive form of error management has been a framework for high reliability. For these organizations the inverse of knowledge is not ignorance or uncertainty – it’s error. They identify and categorize uncertainty in relation to specific errors in decisions and actions they seek to avoid in order to preclude outcomes that are surrounded by not only organizational but societal dread.

–Yet for all these nuances, “error” continues to be treated as Bad in much of the literature on Safety Culture.

An analogy helps. The Roman Catholic Church had the early problem of how to treat Islam. It couldn’t be paganism, because Islam also held there to be one God and indeed shared notables, like Jesus and Noah. To make things fit, the Holy See declared Islam was not paganism but a Christian heresy, along the lines of Arianism or Socinianism, which questioned the Trinity or Jesus’ divinity.

So too today for that one great religion, Safety, with its one great heresy, “Operator Error.” Yea, though we all be fallible, operator error is bad, bad, bad. Even when operators don’t see it so; even when operators correct for forced errors all the time; even when they manage for error in their comfort zone. In other words, when really-existing error is not defined by dogma, matters become more usefully complex. People make mistakes and, yes, you can’t unring the bell once rung, but it’s always been more complex than that.

Principal source.

P.R. Schulman (undated). “Reliability, uncertainty and the management of error: New perspectives in the Covid-19 Era.” Unpublished manuscript.

Escaping from Hell Is a Right!

(You need at least 30 minutes for this entry.)

The first part of Frederic Rzewski’s (pron. JEV-skee) Coming Together sets to music text from a letter of Sam Melville, anti-war protester and convicted bomber, who was incarcerated at Attica. He was shot and killed in the 1971 Attica prison uprising.


Part Two, ‘Attica,’ uses the reply of another uprising leader, Richard X. Clark, just after his release. Asked how it felt leaving Attica behind, he said: “Attica is in front of me.” For me, Rzewski draws hope to and from those words.

Principal source

Grégory Salle (2018). “‘Escaping from Hell Is a Right!’: The Case of France’s ‘Q.H.S.’ (1975–1982).” Chapter 7 in: Prison Breaks: Toward a Sociology of Escape (eds. T.M. Martin and G. Chantraine), Springer eBook (https://link.springer.com/book/10.1007/978-3-319-64358-8).

Makes the gorge rise

Writes the economist, Paul Collier:

Overwhelmingly, the answer to why poor societies are poor is that they lack the organizations of modern capitalism. Capitalism has created organizations that harness the productivity potential of scale and specialization without triggering the alienation predicted by Marx. Marx thought that large-scale production inevitably separated enjoyment from labour, and that specialization “chained [man] down to a little fragment of the whole”. Ironically, the consequences of alienation were most devastatingly revealed by industrial socialism. Modern firms maintain motivation by a judicious combination of incentives and a sense of purpose: workers internalize the objectives of the firm. From the entrepreneur to the car park attendant, people get job satisfaction from what they do, not just from what they earn. Being subject to the discipline of the market, firms that fail to create such work environments go bankrupt.

Oh, and then he adds, “The Achilles heel of modern capitalism is the financial sector.”

Finding the good mess in supply and demand

–I admit to a fantasy about supply and demand curves, provoked by all those graphics like the ones culled from Wikipedia: a downward-sloping demand curve crossing an upward-sloping supply curve.

My fantasy is this: Imagine both demand and supply shifting downwards, with equilibrium price P* and quantity Q* shifting down with them. At some point, the two curves intersect the horizontal axis, producing three quantities, Qs < Q* < Qd.

Now, look again at the shifted curves: Qs is the quantity supplied even when price is zero. The equilibrium price, P*, becomes the price needed to move the quantity supplied from Qs to Q*. In this way, a portion of the quantity demanded is provided at no price—because of, say, intrinsic motivation, or because suppliers are confused, or because everyone was just lucky. I’d like to think that is the good mess somewhere in every equilibrium analysis.
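A minimal worked version of the fantasy, assuming linear curves (the letters and functional forms are my illustration only): let demand be Qd(P) = a − bP and supply be Qs(P) = c + dP, with a, b, d > 0 and, crucially, c > 0, so that some quantity is supplied even at zero price. Setting quantity demanded equal to quantity supplied gives

  P* = (a − c)/(b + d),   Q* = c + d·P* = a − b·P*.

At P = 0 the curves cut the quantity axis at Qs = c and Qd = a. Whenever a > c (that is, whenever quantity demanded exceeds quantity supplied at zero price), P* is positive and therefore Qs < Q* < Qd: the three quantities of the fantasy, with the gap between Qs and Q* being exactly what the equilibrium price pays for.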

Principal source.

Frey, B.S. (1997). Not Just for the Money: An Economic Theory of Personal Motivation. Edward Elgar Publishing: Cheltenham, UK.

Better fastthinking in complex times

–The ability to think fast on one’s feet has always been at a premium. Indeed, fastthinking has been the order of the day for those policy types who operate under the yin and yang of crisis management and leaving important decisions to the last minute.

Clearly, fastthinking is not conventional trial and error learning, as the conditions for such learning––low environmental uncertainty, stability in goals and objectives, and persisting institutional memory––are missing from much of the relevant policy world. Nor is it the message-in-the-bottle (Flaschenpost) approach, where you do your research, throw it upon the still waters of hard drives, and hope that someone, somewhere, sometime, retrieves it and treats your message seriously.

Fastthinking is just-in-time thinking to match just-in-time schedules in just-interrupted task environments. That’s the upside. The downside is that timely feedback, prompt response and rapid adaptation are purchased by discouraging (more) deliberation and reflection. The common remedy recommended: Slow fastthinking down. Be deliberative. Think things through. But that’s the problem: We have less time to slow things down, and even less time to make the decisions.

–What to do then? Focus here on one principal effect of a fastthinking likely to stay around indefinitely: namely, the greater the pressure to take decisions now, the greater will be the pressure to rely on existing policy narratives. Where so, it seems obvious to me that the better policy narratives we rely upon have to become more complex.

In my view, a better policy narrative meets three criteria:

  • The narrative—its story with beginning, middle and end, or argument with premises and conclusions—is one that takes seriously that the policy or management issue is complex, uncertain, interrupted and/or conflicted.
  • The narrative is one that also moves beyond critique of limitations and defects of the reigning policy narrative (criticisms on their own increase uncertainties when they offer no better storyline to follow).
  • The narrative tells a better story than the reigning narrative(s). It gives an account that, while not dismissing or denying the issue’s difficulty, is more amenable or tractable to analysis, policymaking and management. Indeed, the issue’s complexity offers up opportunities to recast a problem differently and with it, potential management options.

–With that in mind, let me jump to the quick with two examples of what I mean by more complex policy narratives tailored to fastthinking, in this case in the environmental arena:

1. All major ecosystems are complex, and none more so than the planet as an entire ecosystem. Ecosystems are being managed poorly, but there are ways to take action now, in advance of the results of long-term research, study and experimentation. Much more needs to be done to bring ecologists (including conservation biologists, climatologists, and hydrologists, among other natural scientists) into the direct operations of large-scale systems. There, ecologists would not only be better positioned to undertake or promote long-term and large-scale studies and interventions but, more important, could provide real-time (a.k.a. fastthinking) advice for real-time problems affecting critical services, including but not limited to water and energy, based in ecosystem processes and services.

2. Think of advanced ecological management as utilizing authoritative websites, one of which might be http://www.ecological_management.org, maintained by, say, the Ecological Society of America [or other organization/country of interest].

An authoritative website provides sought-after, up-to-date and linked knowledge so quickly and reliably that it is continuously browsed by increasing numbers of users, who click on the website early and often in their search for on-point information, in this case about ecology-based management. These websites do not pretend to provide final or definitive information, but rather seek to assure and ensure the quality of topical information that is continually updated.

The website serves as a clearinghouse that encourages cross-checking and tailoring of information on ecological management, while also acting as a springboard for future information search and exchange. It is popular because it shortens the number of steps it takes to move from place to place in search of salient information.

In this scenario, the analyst or manager starts her analysis on ecology-based management by searching http://www.ecological_management.org. She goes to the website on the well-established principle that information becomes increasingly policy or management relevant when the people gathering the information are the ones who actually end up using that information. That is, the authoritative website is constructed and maintained to make searching and browsing easier for the policymaker herself.
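For concreteness, here is a toy sketch of the clearinghouse idea; everything in it (the field names, the recency rule) is my own illustrative assumption, not a description of any existing site:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Entry:
    """One curated resource in the clearinghouse (illustrative fields only)."""
    topic: str
    url: str
    last_verified: date  # when a curator last checked the source
    cross_links: list[str] = field(default_factory=list)  # related entries for cross-checking

def most_current(entries: list[Entry], topic: str, n: int = 5) -> list[Entry]:
    """Return the n most recently verified entries on a topic.

    The aim is not final or definitive information, but quality-assured,
    continually updated starting points that shorten the search.
    """
    on_topic = [e for e in entries if e.topic == topic]
    return sorted(on_topic, key=lambda e: e.last_verified, reverse=True)[:n]
```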

Do such websites already exist for ecological and environmental managers (let alone for other major policy and management issues)? When it comes right down to it, do we find many real-time ecologists in infrastructure control rooms across the world?

Not “Why don’t they listen to us?” but rather: “What should we listen for from them. . .”

We want policymakers and politicians to treat our research and analysis seriously, but we rarely turn the cart around and ask: What more should we be listening for from them beyond the substance of what they are saying? It’s not just what they say but how they say it.

How would we identify those who talk as if they’d listen to what we have to say? How do we identify policy types where no amount of our research and analysis would ever be sought? In short, what are we missing that’s right in front of us as they articulate what they’re saying?

For me, two sets of positive statements stand out as indicating the kind of receptivity to research and analysis we would like to hear from policy types:

“with respect to,” “under what conditions,” “this is a case of”. For example, it’s risks and uncertainties with respect to these failure scenarios and not those that we should be worried about. It’s under those conditions and not these that we take action. What we are talking about is something different, its being a case of . . .

“Here’s our track record…,” “Here are our measures of success…or failure”. Did what actually happened match what was originally proposed? Or, how does what actually happened compare to the success record of others in like situations? Or, what would have happened even had not the policy been implemented?

These statements (and variants) reduce to versions of “yes, but” or “yes, and,” and in so doing indicate the willingness and the ability of the speakers to identify differences that matter for policy and management.

What, though, about the negative statements to be listened for? Am I the only one who trembles when some senior government official says of a particularly tricky state of affairs, “We need to clear the table and make a fresh start”? Dangerous dumbing down is occurring when you hear this and the like from policy types:

–“It’s a win-win, so who can be against it?” (when everyone within hearing distance knows winners rarely if ever compensate losers), “We just need the political will” (when obviously we’ve had too much political will in committing to any and everything), “If implemented as planned” (when the entire point is you cannot assume any such thing); and

–“It’ll pay for itself” (when costs, let alone benefits, can’t be measured, aren’t evenly distributed nor even collectively borne), “We must do this at all costs” (when what the policy types are really doing is refusing to tell you the likely ones), and “Failure is not an option” (when failure is always a very real possibility in complex situations).

And yes, we did better in the Marshall Plan, the Moon Landing, or other standalones. But there are no guarantees that “just because” we did that once, we’re able to do it for an entirely different type of problem, like eliminating racial discrimination or income inequality. Instead, what we want to hear from policy types is, “Here’s what to do even now. . .”

It’s no one else’s responsibility but ours to sharpen our skills in listening-out-for when it comes to policy talk. The duty is to listen out for those willing and able to dial in details for the very different answers to: What do we know? What should we do? What can we hope?

(Special thanks to Paul Schulman in thinking through and wording some points.)

Killing cognitive reversals

What else can we do, the senior executives and company boards tell themselves, when our entire business is on the line? We have to risk failure in order to succeed.

But what if that business is in a critical service sector? Here, when upper management seeks to implement these risk-taking changes, they rely on middle-level reliability professionals, who, when they take risks, only do so in order to reduce the chances of failure. To reliability-seeking professionals, the risk-taking activities of upper management look like a form of suicide for fear of death.

–When professionals are compelled to reverse practices they know and have found to be reliable, the results are deadly. In the Challenger accident, engineers had been required up to the day of that flight to show why the shuttle could launch; on that day, the decision rule was reversed to one of showing, “beyond a shadow of a doubt,” why launch couldn’t take place.

Once it had been good bank practice to hold capital as a cushion against unexpected losses; new capital security arrangements mandated that banks hold capital against losses that must be expected from their high-risk lending. Also contributing to the 2008 financial meltdown was that mortgage brokers traditionally made money on the performance and quality of the mortgages they made; then their compensation changed to one based on the volume of loans originated but passed on.

The Deepwater Horizon rig had been drilling an exploration well at the Macondo site; that status changed when, on April 15, 2010, BP applied to the U.S. Minerals Management Service (MMS) to convert the site to a production well, a change approved by the MMS. The explosion occurred five days later.

–In short, there is ample evidence that reversals of important decision rules that require professionals in high-stakes situations to turn inside out the way they had managed for reliability have led to system failures and more: NASA was never the same; we are still trying to get out of the 2008 financial mess and the Great Recession that followed; the MMS disappeared from the face of the earth.

Forcing cognitive reversals on reliability professionals and operators—that is, exiling them to conditions they do not know but are told they must nonetheless be skilled for—is the surest way to throw acid into the face of high reliability management.

–“But,” you counter, “that’s a strawman. Of course, we wouldn’t deliberately push reliability professionals into unstudied conditions, if we could avoid it.”

Really?

The often-heard and oft-recommended approach, Be-Prepared-for-All-Hazards, looks like the counsel of wisdom. It is, however, dangerous if it requires emergency and related organizations to cooperate in ways they currently cannot, using information they will not have or cannot obtain, for all manner of interconnected scenarios, which, if treated with equal seriousness, produce considerable, if not massive, modeling and analytic uncertainties.

An all-hazards approach, if actually implemented, pushes professionals mandated to be reliable into having “to operate a critical infrastructure in prolonged unstudied conditions,” a management contradiction if ever there was one.