Killing cognitive reversals

What else can we do, the senior executives and company boards tell themselves, when our entire business is on the line? We have to risk failure in order to succeed.

But what if that business is in a critical service sector? Here, when upper management seeks to implement these risk-taking changes, it relies on mid-level reliability professionals, who take risks only in order to reduce the chances of failure. To reliability-seeking professionals, the risk-taking activities of upper management look like a form of suicide for fear of death.

–When professionals are compelled to reverse practices they know and have found to be reliable, the results are deadly. In the Challenger accident, engineers had been required, up to the day of that flight, to show why the shuttle could launch; on that day, the decision rule was reversed to one of showing, “beyond a shadow of a doubt,” why launch couldn’t take place.

Once it had been good bank practice to hold capital as a cushion against unexpected losses; new capital security arrangements mandated that banks hold capital against losses that must be expected from their high-risk lending. Also contributing to the 2008 financial meltdown: mortgage brokers had traditionally made money on the performance and quality of the mortgages they made; their compensation then changed to one based on the volume of loans they originated and then passed on to others.

The Deepwater Horizon rig had been drilling an exploration well at the Macondo site; that status changed when, on April 15, 2010, BP applied to the U.S. Minerals Management Service (MMS) to convert the site to a production well, a change approved by the MMS. The explosion occurred five days later.

–In short, there is ample evidence that reversals of important decision rules, reversals that require professionals in high-stakes situations to turn inside out the way they had managed for reliability, have led to system failures and more: NASA was never the same; we are still trying to get out of the 2008 financial mess and the Great Recession that followed; the MMS disappeared from the face of the earth.

Forcing cognitive reversals on reliability professionals and operators, that is, exiling them to conditions they do not know but are told they must nonetheless be skilled for, is the surest way to throw acid into the face of high reliability management.

–“But,” you counter, “that’s a straw man. Of course we wouldn’t deliberately push reliability professionals into unstudied conditions, if we could avoid it.”

Really?

The often-heard and oft-recommended approach, Be-Prepared-for-All-Hazards, looks like the counsel of wisdom. It is, however, dangerous if it requires emergency and related organizations to cooperate in ways they currently cannot, using information they will not have or cannot obtain, for all manner of interconnected scenarios which, if treated with equal seriousness, produce considerable, if not massive, modeling and analytic uncertainties.

An all-hazards approach, if actually implemented, pushes professionals mandated to be reliable into having “to operate a critical infrastructure in prolonged unstudied conditions,” a management contradiction if ever there was one.
