–E.H. Carr, the British historian, advised his students that “before you study history, study the historian.” We too know the subjective individual is never far away from risk and uncertainty. Risk and uncertainty can also be historicized as formations of their time and place when you take the longer view. (Not only is your risk not mine; 19th century uncertainty looks very different from 21st century versions.)
That said, acknowledging the historical, social, cultural, economic…basis of our knowledge about risk and uncertainty has rarely gone far enough when it comes to policy and management.
–For, there is the corollary: Humans can only know—really know—that which they create. (Such is the insight of St. Augustine for philosophy, Giambattista Vico for history, Roy Bhaskar for science….) Humans know mathematics in a way they cannot know the universe, because the former is a thoroughly human creation about which more and more can be made known. Its uncertainties are socially constructed in a way that, for lack of a better word, our “unknowledge” about the universe is not.
This corollary means that accepting “Risk and uncertainty are socially constructed concepts easily historicized” is not enough; the claim needs to be pushed further.
What is missing are the details of the connections we make among risk, uncertainty and associated terms, and the meanings we draw out of those connections, often under conditions of surprise. (Lord Curzon was famously surprised watching soldiers bathing during WWI: ‘I never knew the working classes had such white skins!’) Our creations are always surprising us, and we seek to explain these occurrences by means of analogies that extend the range of what we call knowledge.
In case it needs saying, terms like “system,” “failure scenario,” “with respect to” and more discussed in the other blog entries on risk and uncertainty are also rooted in time and place. But to stop there, again, stops short of the wider point: That which we have created by way of risk and uncertainty—and continue to create—has become very complex. In fact: so complex as to continually provoke more thinking and more complexity-as-knowledge.
–Here by way of example are three such complexities about risk and uncertainty in policy and management that arise solely because of the connections made and meanings given by way of analogies:
• The focus on present risk and uncertainty in critical infrastructure—at the component or system levels—in an odd way volatilizes the infrastructure’s longer-term. It’s as if the preoccupation with current risk and uncertainty hammers the longer-term into current notions of risk and uncertainty as well.
This tenderizing of today’s meat we call “the longer-term” is not altogether unreasonable, of course: Longer terms are described as full of uncertainty and risk. But that is not the only way we frame “longer-terms.” Think Ben Franklin: “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do.” Or: Intelligent people find it no problem to use their minds to subvert reason. In either view (two among many), it’s our Bayesian brains (more generally, their “predictive processing”), not their “uncertainties,” that are the problem.
• Second, the methodological computation of “risk numbers” or “risk scores,” say by asset category or for different components, transforms infrastructure risk and uncertainty into a singular event—if you will, a single display of what is an ensemble of the heterogeneous and contingent (qualitative and quantitative information, by these and not those subject matter experts…), here but not elsewhere, and at this point in time and not another.
Put this way, risk rankings look like performance pieces in the arts. Over the course of the day, the chief risk manager serves as the curator of installation artifacts called risk scores for this or that part of the critical infrastructure—for facilities here, for pipelines there, for compressors all over the place. Each ranking and each score is a one-off, akin to what Surrealists call frottage, producing impressions on a piece of paper by rubbing with a pencil or crayon—think: rubbing with a methodology—over an always uneven surface.
(Or if you don’t like the art analogy, think of risk scores as akin to novel financial instruments, such as customized CDOs, each a one-off, so complex and heterogeneous they can’t be compared except in highly nominalized terms like price, i.e., only in terms of a “score”.)
• Third, each custom-made risk score, or ranking of scores, ends up as an odd kind of abstraction. Risk is abstracted from real-time operations into risks associated with specific assets, components and processes that by definition do NOT add up to the infrastructure as a system operated and managed in real time. Once outside the precincts of real time and the system as system, the temporal and spatial are abstractly foreshortened and elongated—the system is spatially segmented into components and extended elastically as the focus of attention—for almost everything except that system as a system.
Since attention is a scarce resource, prolonged attention given to abstracted risks at the asset, component and process level diverts resources away from systemwide reliability and the distinct risks faced in managing it in real time. This matters because the centralized control room, where real-time management for systemwide reliability takes place, is about the only place in the infrastructures we observed that did not lose sight of infrastructure operations as an articulated whole.