More on: control; differentiation; knowing and its borders; dread and betterment; crises; revolts; not-knowing; lifecycle modeling; yes-but and yes-and; coordination; interconnectivity; and keeping it simple (from earlier blogs)
More on control
Like the poverty premium, where poor people have to pay more for key services (insurance, credit, energy, shelter), people seeking full control of uncertain task environments pay a “control premium”: Control strategies cost them—and us of course—more than would be the case were they able to cope ahead or manage the uncertainty. When their control excesses make the lives of others difficult or worse, this isn’t an externality to be corrected by taxing them or having the rest of us bribe them to become better uncertainty managers. Instead, their controlling behavior shifts the costs onto us. They might as well be demanding money with menaces from us.
Here’s a different analogy to reinforce the point. Compare algorithmic decisionmaking (ADM) and the current technology for gene editing known by the acronym, CRISPR. When it comes to ADM, the worry is that we don’t know how the algorithm works. What’s happening, we ask, because of the cultural biases imported via the original data into the algorithm? As for CRISPR, the worry is that, even when we know that this rather than that gene is being edited, we’re still not sure it’s the right thing to do.
Suppose we had a CRISPR for ADM, i.e., we could go into the algorithm and excise cultural bias. But even then we’d worry that, e.g., what is bias to some is not bias to others. For that matter, is there any doubt whatsoever that a new mechanism promising greater control in addressing one worry will produce another worry, equally if not more important? Control cannot answer the questions control poses.
So what? It’s hard to believe, for example, that all the talk about artificial intelligence (AI) “controlling” behavior will not need to be far more differentiated and contextualized, when it comes to really-existing policy and management implications. Consider underwater oil and gas exploration. Alarms produced by autonomous systems can and do often turn out to be false alarms occurring under already turbulent task conditions at sea. Indeed, operating at a higher level of autonomy and having to cope with indiscriminate false alarms may no longer permit the real-time operators to revert, just-in-time, to lower levels of autonomy, e.g., managing via more manual operations, as and when nothing else works in the context under consideration.
More on differentiation
When I and others call for better recognition and accommodation of complexity, we mean the complex as well as the uncertain, unfinished and conflicted must be particularized and contextualized if we are to analyze and to manage case-by-granular-case.
When I and others say we need more findings that can be replicated across a range of cases, we are calling for identification not only of emerging better practices across cases, but also of greater equifinality: finding multiple but different pathways to achieve similar objectives, given case diversity.
What I and others mean by calling for greater collaboration is not just more teamwork or working with more and different stakeholders, but that team members and stakeholders “bring the system into the room” for the purposes of making the services in question reliable and safe.
When I and others call for more system integration, we mean the need to recouple the decoupled activities in ways that better mimic but can never reproduce the coupled nature of the wider system environment.
When I and others call for more flexibility, we mean the need for greater maneuverability across different performance modes in the face of changing system volatility and options to respond to those changes. (“Only the middle road does not lead to Rome,” said composer, Arnold Schoenberg.)
Where we need more experimentation, we do not mean more adaptive, trial-and-error learning in which the first systemwide error ends up being the last systemwide trial, destroying survival.
While others talk about risks in a system’s hazardous components, we point to different systemwide reliability standards and only then, to the different risks and uncertainties that follow from different standards.
More on knowing and its borders
If we start with the commonplace that analysis and deliberation center around what is known or not, then the boundaries of the known blur not only into the unknown, but also into the preknown. The latter is the preexisting knowledge that one is born into and “takes for granted.”
In his essay, “The Well-Informed Citizen,” Alfred Schütz, the sociologist, describes it this way:
The zone of things taken for granted may be defined as that sector of the world which, in connection with the theoretical or the practical problem we are concerned with at a given time, does not seem to need further inquiry, although we do not have clear and distinct insight into and understanding of its structure. What is taken for granted is, until invalidation, believed to be simply “given” and “given-as-it-appears-to-me”–that is, as I or others whom I trust have experienced and interpreted it. It is this zone of things taken for granted within which we have to find our bearings. All our possible questioning for the unknown arises only within such a world of supposedly preknown things, and presupposes its existence.
One consequence of ignoring that the preknown, known and unknown have blurred borders is this: We end up acting as if it does not matter that it takes preknowing and knowing enough to avoid entering into the unstudied conditions of the unknown. If Schütz is right, the preknown is where we “find our bearings” with respect to the known and unknown.
What does this mean? It turns out that all this talk about “unintended consequences of human action” is itself unintentionally simplistic. “Unintended,” when the preknown is an invisible platform that lets us find our bearings so that other factors in the known and unknown carry the weight of argument about “unintended consequences”? “Consequences,” rather than intercalated knowing, preknowing and not-knowing chalked up to contingency and exigency?
“Unintended consequences of human action” is a coherent phrase only by missing the rest of that overwritten palimpsest, called “human action,” off of which the phrase is cobbled together and read.
More on dread and betterment
Widespread fear and dread, so criticized by 18th century Enlighteners, have positive social functions that serve Enlightenment goals of bettering human conditions.
The large-scale systems for betterment—whether defined around markets at one end or social protections at the other—are managed in large part because of widespread societal dread over what happens when they aren’t managed reliably and safely. Critical infrastructures for energy, water and healthcare (among others) are so essential that they mustn’t fail, even when (especially when) they have to change. That they do fail, and materially so, increases the very real sense that it’s too costly not to manage them.
We of course are meant to wonder at the perversity of this. But that is the function of this dread, isn’t it? Namely: to push us further in probing what it means to privilege social and individual reliability and safety over other values and desires. We are meant to ask: What would it look like in a world where such reliability and safety are not so privileged?
For the answer is altogether evident: Most of the planet lives in that world of unreliability and little safety. We’re meant to ask—precisely because the answer is that clear. Hunting and gathering societies may be the most sustainable for the Anthropocene, but I do not remember any hunter-gatherer in Botswana in the early 1970s who didn’t want to quit that way of life for something safer and more reliable.
More on crises
But, you insist, what’s happening today are global crises for which we do not have deep knowledge or skills or better practices. Quite the opposite, you press: Exceptional circumstances give rise to extraordinary threats and thus to emergency measures which necessarily end up as precedents for first-ever policies.
I suggest you might want to think more about the italicized terms, as each puts you (and us) at the very limits of human comprehension, infrastructure reliability and hazard management, and the societal values driving policy, management and their regulation. To insist as many are doing that climate change, for example, is uniquely global, uniquely long-term and uniquely irreversible is to assert it is uniquely ununderstandable.
For any conclusion that these are unprecedented times in altogether uncharted waters is itself the artefact and by-product of having no default option when at the limits of thinking and comprehending the way we do. Existential threats call for all manner of response, some of which are well beyond those confined to analysis and management.
One under-acknowledged response is appealing to the background condition for taking action when analysis and management are confronted by the incomprehensible or unpredictable. Humans have always been many-sided, and so must our responses be, where that background condition of having many sides inherently frames the action we take. The challenge is to disclose those other sides with which to make the issue more tractable to analysis and management (no guarantees here).
Conventional risk analysts and crisis managers are quick to counter: “What do you mean we are one-sided? Good managers and analysts are always looking at the many sides of an issue and, in fact, we pride ourselves in seeking to bridge incompatible positions—and never more so than when the prospect of disaster raises the stakes!”
But there is no “middle” to bridge or compromise over when you and we are at or beyond the limits of comprehension; you have to default to something other than analysis or management as usually understood, if only to recast the intractable into something more tractable so as to re-engage analyzing and managing.
More on revolts
For Gotthold Ephraim Lessing, 18th century German Enlightener, the point is not for the sculptor or painter to portray a violent event at its climax, when visualizing a single moment. Better to choose a moment before or after the apex of destruction, so as to allow the viewers’ imaginations freer rein over what is to come. That way, Lessing argues, the narrative continues in an arc of reflection that is not cut short by any climax’s overpowering intensity:
since the works of both the painter and the sculptor are created not merely to be given a glance but to be contemplated. . .it is evident that the single moment and the point of view from which the whole scene is presented cannot be chosen with too great a regard for its effect. But only that which allows the imagination free play [freies Spiel] is effective. The more we see, the more we must be able to imagine. And the more we add in our imagination, the more we must think we see. In the full trajectory of an effect, no point is less suitable for this than its climax. There is nothing beyond this, and to present to the eye what is most extreme is to bind the wings of fancy and constrain it, since it cannot. . .shun[ ] the visible fullness already presented as a limit beyond which it cannot go.
Rather, the moment chosen should be pregnant—fruitful, suggestive—of possibilities that are not foreclosed because imagination has been arrested by catastrophizing the worst. Instead of picturing Ajax at the height of his rage and slaughter, better he be depicted afterwards in the full realization of what he has done and in the despair leading him to what must come next.
One problem with today’s crisis scenarios of a violent Anthropocene is a preoccupation with a visualized climax. Obviously, post-apocalypse can be pictured as even deadlier. But the point holds: In today’s scenarios, the worst is imagined and imagination stalls there—like shining deer at night—with the glare of it all.
But the truth of the matter is that before or after the climax, thought (not just imagination) is still at work. Before, in the sense of thinking about the roads not taken; after, in terms of the what-ifs ahead. In fact, today’s unrelieved stream of crisis scenarios is itself proof of imagination’s inability to let a prophesied climax do all the talking.
Where does this leave us?
Basically, it’s better to focus on crisis and catastrophe before or after they have happened rather than to be in the grip of their climaxes. For all we know, the Occupy Movement, Yellow Vests Movement, Hong Kong protests, the Extinction Rebellion and more were the apex of reaction. Any disappointment that one or more have not culminated in revolution or other “far-reaching substantive change” is one scenario only—which on reflection may not be the most fruitful, suggestive moment to focus on anyway, let alone be overawed by. The whole purpose of revolt may be revolts (in plural).
More on not-knowing
Assume you know nothing of the brain’s structure or neuroscience. Assume then your brain is a chamber initially holding two kinds of spaces: filled spaces of what you know and empty spaces for what you do not know. Suppose, also, that at times each filled space emanates a beam of bright light that, when combined with beams of light from the other filled spaces, produces a brilliance so intense in the brain that the only shapes left visible are the dark cavities that this concentrated light did not reach.
Suppose the reverse also happens (famously proposed by the psychoanalyst, W.R. Bion): Each empty space emanates at other times a penetrating beam of darkness so absorbing that, when combined with the blackening beams from other empty spaces, the only shapes left visible are the lighted cavities the dense blackness did not reach.
Now step back and consider. Think of the dark cavities that persist even in the lighted glare of what your brain knows as what it really doesn’t know, while the lighted cavities that persist in the blackness of what your brain doesn’t know are what it actually does know.
Compare now the two sets: initially, filled/empty and afterwards, lighted/darkened. The archipelago of densely lighted and densely dark need not correspond to the original filled and empty spaces. That is, your brain thought it knew some things which it now sees it didn’t know; and some of what it thought it didn’t know is shown now to be what it knew all along.
This thought experiment suggests that our brains, in order to move from “not-knowing” to “seeing the unknown,” must at least move from what we thought we knew or didn’t (those filled and empty spaces) closer to what we actually do and do not know (the cluster of lighted and darkened cavities).
If so, then this is the question: Why would anyone believe that you can shift from looking onto unknowns without knowing they are there (the notorious unknown-unknowns) to seeing unknowns in the Anthropocene and knowing it, if you have not demonstrated beforehand the realization that you didn’t know what you thought you knew, you did know more than you initially thought, or both? A track record in doing so, combined with the risk/uncertainty discriminations in earlier sections, is key to developing new policy optics for the Anthropocene.
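For readers who think better in notation, the logic of the thought experiment can be sketched with sets. This is only an illustration of the comparison the section describes; all the item names below are invented for the example.

```python
# A minimal sketch of the thought experiment, using sets as the "spaces."
# The item names are hypothetical, chosen only to make the comparison visible.

# Before the beams: what the brain files as known vs. unknown.
thought_known = {"tides", "orbits", "markets"}
thought_unknown = {"quakes", "pandemics"}

# After the light/dark beams: what the brain actually does and does not know.
actually_known = {"tides", "orbits", "pandemics"}   # "pandemics" was known all along
actually_unknown = {"markets", "quakes"}            # "markets" was never really known

# The two realizations the section says must be demonstrated beforehand:
false_knowns = thought_known - actually_known       # thought known, but wasn't
latent_knowns = thought_unknown & actually_known    # thought unknown, but was known

print(sorted(false_knowns))   # ['markets']
print(sorted(latent_knowns))  # ['pandemics']
```

The point of the sketch is that the two partitions need not coincide: the "archipelago" of what is actually known can cut across the original filing of filled and empty spaces, and a track record means both `false_knowns` and `latent_knowns` have been non-empty and recognized as such.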
More on lifecycle modeling
Say you are involved in modeling the lifecycle of a threatened or endangered species. You and your colleagues rightly start out ambitious by aiming to develop and then integrate sub-models for species reproduction; movements between areas; and mortality rates. It doesn’t take much time, however, to confirm that not only do pertinent data not exist, but modeling uncertainties and errors work against integrating the sub-models into a comprehensive lifecycle model (LCM).
Thereafter, with more time and funding, you and your colleagues develop much reduced versions—LCM1, LCM2 and currently LCM3—each bringing to light further refinements and significant methodological and data issues. You embark on developing LCM4 in the hopes that the research team—funding always being an issue—is moving closer to identifying management interventions for the species. The many technical reports (now approaching 50 in number) produced during the years of research track the refinements, improvements, insights and difficulties in modeling species reproduction, movement and survival rates. The peer-reviewed literature on lifecycle models has, however, been advanced in the view of many outside experts by the team’s research.
Unfortunately and for various unexpected reasons, none of the reports identify modeling and data uncertainties in a way that allows them to be contrasted with the uncertainties and errors made in the existing comprehensive model for managing said species.
What “comprehensive model,” you ask? Didn’t I just say there was no comprehensive lifecycle model? It turns out that, during the years of the modeling research, real-time interagency staff and scientists continued making decisions for the management of the species. In effect, they served as the platform for integrating research findings and other evidence for real-time decisionmaking. Humans, the quintessential soft factors, are the only “integrated comprehensive model” we will have for some time to come wherever in the Anthropocene.
It’s true that from time to time the consequences of these interagency management actions found their way into a technical report, but even here modeling uncertainties took priority over management: “Though it is tempting to interpret declines in estimated [mortality] as evidence of management success, models of population dynamics are required to disentangle. . .”
You’d think that the burden of proof would be on the modelers to demonstrate that reliance on lifecycle models would lead to better results compared to the next best alternative of interagency deliberations of scientists and support staff. Researchers say we’re not there yet.
But, not to worry: The judge who mandated the research in the first place asserted way back when: “All experts agree that application of a lifecycle model is the accepted method for evaluating the effects of an action upon a population’s growth rate.” This means all we need do is assume management isn’t improving faster than the modeling. And what could make more sense in reality than doing what is so needed in theory?
More on yes-but and yes-and
A great deal of US politics and policy is caught up in the yes’s and no’s of pros versus cons, advantages versus disadvantages, and costs versus benefits. But there has never been consensus on making these their own either/or. Lionel Trilling, the literary critic, famously said of 19th century American writers that “they contained both the yes and the no of their culture”. For Robert Frost, the poet, neither exists in its own right—“yes and no are almost never ideas by themselves”.
The ethnographer and writer, Michel Leiris, writes about the need “to merge the yes and the no.” “Between yes and no” is the title of an early essay by Camus. Nietzsche “said no to life as long as it was easy, but yes when it took on the form of the impossible”. The work of Elizabeth Bishop was “perhaps more a quiet no than a great big yes,” according to another poet. More severely, “Herman Melville praised Nathaniel Hawthorne: ‘He says NO! in thunder; but the Devil himself cannot make him say yes. For all men who say yes, lie’”, records the critic, Christopher Ricks, who then asks: “But what about saying, ‘Yes, but…?’”
Ricks is spot-on. In the same way as dark energy and dark matter are said to make up the vast portion of the universe, politics, policy and management are grasped only because—not in spite—of the not-knowing, difficulty and inexperience, all around and in between.
A character in Roberto Bolaño’s The Savage Detectives asks: “If simón is slang for yes and nel means no, then what does simonel mean?” That is difficult to answer, as Bolaño describes:
And I saw two boys, one awake and the other asleep, and the one who was asleep said don’t worry, Amadeo, we’ll find Cesarea for you even if we have to look under every stone in the north…And I insisted: don’t do it for me. And the one who was asleep…said: we’re not doing it for you, Amadeo, we’re doing it for Mexico, for Latin America, for the Third World, for our girlfriends, because we feel like doing it. Were they joking? Weren’t they joking?…and then I said: boys, is it worth it? is it really worth it? and the one who was asleep said Simonel.
Bolaño’s translator (Natasha Wimmer) asks, did this Simonel mean “Absolutely”? For my part, I’d like to think simonel insists “yes” and “no” matter when followed by “but” or “and,” the first as a caution and the second as encouragement. To admit this is to be open to alternatives when yes and no aren’t: “Yes or no; or provide an alternative,” as emergency managers put it.
More on coordination
I come from a policy analysis and management training with little good to say about calls for “more effective coordination.” When analysts have nothing more to say but feel compelled to recommend something, out comes the “what we need is more effective coordination.” Who can be against effective coordination? Though it is called for without a tincture of what to do, step by step and in real time. Like gold in seawater, coordination is impressive, but pointing that out is of scant use.
I’m not the only one who hesitates reading further when the document gets to the part where death and disaster are credited to “the lack of coordination.” When I read criticisms that blame deaths or injuries in a disaster on the “lack of coordination,” I expect to see answers to two immediate questions: (1) can it be demonstrated that the lack of coordination did not arise because the responders knew—or thought so at the time—that they were undertaking activities just as urgent; and (2) can we conclude that the event in question would (not could, should, might or perhaps) have been better responded to had it not been handled the way it was (the classic counterfactual)? Rarely, I find, are answers even attempted, let alone provided. (The counterfactual often has a twofold would. The sociologist, Raymond Aron, asked critics of decisionmakers: “What would you do, in their place, and how would you do it?”)
Such detail is of course difficult to summon, but that it is so rarely attempted leaves us to wonder just whose inexperience is revealed—the responders criticized or the callers for more coordination. Obviously, the more detailed the future scenario, the more likely it will fall short of what actually happens in the face of so much contingency; but how else to face uncertainty in the future than to demand details? For that matter, predicting the future is difficult precisely for the same reasons learning from the past is: Both require stability in objectives, institutional memory, multiple reserves in case something goes wrong, and low environmental uncertainty, among others. Should the obvious need restating, we already knew this even before acknowledging we’re in the Anthropocene.
More on interconnectivity
If there were ever a term in need of greater differentiation, granularity and detail, it is “interconnected” (as in interconnected critical infrastructures).
Our research on a Vessel Traffic Service (VTS) of the US Coast Guard (USCG) found at least five major kinds of “interconnected” at work, with sharp differences in the VTS’s real-time operations:
- Interoperability: Like the textbook interoperable energy utility (where electricity is crucial for the natural gas operations and vice versa), the VTS manages both vessel traffic and the regulated waterways that the vessels use (where managing the waterways affects management of the vessels and vice versa);
- Shared control variables: Water flows are a major control variable not just for VTS navigation purposes, but also for other infrastructures (most notably large water supplies and hydropower systems). This means that unexpected changes in how one infrastructure manages water flows can affect the management of the water flows by the other infrastructures (indeed, inter-infrastructural coordination around shared control variables was reported to us);
- Combined cycle of infrastructure operations: The USCG has a range of missions and operations, two of which are the VTS and the SAR (Search and Rescue) units. VTS combines with SAR to represent stages of this infrastructure’s operational cycle—normal operations and disrupted operations (VTS) along with failure and recovery (SAR). Not only are normal operations of the VTS already inter-infrastructural (by virtue of the shared control variables), but also the USCG’s Command and Control mission, including that for SAR, has an incident command facility and function for inter-infrastructural coordination during system failure and recovery;
- Variety of real-time configurations of interconnectivity: The VTS manages by virtue of resorting to a variety of interconnections with the vessels concerned. When VTS management of a common pool resource (the waterways) on behalf of inter-related users is disrupted or fails (e.g., because of a defect in VTS communications), the interconnection configuration defaults over to the reciprocal one of vessel-to-vessel communication; and
- Inter-organizational linkages: USCG operations, including a VTS, are not only linked with other infrastructures through reliance on the Global Positioning System (GPS), but the Coast Guard’s position within the Department of Homeland Security makes it strategically located with respect to focusing on GPS vulnerabilities and strengths when it comes to the nation’s cyber-infrastructure.
Further detailing is easy to sketch, but the point remains: Once differentiated interconnectivities are taken as the serious, really-existing starting point, we better understand how some major approaches to risk management of critical infrastructures can be so misleading.
What could seem more reasonable, for example, than a focus on system chokepoints when it comes to risk assessment and management at an interconnected critical infrastructure level? And the most obvious way to do that is by focusing attention on where major infrastructures intersect or are adjacent to each other, correct? Wrong.
It’s wiser to focus on how spatially adjacent or collocated structures and facilities are actually managed within their respective infrastructure systems. It is possible that a system’s chokepoint lies elsewhere than at any site of collocated facilities, and that the collocated element could be lost without its respective system flipping into failure. Just because elements from two or more infrastructures are spatially adjacent does not automatically mean those infrastructures have “to coordinate” unless, say, shared control variables are involved or interoperability is challenged.
A huge category mistake is committed when conflating (1) the unfolding and interrelated consequences on life, property and markets of, say, a hazardous liquids pipeline explosion on adjacent populations and property and (2) the explosion’s consequences for the interconnected critical infrastructures for those hazardous liquids, in terms of not just these pipelines and associated refineries, but also just as significantly the electricity and water infrastructures that the former depends upon in real time.
Why does differentiating the two “systems” matter? To equate the relevant system definition with the spread and interaction of knock-on population-and-property consequences of failure (Cf) is to identify as a problem the lack of systemwide management of Cf. But the infrastructures are anything but unmanaged in real time, including their interconnections.
More on keeping it simple
Again: Not to worry, we’ll scale up later. Later on, presses the happy-talk, we’ll relax assumptions and add realism. Don’t bother with the details; we know how to reduce overpopulation (just don’t have babies!) and save the environment (just don’t cut down the trees!). Just keep fossil fuel in the ground, now! So much of the just-do-this! suffocates in its own fat: This time it’s different; we do know where to start; leave the complications for later.
The problem with “start simple and then scale up” is that each scale/level is complex in its own right, regardless of their interconnectivities. The map smooths out a fractal shoreline; to start simple and scale up makes as much sense as trying to pinpoint the shoreline through the eye of a needle. Yet during my policy analysis career I’ve witnessed the 20-page policy brief reduced to the five-page memo into a fifteen-minute PowerPoint presentation into the three-minute elevator speech into the tweet and now the search for just the right meme. This too, they said, was keeping it simple. What next on the syllabus: Telepathy? The knowing look in 10 seconds or less? But, not to worry: Just Keep It Simple. No problem pissing on your shoes: It’s only a light shower! Wrong. We make cuckoo clocks without the bird-shit, but that’s about the extent of it. Complex, again, is about as simple as it gets these days.