The Achilles’ heel of high reliability management

The weakest point in risk and uncertainty management by critical infrastructures operating under high reliability mandates is any assumption that the infrastructures aim to ensure their users need no longer worry about risk, uncertainty and failure scenarios for the service provided.[1]

Why? Because the considerable strengths of control rooms are at the same time blind-spots for society’s expectations of them.

Yes, control rooms represent unique system knowledge, but that real-time knowledge is difficult to convey to or distill for the public, let alone for experts committed to checklists and protocols. Yes, their skills and requirements are so knowledge-intensive as to make control operators professionals in their own right, but that also means they cannot be expected to know the requirements of other control rooms with the same degree of breadth and depth. Yes, reliability professionals are virtuosi in managing real time (and it is true that professionals who cannot manage the short term should not be expected to manage the long term), but reliability professionals are the first to recognize the need for more long-term planning and analysis.

Yes, the evolutionary advantage of control room operators to operationally redesign defective technology and regulation so as to ensure system reliability in real time is often under-recognized, but this does not make reliability professionals experts in altogether repurposing infrastructures when it comes to adding new services or instituting new infrastructures to provide the same service. Yes, control rooms are central to intra- and interinfrastructural reliability, but some critical infrastructures under very real mandates for high reliability do not have control rooms.

Yes, there is that sense in which a control room is like the lightning rod taking all those lightning strikes to protect the house underneath, but that means control operators must be able to absorb the shocks and be protected in doing so. This protection—along with the longer term perspective and interinfrastructural oversight responsibilities—is what we expect from leaders, regulators and policymakers. Paul Schulman and I have highlighted control rooms as unique organizational formations and social institutions in their own right, meriting society’s protection, even during (especially during) continued attack.


[1] Oops. “As the economist Gary Gorton has put it, banking does for the nonexpert in finance what the electricity grid does for the nonexpert in electricity: . . .enabling most who ultimately bear the risk not to have to worry about it on a day-to-day basis. But as we discovered once the [2008 financial] crisis broke, it was not just nonexperts who had stopped worrying about risk on a day-to-day basis. Most professional investors had also gotten into the habit of not worrying about it either. . . Before we rail against their stupidity, we should remember that not worrying about risk is precisely what a modern banking system enables its customers to do. That the lack of worrying had gone too far is now undeniable, but it happened precisely because of how impressively the modern banking system works when it is working well.” (Paul Seabright in Foreign Policy accessed online on May 26, 2018 at http://foreignpolicy.com/2011/01/03/the-imaginot-line/).

Public Policy Analysis, c.1970 – c.2020: In Memoriam?

–I graduated with a master’s in public policy studies from the University of Michigan in the early 1970s and with a PhD in public policy from the University of California Berkeley in the late 1980s. Pretty much at the start of my career, I promised myself I’d never be one of the WhenWees, that expatriate tribe who patter on about “how things were better back when.” So chalk up the following less as a valediction and more as an anticipation.

The rise in the haute vulgarization of “wicked policy problems” is only the tip of the problem. There’s little understanding, it seems to me, that labeling a policy issue wicked can over-complexify a problem that would otherwise be open to recasting into more tractable forms without loss of its obvious complexity. To put it another way, the gap is widening between the increasing sense of policy intractability and the sense all along that problems are complex and that this very complexity affords multiple opportunities to recast/redescribe/revise the problems more manageably.

–Two important misconceptions stand in the way of seeing the latter. First is the notion that in its early days policy analysis assumed problems were simpler and to be solved by our best and brightest. That is not how I remember my graduate training. I had the good fortune to have been a student of and worked with Pat Crecine, the founding director of the Institute for Public Policy Studies (now the University of Michigan’s Ford School) and Aaron Wildavsky, a founder of the now Goldman School of Public Policy at UC Berkeley. Two more different people you can’t imagine, but one always insisting to his first-generation students that policymaking was complex, while the other was the last person on earth who would say policy implementation was simple.

Of course, there have always been those in policy programs and schools, early and late, who over-simplified policymaking and implementation. During my career, I’ve witnessed the 20-page policy brief reduced to the five-page memo, then the fifteen-minute PowerPoint presentation, then the three-minute elevator speech and now the tweet. What next on the graduate school syllabus: Telepathy? “The knowing look” in 10 seconds or less?

But none of this should surprise us. Just as calls for more localization are a response to increasing globalization, calls for more and more simplification increase as any field professionalizes and specializes, which policy analysis most certainly has done during half a century of discipline growth and institution building. Of course, silliness comes with professionalization. I remember a then well-known policy academic arguing that the “policy cycle” from policy formulation through policy evaluation was a signal advance over early notions of muddling through and incrementalism. You only need implement something you planned to realize that the cycle’s stage of “implementation” is itself a lethal critique of anything like a formal policy cycle.

–In fact, such simplifications don’t bother me as much as the over-complexification of policy issues into dead-end intractability. Which takes me to the second misconception. A reduced form narrative runs like this: Since policy problems have become more complex over time, they must have become (more) intractable. Yes, of course, indicators are there: In the 1970s at the advent of policy analysis as its own field, a key indicator of what is now called “a failed state” was its inability to produce an annual government budget. That happens all over the place now in the US. What I don’t understand is why such is taken to be the signature that things have gotten into the bog-standard “intractably complex” when it comes to comprehension and analysis.

Yes, of course, Current Times are polarized and rapidly changing, but, please, don’t try to persuade me, a product of the 1960s, that Current Times are more polarized–other than in an echt numerical sense of there being more people now than then. Politics, dollars and jerks have always been the center of gravity of policy analysis and public management. I’ve never worked on a major topic or in an important situation that wasn’t polarized or polarizing.

Nor am I willing to concede that experience today of rapid change differs much from before.

Yes, major issues and more rapid change go together these days, and here too there are more issues to be changing. But has the perceived rapidity of change undergone a phase shift? David Hume, the philosopher, was complaining about the speed of “instant” stock transactions in the mid-1770s, while a century earlier, complaints were commonly heard about how “affairs here change so fast that one no longer reckons time by months and weeks, but by hours and even by minutes”; “many new, unusual emergencies, such as our forefathers have not known”; increasing “with an inconceivable rapidity”; and “in one century more light has been thrown on this science than had been elicited in the preceding period of near 5,700 years” (recorded by historians Istvan Hont, George Parker and Keith Thomas, respectively). When it comes to how public affairs are experienced, it’s still difficult to dismiss the part where plus ça change, plus c’est la même chose.

–It is true it’s harder to recapture that sense of policy analysis recasting difficult problems more tractably in the same way that policy analysis originally recast public administration in the late 1960s and early 1970s, when I first started off. But this admission doesn’t go far enough to the wider truth at work here.

One way I think about what has happened to the policy analysis I’ve known over the last half century is to distinguish between (1) the discipline as taught in graduate schools and the profession that policy analysts think they are joining upon graduation and (2) the profession as it is actually practiced at any one time or place and the actual careers that policy analysts have across time and place.

The adult-me hasn’t had much time for (1)–it’s where the left is treated as completely irrelevant–and I’m quite willing to let it R.I.P. But (2) remains the domain of “it always seems impossible until it’s done” (to paraphrase Nelson Mandela) and, as such, is far too vital to qualify for anything like an In Memoriam.

–The vitality I’m talking about lies in the career being its own optic for recasting policy issues. It’s the closest we analysts get to reflecting on our practice and remaking the next steps ahead.

It is the career that reminds us that, if you will, the eye cannot see itself and that when we describe what is going on right now for the policy issue, “there is always a camera left out of the picture: the one working now” (to quote philosopher Stanley Cavell). To bring the camera into the picture is to recast the picture.[1]

For some the resulting infinite regress is a limitation; for me, it opens up more to rethink. It’s over a career where it is neither optimism nor pessimism but realism to know when “always polarizing” added to “always changing” necessarily equals “This moment too shall pass, along with its cameras.”


[1] There’s a wonderful story told by the poet, Donald Hall, about how bringing such a camera into the picture changes it. He had heard the following from Archibald MacLeish about the actor, Richard Burton, and one of his brothers:

Then Burton and Jenkins quarreled over Coleridge’s “Kubla Khan.” Jenkins said it was a bad poem: disgusting, awful. Burton praised it: magnificent, superb. Jenkins repeated that it was nothing at all, whereupon Burton commanded silence and spoke the whole poem, perfect from first syllable to last. MacLeish told me that Burton’s recitation was a great performance, and when he ended, drawing the last syllable out, the still air shook with the memory and mystery of this speaking. Then, into the silence, brother Jenkins spoke his word of critical reason: “See?”

Table of key entries

Most Important: “What am I missing?,” “Complexity is the enemy of the intractable,” “Power,” “Interconnected?”

Recasting big policy issues: “Poverty and war,” “Healthcare,” “Second thoughts on income inequality,” “Surprising climate change,” “In a failed state,” “Revolts,” and Longer Reads (below)

More recastings: “Policy narratives,” “Recastings #1,” “When the light at the end of the tunnel is the tunnel,” “Loose ends, #3”

Not-knowing and its proxies: “Seeing unknowns,” “Inexperience and central banks,” “Managing inexperience,” “Difficulty at risk and unequal,” “By way of distraction…”

Ignorance and uncertainty: “When ignorance does more than you think,” “Optimal ignorance,” “Uncertain superlatives,” “Stopping rules and contested regulation”

Risk and root causes: “A new standard for societal risk acceptance,” “Three easily-missed points on risks with respect to failure scenarios,” “Risk criteria with respect to asset versus system scenarios,” “Half-way risk,” “Central role of the track record in risk analysis,” “Root causes,” “Frau Hitler, again,” “With respect to what?”

Infrastructures: “The real U.S. infrastructure crisis,” “Innovation,” “Take-home messages,” “Who pays?,” “When high reliability is not a trade-off,” “The market failure economists don’t talk about,” “When ignorance does more than you think,” “Catastrophizing cascades,” “Healthcare,” “Interconnected,” “Stopping rules and contested regulation”

Environment: “Nature,” “Tansley’s ecosystem,” “Eco-labelling recasted”

Catastrophe and crisis: “Catastrophizing cascades,” “Jorie Graham’s systemcide,” “The shame of it all,” “Next-ism,” “The future is the mess we’re in now”

More mess, good and bad: “A different take on the traffic mess,” “Happiness: The mess,” “Who pays?,” “Misadventures by design,” “Loose ends, #2,” “Top-of-the-list thinking,” “Take-home messages”

Betterment and good-enough: “Betterment as ‘yes-but’ through ‘yes-and’,” “It’s better between the James brothers,” “Good-enoughs,” “Good-enough dreamers,” “Professional, amateur, apprentice; Or, As good as the fingernails of Manet,” “‘at sea,’ ‘from on high’”

Economism: “Economism,” “Keep it simple?,” “Loose ends,” “When high reliability is not a trade-off,” “Short and not sweet,” “The missing drop of realism,” “The market failure economists don’t talk about”

Longer Reads: “Ammons and regulation,” “The next Constitutional Convention,” “Recalibrating Politics: the Kennedy White House dinner for André Malraux,” “Blur, Gerhard Richter, and failed states”

Something less complex?: “Why this blog?,” “Red in tooth and claw,” “What kdrama has taught me”

Risk criteria with respect to asset versus system scenarios

–Focus on an island in the western California Delta, say Sherman Island, and consider criteria that engineers rely on for establishing priorities with respect to reducing levee fragility there (the island’s surrounding levees are needed because its productive areas are considerably below water level):

  • Criterion 1. Levee fragility priority can be set in terms of the weakest stretch of levee around the island, i.e., the stretch of levee that has the highest probability of failure (Pf). This has obvious implications for collocated elements from different infrastructures, e.g., a very high levee Pf should counsel against plans to place, say, a huge chemical tank facility next to it. (You’d assume commonsense would commend this as well.)
  • Criterion 2. Levee fragility priority can be set in terms of the stretch with the highest loss of life (and/or other assets) arising from levee failure. If the levee breaches where most island residents live, then there is considerably less time for evacuation. Clearly, consequences of failure (Cf) are important here, and this criterion is about the levee stretch that has the greatest risk of failure, not just probability of failure.

Sherman Island’s weakest levee stretch, at the time of research, was said to be on the southwest part of the island; the stretch with the greatest loss of life appeared to be on the eastern and south-east side (where most residences seem to be). Other factors constant, and from the perspective of Criterion 2, it is better that the weakest stretch of levee (per Criterion 1) lies on the other side of the island, so as to ensure more time for evacuation.

–A third criterion reflects the extent to which the island’s levee infrastructure is part and parcel of a wider interconnected critical infrastructure system (ICIS):

  •  Criterion 3. Levee fragility priority can be set in terms of the stretch that has the greatest risk to the entailed ICIS. ICIS risk of failure is not the same as risk of levee failure only, as stretches of Sherman Island levees are in fact not just elements in the levee system there but also elements in other critical infrastructures. For example, on Sherman Island, there is the levee stretch with Hwy 160 on top; there are also other stretches that serve as the waterside banks of the deepwater shipping channels; there is another stretch that serves to protect a large wetland berm (as fishing and bird habitat). If those stretches of levee fail, so too by definition do elements fail in the deepwater shipping channel, Hwy 160 or the Delta’s endangered habitat.

Criterion 3 compels the risk analyst to ask: What is the effect on the road system or shipping system or wetlands system, when that shared ICIS element on Sherman Island fails? For instance, if a stretch of Hwy 160 fails, road traffic in the Delta would have to be detoured; if a stretch of the deepwater shipping channel fails, shipping traffic would have to be rerouted to other ports; and so on. Note that in some cases the service cannot continue because there are no default options, e.g., the Sherman Island wetlands berm in terms of its habitat and fish can’t be “rerouted” were its protective levee to fail.

From the perspective of Criterion 3, the risk analyst’s question with respect to the greatest ICIS risk becomes: What infrastructure system that shares one or more ICIS elements on Sherman Island would be affected the most in terms of increasing the probability of its failing as a system, were those Sherman Island elements to fail? The answer: A levee breach anywhere on Sherman Island would increase the probability of closing the key pumps for the State Water Project. That is, the Pf of the state and federal water projects would increase were Sherman Island to flood, because saltwater would be pulled further up from the San Francisco Bay into the freshwater Delta.

–In short, the three risk assessment criteria—others are possible—differ appreciably as to where risk analysts focus attention in terms of levee fragility: the weakest stretch (Pf) may not be the same stretch whose failure would have the greatest loss of life and property (Cf), while any stretch that failed would pose the greatest ICIS risk (namely, the probability that an ICIS element failing increases the failure of one or more of the constituent systems sharing that element). In the latter case, the risk is not only to the adjacent levee system in the western California Delta (i.e., a failure of Sherman Island levees would increase erosion and seepage pressure on islands next to it), but also to the freshwater supply system from the north to the south of the State.
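A minimal sketch can make the contrast among the three criteria concrete. All stretch names, probabilities and consequence scores below are invented for illustration only; they are not Sherman Island data:

```python
# Hypothetical illustration of the three levee-fragility criteria.
# Every number and stretch name here is made up for exposition.

stretches = {
    # name: (Pf   = annual probability of failure,
    #        Cf   = consequence score for loss of life/property,
    #        icis = increase in failure probability induced in the
    #               most-affected interconnected system if this stretch fails)
    "southwest": (0.020, 10, 0.015),
    "east":      (0.008, 80, 0.015),
    "hwy160":    (0.012, 30, 0.015),
}

# Criterion 1: the weakest stretch, i.e., highest Pf alone.
c1 = max(stretches, key=lambda s: stretches[s][0])

# Criterion 2: the greatest risk of failure, Pf x Cf, not Pf alone.
c2 = max(stretches, key=lambda s: stretches[s][0] * stretches[s][1])

# Criterion 3: the greatest ICIS risk. The induced ICIS increment is set
# equal for every stretch, mirroring the text's point that a breach
# anywhere on the island raises the probability of closing the state
# water-project pumps; Criterion 3 then does not discriminate among stretches.
icis_values = {stretches[s][2] for s in stretches}

print(c1)                      # southwest
print(c2)                      # east
print(len(icis_values) == 1)   # True: the ICIS risk is island-wide
```

Note how the three criteria pick out different stretches (or, for Criterion 3, no single stretch at all), which is exactly why calls for coordination need to be prioritized against a stated criterion.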

You would expect that calls for more and more “inter-organizational coordination” would have to be prioritized in light of these distinctions. You’d be wrong, and worse yet for infrastructures whose technical cores are already so complex as to require the full-time attention of their respective reliability professionals.

Blur, Gerhard Richter and failed states (longer read)

Summary

Nothing seems further from war and failed states than the early photo-paintings of Gerhard Richter. This entry demonstrates otherwise.

Throughout his long career, Richter has been indefatigable in explaining why and how he produced the paintings. Why mechanically reproduce a photograph enlarged onto canvas and then blur the oil painted image with a squeegee passed across it? By way of answer, Richter says he was aiming at inserting chance into the painting. The blurs enabled him to see what had been a familiar photograph as if for the first time and with new objectivity (his term).

Transforming a photograph into a photo-painting and then blurring it is an optic with which to become alert to a quite different object from that known in the photo and in contradistinction to those who take photos as a baseline for objectivity. The blur foregrounds what had not been seen to that point, that is, not recognized up to then, difficult to discern otherwise, and the experience of still having something new to encounter.

I wed Richter’s concept of blurring to a “policy palimpsest,” the social science notion that controversial policies are themselves the composite of policy arguments and narratives that have overwritten each other. Any composite argument read off a policy palimpsest reads sequentially—nouns and verbs appear in order and meaning is made—but none of the previous inscriptions shine clear and whole through the intercalated layers, effacements, and erasures. Arguments assembled from the palimpsest have been blurred, intertwined and re-rendered for current (often controverted) purposes. The analytic challenge is to read any composite argument with its blur visible in order to acknowledge and probe what has been rendered missing.

To see how a policy palimpsest works, the role of blur as its own clarifying optic and how this recasts a very difficult political issue, I discuss an article on the failed-states rationale put forth in the George W. Bush Administration’s 2002 National Security Strategy. I then draw out wider implications, namely: Where are this century’s new democracies to come from, if not from the failed states?

Introduction

“Palimpsest” refers to older documents and tablets whose text and images have been overlain by more recent ones, without entirely effacing earlier inscriptions. A first, if not the first, explicit use of “policy palimpsest” is in Harvey Simmons’ 1982 book, From Asylum to Welfare: “During the nineteenth century, not one but a series of mental retardation policies were superimposed on one another, with newer policies obscuring, blurring or relegating older policies to minor importance, although never entirely replacing them… (my italics)”.

Longstanding policy issues are typically described as progressing, regressing, or waiting to be modified for the better. New evidence, it is said, comes to light with respect to assumptions, and over the long haul policies change or evolve. The policy palimpsest perspective offers a different take. Major policy is a pastiche of overwritten policy arguments and narratives without presumption of sequential change and learning. Each erasure or effacement takes the policy audience further away from any kind of “original” beginning, middle and end for the policy in question.

This is not news in the study of policy and management. What the policy palimpsest metaphor highlights is the very partial nature of reworking policy. Little from previous inscriptions comes to us intact or pane-clear through a policy palimpsest for a major issue. No inscription or point made shines bright and clear through the layers, fractures and lacerations in a palimpsest. Arguments putatively read from and off the palimpsest are in fact blurred; they are elided and reassembled for current purposes. They are in effect composite arguments, whose linearity and coherence are deceptive.

The failed states argument

To see how a policy palimpsest works, turn to a 2014 Foreign Affairs article, “The Rise and Fall of the Failed-State Paradigm: Requiem for a Decade of Distraction,” by Michael Mazarr, Professor of National Security Strategy at the National War College.

Mazarr starts by arguing that it was with 9/11 and the Bush administration’s 2002 National Security Strategy that the failed states argument came to the fore. “America is now threatened less by conquering states than we are by failing ones. We are menaced less by fleets and armies than by catastrophic technologies in the hands of the embittered few,” according to the Bush Doctrine. Mazarr emphasizes how accepted this starting point and conclusion were at that time:

The Democratic foreign policy hand Susan Rice, for example, wrote in 2003 that Bush was “wise to draw attention to the significant threats to our national security posed by failed and failing states.” Where the right emphasized security and terrorism, the left added humanitarian concerns. Development specialists jumped on the bandwagon as well, thanks to new studies that highlighted the importance of institutions and good governance as requirements for sustained economic success. In his 2004 book, State-Building, the political scientist Francis Fukuyama wrote, “Weak and failing states have arguably become the single most important problem for international order.” The Washington Post editorialized the same year that “weak states can compromise security — most obviously by providing havens for terrorists but also by incubating organized crime, spurring waves of migrants, and undermining global efforts to control environmental threats and disease.” This argument, the paper concluded, “is no longer much contested.” A year later, the State Department’s director of policy planning, Stephen Krasner, and its newly minted coordinator for reconstruction and stabilization, Carlos Pascual, argued. . .that “in today’s increasingly interconnected world, weak and failed states pose an acute risk to U.S. and global security. Indeed, they present one of the most important foreign policy challenges of the contemporary era.”

This failed states postulate, however and in spite of initial widespread support, encountered a great many problems. The first three Mazarr identifies are quoted at length without edit:

The threat posed by weak and fragile states, for example, turned out to be both less urgent and more complex and diffuse than was originally suggested. Foreign Policy’s Failed States Index for 2013 is not exactly a roster of national security priorities; of its top 20 weak states, very few (Afghanistan, Iraq, and Pakistan) boast geostrategic significance, and they do so mostly because of their connection to terrorism. But even the threat of terrorism isn’t highly correlated with the current roster of weak states; only one of the top 20, Sudan, appears on the State Department’s list of state sponsors of terrorism, and most other weak states have only a marginal connection to terrorism at best.

A lack of definitional rigor posed a second problem. There has never been a coherent set of factors that define failed states: As the political scientist Charles Call argued in a powerful 2008 corrective, the concept resulted in the “agglomeration of diverse criteria” that worked to “throw a monolithic cloak over disparate problems that require tailored solutions.” This basic methodological flaw would distort state-building missions for years, as outside powers forced generic, universal solutions onto very distinct contexts.

The specified dangers were never unique to weak states, moreover, nor would state-building campaigns necessarily have mitigated them. Take terrorism. The most effective terrorists tend to be products of the middle class, often from nations such as Saudi Arabia, Germany, and the United Kingdom, not impoverished citizens of failed states. And terrorist groups operating in weak states can shift their bases of operations: if Afghanistan becomes too risky, they can uproot themselves and move to Somalia, Yemen, or even Europe. As a result, “stabilizing” three or four sources of extremist violence would not render the United States secure. The same could be said of threats such as organized crime, which finds comfortable homes in functioning but troubled states in Asia, eastern Europe, and Latin America.

As the scholar Stewart Patrick noted in a 2006 examination of the purported threats issuing from weak states, “What is striking is how little empirical evidence underpins these assertions and policy developments. Analysts and policymakers alike have simply presumed the existence of a blanket connection between state weakness and threats to the national security of developed countries and have begun to recommend and implement policy responses.”

And although interconnectedness and interdependence may create risks, the dangers in such a world are more likely to come from strong, well-governed states with imperfect regulations than weak ones with governance deficiencies. Financial volatility that can shake the foundations of leading nations and cyber attacks that could destabilize energy or information networks pose more immediate and persistent risks than, say, terrorism.

A third problem was misplaced confidence about the possibility of the mission’s feasibility. The last decade has offered an extended, tragic reminder of the fact that forcible state building simply cannot be accomplished by outsiders in any sustainable or authentic way. When a social order has become maladapted to the globalizing world — when governing institutions are weak, personalized, or kleptocratic; corruption is rampant; and the rule of law is noticeable by its absence — there are simply no proven methods for generating major social, political, economic, or cultural change relatively quickly.

As the Australian political scientist Michael Wesley argued in a brilliant 2008 essay, state weakness is primarily a political problem, and yet state building is often conceived and executed as if it were an apolitical exercise. “The intention of remaining aloof from politics while concentrating on technocratic reforms has proved unrealistic,” he wrote. “Even seemingly technocratic tasks confront international administrators with essentially political decisions: the nature and basis of elections; which pressure groups to consult; the reintegration or de facto separation of ethnic communities; school curricula; degrees of public ownership of enterprises; the status of women; and so on. However technocratic their intention, state-building missions inevitably find themselves factored into local rivalries.”

A policy palimpsest perspective recasts this line of argumentation, because this perspective clearly has difficulties with a governance or statecraft that centers around “here’s what we learned as to why it [the Bush Doctrine] didn’t work” or “we always knew it wouldn’t work.” Both may be true, but they fall well short of pushing the truth further.

–The wider truth is that Mazarr’s critique is itself a composite argument from the remains of other arguments that have been around for some time and which were reinscribed in new ways when it came to the “failed states” doctrine. What Mazarr identifies as the lack of definitional rigor in isolating key features of failed states is the result not only of new analysis since 2002, but also of the erasure of elements in what definitions we had had, all combined with the recognition that the lack of such rigor drives an uncertainty indistinguishable at times from a sense of “terror.”

Any “we used to know X, but now we have come to know Y” is too sequential and deterministic for what has been in effect an ebb and flow of government policymaking more akin to accretion than evolution. The most interesting feature about the Bush Doctrine isn’t that it proved wrong in ways that matter, but that it was overwritten and continues to be by all manner of subsequent contingencies. Where are those points we heard at the time of 9/11, but which have since been scoured out of the record, at least in the pages of Foreign Affairs? Namely: It was a tragedy waiting to happen, and now that it happened, an opportunity for America to take the lead in a new rapprochement with the Islamic World. These arguments were dropped from policy discussions not because of “new evidence and analysis,” but because they were effaced outright or blurred over. In this way, “straight-forward” policy arguments since 9/11 have been attempts to bowdlerize the policy palimpsest.

The image of the burning twin towers of the World Trade Center has been etched into the policy palimpsest of failed states in such a way as to obscure other lines and images of argument. The criticism that the Bush Doctrine propelled along one trajectory when others were available misses the other paths that were not there until 9/11 and the subsequent contingency-fueled implementation of the Bush Doctrine. To be as clear as I can, any specific reading, such as the Bush Doctrine, off of a military-strategy policy palimpsest is not just specific with respect to historical moment and context; whether or not “universalized” as principles, the composite argument arises from, when not actually reflecting, the gaps, contradictions and lack of consistency in that policy palimpsest.

When it comes to major, longstanding policy issues, the coherence of the composite argument is matched by the inability to achieve exhaustive insight into the originating palimpsest. As such, no major policy sheds its shattered origins in the palimpsest; any composite argument brings the entire policy palimpsest with it. In this way, no major policy is or could ever be only what it says it is. At best, a composite argument seeks to be the acrolect or prestige language of its creole palimpsest.

–Each composite argument is always open to interrogating, “What is being missed that was effaced from the palimpsest in order to assemble the argument?,” since no composite argument can be final. As a policy palimpsest sources all manner of composite arguments, any one is “revisable,” even at the moment the policymaker is insisting “this is the right policy for the right time in the right place.” Thus, when I insist that a major policy issue is uncertain, complex, conflicted and incomplete at the same time—no matter how coherent the current composite argument—it is because its policy palimpsest itself reads that way—each reading an arrested disruption, the palimpsest serving as a brake on isolating any single argument, all readings reflecting the analyst’s curatorial function of assembling an installation.

With all of this going on, how then to take Mazarr’s argument seriously? But that’s my job, not Mazarr’s. I don’t expect Mazarr to do my work as an analyst. It’s good enough that he provides an argument as far as it goes. I, however, am the one to find the blur in his argument and make visible what is there already and already missing. Once you have identified what is missing from the composite but was in the palimpsest being read off (no guarantees here), you have identified means to recast the complex issue in new (renewed) ways. Mazarr makes it easier for us by leaving traces for the blur in his critique, which is complicated by all manner of elections, pressure groups, ethnic communities, school curricula, degrees of public ownership, and that wonderful admixture, “and so on.”

I keep referencing “blur.” This is because blurring, rather than the conventional clarifying, turns out counterintuitively to be a powerful analytic optic with which to parse and rethink major policy. The productive “blurring” I have in mind has been a centerpiece in the photo-paintings of Gerhard Richter, the German painter. His reasons for blurring go to the heart of understanding policy palimpsests and composite arguments, like the failed states one. This entry ends with one such rescription based in the blurred and blurrable Bush Doctrine.

The role of blurring in Richter’s photo-paintings and its implications

Early in his career, Gerhard Richter copied photographs and enlarged them onto canvas as oil paintings and then mechanically blurred them (e.g., with a squeegee). Below are three of Richter’s famous photo-paintings from the 1960s incorporating the blurring effect (accessed online on November 19, 2019 from pinterest.com):

Woman with an Umbrella (Frau mit Schirm), 1964, oil on canvas.         

Aunt Marianne (Tante Marianne), 1965, oil on canvas.

Uncle Rudi (Onkel Rudi), 1965, oil on canvas.

When I tell you that the original photographs for the paintings were of Jackie Kennedy at JFK’s funeral, Richter’s aunt who was sterilized and starved to death by the Nazis, and his uncle who fought for the Nazis and died in World War II, you appreciate that more is occurring in the photo-paintings than first meets the eye.

My proposition is that the relation of a Richter photograph to the photo-painting made from it is much like that of an argument we say we draw directly from the evidence to the composite argument assembled from the policy palimpsest.

The photo, as has often been reiterated, is (too) easily treated as veristic depiction. “A photograph does this more reliably and more credibly than any painting,” according to Richter; the photo “usually gets believed, even where it is technically faulty and the content is barely identifiable”. It’s hackneyed that photos come to us as literal, representing things as they are, an unvarnished mechanical record of mimetic fidelity rather than something like, say, what’s reflected in a puddle. “We believe that photographs reflect reality and that the information relayed by a photograph is much more precise and convincing than even the best drawing,” says Richter.

Such too was the aspiration of the mid-20th century policy and management sciences, where our arguments would correspond to reality far better than ever before. So too, little else seems quite as direct as the image of the burning towers for what has gone terribly wrong. Yet, while we aspired to photo-clear arguments in policy, analysis and image are true only as far as they go in a world reflected in and off of all its policy palimpsests.

Richter copied a photo onto a canvas precisely because the photo is not what he ended up with in the painting. “As a painting, it changes both [the photo’s] meaning and its information content.” For “even when I paint a straightforward copy, something new creeps in, whether I want it to or not: something that even I don’t really grasp”. “I’ve never found anything to be lacking in a blurry canvas. Quite the contrary: you can see many more things in it than in a sharply focused image”.

Just how does the new creep in? By deliberate means, but with accidental and contingent effect. The blurring was achieved by wiping horizontally across the canvas so that no detail stood out and everything appeared in motion. “Factors like overexposure and lack of focus found their way in unintentionally, but then they had a decisive effect on the atmosphere of the pictures”.

–But to what decisive effect?

Now comes the key point: Contingency enters into play through blurring, posing emergent elements that now have to be thought about further. The new effects, once produced, became the occasion for thinking about what they meant for the painting, and this “thinking more about” introduced objectivity directly into the thinking. Richter explains the link between contingency (what he calls chance) and the ensuing objectivity:

What part does chance play in your paintings?

An essential one, as it always has. There have been times when this has worried me a great deal, and I’ve seen this reliance on chance as a shortcoming on my part. . . .[But] I need it in order to carry on, in order to eradicate my mistakes, to destroy what I’ve worked out wrong, to introduce something different and disruptive. I’m often astonished to find how much better chance is than I am.

So this is the level on which openness is still thinkable and credible in real terms? Chance?

It introduces objectivity, so perhaps it’s no longer chance at all. But in the way it destroys and is simultaneously constructive, it creates something that of course I would have been glad to do and work out for myself.

If I understand Richter, transforming a photograph into a photo-painting and then mechanically blurring it became a way to render an image as if for the very first time: to become alert to a quite different object from that known in the photo and in contradistinction to those who take photos as the baseline for objectivity. Blurring pops the bubble of photo-clarity, etches away any monochrome facticity. Blur winkles out objectivity from a masquerading photo-clarity—shows photo-clarity to be at best an identikit—and takes objectivity farther than it had gone before. Blur, if you will, is the generosity of chance made objective.

Anyone who has tried to operationalize a project plan or blueprint knows just this sense in which its implementation produces something objectively real via the experience of contingency and surprise amped up by difficulty and learning, however hard-won.

Yet more is going on than creating something different and new, as separate objects of knowledge, for even after that the photo-painting and the originating photo continue to resonate and reverberate, like two tuning forks in sympathetic vibration next to each other. Primarily, the photo-painting defamiliarizes any “immediacy” the photo may have had. When I wrote earlier that it was I who blurred Mazarr’s failed states argument, I am moving myself away from how I feel about the image of the burning twin towers. By treating the image as inevitably blurring and blurrable—the double exposure that is the image branded into the brain as well as what is occluded in the process and thereafter—I claw my way to a kind of candor: In seeing things as if for the first time I obligate myself to evaluate the things I am now stuck with, including that which has now gone missing.

The point is not that the photo-painting is a hazy, smudged, imperfect rendering of what is otherwise photo-clear. Photo-clarity is a kind of clarity, but it does not go far enough. Blur, in contrast, is the kind of clarity that brings to light the presence of contingency associated with photo-clarity as well as the new that emerges, and with both the loss of any immediacy that photo-clarity had.

What does this all mean for the failed states argument?

So what if a composite argument juxtaposes words, phrases and statements from other documents and sources? So what if in the process a great deal has been effaced, erased or otherwise ignored in passing off this combined assemblage as an original or straightforward argument in and of itself?

My answer: To read or listen to a composite argument is to place its readers and listeners under the cognitive demands of evaluating that argument, particularly in terms of what has been rendered missing in order to be read off coherently for the purposes sought. The obligation is to resurface the blur. But—and here is the key moment of reflexivity—to evaluate a composite argument becomes its own occasion to blur that composite argument. This means there is enormous cognitive pressure to continue to overwrite the policy palimpsest with further interpretations and effacements. Where so, then analysts should be thinking more about that blurring ahead—a blurring that works against any single focus of any “single” composite argument—for the failed states argument.

To that end, I’d like to suggest that anyone keen on the failed states argument might be better off spending time in the distractions of thinking about how failed states are to be the source of 21st century democracies rather than the fountainhead of failure or worse. I had been complaining about all those “except-Africa” narratives, where everything is said to work “except in Africa,” when Aaron Wildavsky, the political scientist, upbraided me: “Now, where do these critics think the next century’s democracies are going to come from?” Our current preoccupation with failed states falls short of that need to go further. Calling them failed states registers our own historical confusions and certainties about democracy and its changing practices.

–If I’m right, the challenge of analysis in a policy palimpsest world of always-revisable composite arguments is to be two steps ahead of any clear-cut argument and one step ahead of any new composite argument inscribed back onto the palimpsest. Two steps ahead because major policy arguments cannot be that clear-cut, where the analyst must find the blur of what’s missing. One step ahead because inscription back onto the palimpsest is itself never diamond-sharp across a clean surface. Policy arguments that are urged on us, it bears repeating, because of their mathematical elegance, engineered simplicity, crystalline logical structure or ineluctable import are a perilous kind of knowledge. They only wink at complexity; they certainly are not to be found via a policy palimpsest.

This matters because any policy palimpsest offers up the prospect of recovering blur and the forgone prospects. Analysis in such a world is like fly-fishing, where each artificial fly cast onto the water’s surface is already hooked to what is out of sight for what seems not to be there.

Principal sources:

Mazarr, M. (2014). The rise and fall of the failed-state paradigm. Foreign Affairs (January/February): 113-121.

Richter, G. (2009). Gerhard Richter: Writings 1961–2007. Eds. D. Elger and H. U. Obrist. New York, NY: D.A.P. (Distributed Art Publishers).

Roe, E. (2019). Social Complexity, Crisis, and Management. In Oxford Encyclopedia of Crisis Analysis. Oxford University Press. Online Publication Date: Aug 2019.

Simmons, H. (1982). From Asylum to Welfare. National Institute on Mental Retardation: Ontario, Canada.

A different take on the traffic mess

–Start with a simplified assumption to be problematized shortly: The net monetary value of any transportation system aggregated across all car users increases with the number of cars using that system up to the system’s carrying capacity for cars, which if exceeded leads to a decline in net value. This is shown in Figure 1’s net monetary value curve, AA’, which falls after reaching the system’s limit in carrying more automobiles (CC):

Assume the only value of interest is the value of the transportation system to car users. Assume initially that CC is fixed and that the current number of cars on system roadways exceeds that value. It may be possible to add new roads and new lanes over time, thus moving CC to the right (“supply management”). It may also be possible to reduce the number of cars to the left of CC by congestion pricing, vehicle taxing, and other tolls (“demand management”). Assume, however, that such interventions are not possible anytime soon (or if possible, their effects are not to be realized soon).

What can the transportation professional do instead in the face of congestion?

–By way of answer, we must better understand key terms. Just what is the “value” of the transportation system from the perspective of the car user? The benefits and costs are difficult to monetize fully, but assume the vertical axis of Figure 1 indicates their net positive value. Further benefits follow from the other ways to increase the value of the transportation system, even when it is not possible to increase the number of cars on the roads, e.g., through reducing average car size or narrowing lanes. Value also increases, ceteris paribus, when the number of passengers in a car increases (this being the important issue of shared mobility) and/or when the number of uses to which the car is put by its users increases.

Once other net benefits are added, the net monetary value curve rises, illustratively, to AB in Figure 1, with a gradual, delayed decline after CC being reached. More multiple-use vans on the road, especially if they replace existing vans and vehicles, increase the value curve before carrying capacity is reached. Once carrying capacity is exceeded, the time lost being stuck in traffic will be offset for some period by being able to do more things in one’s vehicle than before.

Diagrammatically, the increment in value between AA’ and AB, particularly after CC, is the value car users attach to a good mess coming out of the bad mess of the formal transportation system.

This is the value car users attach to producing a mess (AB) better than the one (AA’) that would have happened instead. Other things equal, the aim of transportation professionals is to enlarge that increment. For example, not only do professionals want people “to get their best ideas” while stuck in traffic, they want many more people to do so.
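The increment between the two curves can be made concrete with a toy numerical sketch. Figure 1 specifies no equations, so everything below is an assumption purely for illustration: `value_aa` and `value_ab` stand in for the AA’ and AB curves, `CC` for an assumed carrying capacity, and the `uplift` and `grace` parameters for the added per-car benefits and the delayed decline described above.

```python
# Toy sketch of Figure 1's curves; all functional forms and numbers are
# illustrative assumptions, not taken from the original text.

CC = 100  # assumed carrying capacity, in cars

def value_aa(n):
    """Illustrative AA' curve: net value rises with cars n up to CC, then falls."""
    if n <= CC:
        return n * 1.0
    return CC - (n - CC) * 2.0  # congestion erodes value past carrying capacity

def value_ab(n, uplift=0.2, grace=20):
    """Illustrative AB curve: a per-car uplift (e.g., shared mobility, multiple
    uses per vehicle) plus a grace band that delays the decline past CC."""
    if n <= CC + grace:
        return n * (1.0 + uplift)
    return (CC + grace) * (1.0 + uplift) - (n - CC - grace) * 2.0

def increment(n):
    """The value car users attach to the good mess (AB) over the bad one (AA')."""
    return value_ab(n) - value_aa(n)
```

On these assumed numbers the increment grows precisely where the text says it matters, past carrying capacity: increment(80) is 16, while increment(120) is 84, since AA’ has collapsed under congestion while AB’s decline is still deferred.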

–The simplified figure suggests two other ways to change net value. One is to redefine carrying capacity; the other is to redefine the “transportation system” and its services of interest. Carrying capacity has been a popular concept in modeling traffic congestion, its intuitive appeal being that there must be a limit to the number of cars that a system can accommodate, other things constant. As other factors are rarely constant, carrying capacity is necessarily a variable rather than a given.

This leads to the second way to alter net value. Just what is the “transportation system” being evaluated in terms of a good or bad mess? It need not only be the official system discussed so far. It is possible to redefine the transportation system of interest by changing the scope and knowledge bases for the “system” being analyzed and managed.

Imagine you are a professional in the Regional Transportation Authority (RTA). You have just undertaken a stratified random sample survey of RTA residents as to what they perceive to be locally successful transportation interventions about which they have first-hand knowledge. Focus groups and public meetings have subsequently been held, identifying other perceived successful interventions in the region. The currently preferred approach of RTA professionals is to pick from the list, say, interventions 3, 57 and 72—all of which fit in with ongoing planning and business as usual—and seek to extend them elsewhere in the region.

You however take a different tack with respect to the list. Assume it identifies interventions that include traffic calming sites in some RTA neighborhoods, increased off-street parking in others, widening streets at different sites, adding bicycle lanes in another set, and so on. Your task is to determine the implied or de facto “transportation system(s)” linking these discrete (groups of) sites together.

–The implied systems, if any, are more than road networks that connect the sites concerned. The existing availability and distribution of garages for cars, both above and below ground, connects sites as well. Yet the Region’s transportation system, like many RTAs, does not consider the de facto, informal network of public and private garages to be a major point of intervention in improving the formal, official transportation system. In this way, the informal system(s) can be conceived as lying in the domain of the traffic professionals between system patterns they perceive and localized scenarios that they never thought about until this point.

The challenge in the constructed example is to ask, “What are we missing by focusing only on the formal transportation system?” and, in answer, to see what could or does connect sites of successful interventions into a system or network that can be supported by transportation professionals.

–One such informal system is illustrated in Figure 1. Here the transportation system is an informal one, i, implied by the connected sites, with its value curve ACi and its carrying capacity, CCi (which would now be cast in terms of local knowledge and familiarity with specific traffic patterns).

Diagrammatically, ACi is the net value car users attach to a good mess that could go bad at some point near or after CCi. If traffic professionals cannot squeeze good messes out of the bad mess that congestion has become (i.e., realize and increase a value increment between AA’ and AB), they can identify, protect and enhance different systems that are not (yet) bad messes.

What should the professionals do if there are neither informal systems to be improved nor any value increment to be realized in the formal transportation system? The “best” they can do under such circumstances is to try to keep Figure 1’s AA’ as “close” to the left of CC as possible or on the non-declining portion of AA’, should it exist, after breaching CC. Barring either, the professional is left with trying to halt or delay the further decline of AA’.

–Four kinds of good messes are, in other words, to be distinguished in the constructed example. They are the product of two states and transitions, namely, what start out as good or bad messes and what end up as more of a good mess or less of a bad one. Table 1 summarizes the four positions:

Table 1: Four Types of Good Messes in Traffic Congestion

In case it needs saying, each is a good mess in its own right, though perceptions and expectations about the four cells vary considerably.

Stopping rules and contested regulation

–Society’s critical infrastructures often have a unique stopping rule: the pressures of real time mandating safe and continuous performance when it matters, right now.

One product of their dread of real-time system failure has been the decision rule that it’s better to entertain a false alarm than have a true alarm ignored. This is observed in the critical infrastructures we studied and their reliability management.

There is, though, no reason to believe that what one infrastructure considers to be “alarming near-hits” is the same as what other infrastructures consider, let alone what government identifies through its own disaster scenarios underlying legislation and regulation. Even if precaution were a dominant social value—and it isn’t—it would not be the same value dominant across all society’s critical infrastructures.

It should not be surprising, then, that the government regulatory arena becomes the site of contestation between any “precautionary principle” proposed for society and the stopping mechanisms and rules already in place by its key critical infrastructures.

–A form of societal regulation already occurs in real time by virtue of critical infrastructures prioritizing systemwide reliability and safety as social values (while in the same instant infrastructures serve as the indispensable foundation of real-time economic activities). Societal values are, however, differentiated within infrastructures, not only across them.

Consider the commonplace that regulatory compliance is “the baseline for risk mitigation in infrastructures.” Even so, there is no reason to assume that compliance is the same baseline for, inter alios, the infrastructure’s micro-operators in the field, including the eyes-and-ears field staff; the infrastructure’s headquarters’ staff responsible for monitoring industry practices for meeting government compliance mandates; the senior officials in the infrastructure who see the need for far more than compliance by way of enterprise risk management; those other professionals in the same infrastructure responsible for thinking through a wide range of “what-if” scenarios that vary by all manner of contingencies; and, last but never least, the infrastructure’s reliability professionals—its control room operators, should they exist, and immediate support staff— in the middle of all this, especially in their role of surmounting any (residual) stickiness by way of official procedures and protocols undermining real-time system reliability.

These consequential differences in orientation when it comes to “baseline compliance” mean societal values of systemwide reliability and safety can be just as differentiated and distributed as these staff and their responsibilities are. To put it another way, where highly reliable infrastructures matter to a society, it must also be expected that the social values reflected in these infrastructures not only differ across infrastructures but within them as well.

–Why is this insistence important? It’s routine but misleading to say that “government” has allocative, distributive, regulatory and stabilization functions, and leave it at that—when critical infrastructures are their own allocative, distributive, regulatory and stabilizing mechanisms for generating and distributing social values around societal safety and security, now broadly writ.

Yes, government relies on infrastructures to meet its own functions and, yes, there is an overlap and dependency between the two sets of functions. But the point is infrastructures have their own institutionalized regulatory niche—who else can identify and correct in real time inevitable lapses in procedures promulgated by the official regulators of record?—in addition to the uniquely allocative, distributive and stabilization functions the infrastructures also have.

Sadly, few think to ask, let alone study, how these critical infrastructures—many of which are privately owned or managed in the US—independently affect societal risks, social values and societal regulation. If they did, they would better understand the extent to which infrastructures having stopping rules enables those in government not to.