New environmental narratives for their End Times (longer read)

The end-time crisis of next-ism. Many ecologists and environmentalists I’m familiar with insist that more things can go straight-out, hair-raisingly wrong than they can go right. It is easier to mismanage an ecosystem than it is to manage it. Ecosystem collapse is more certain than ecosystem sustainability; negative externalities are to be expected, positive ones not. What happens next is worse.

Nature on its own is too complex to control, but our mismanagement of nature unleashes forces we ourselves cannot control. Probabilities of large system failure and cascades are primed to flip to 1.0 in no time flat.

If livestock numbers are rising, it must be because of “a tragedy of the commons.” If resource extraction is going up, it must be because of “globalization.” If trees are being cut, it must be “deforestation,” however many trees are being planted elsewhere. If the Sahel moves, it must be because of “desertification” or “worsening global climate change.”

They insist that the majority of people don’t see how bad things are, only they do; that their minority has no power, only the others have; and that it’s never too late for serious talking, but it’s too late to expect the majority to give them a serious hearing.

We must manage the planet’s resources better, but no one can expect technology to help us do so. Economic growth is never a sufficient condition for improving the environment, while economic growth’s irreversible impacts on the environment are always a sufficient condition for precaution. Except, however, when failure is not an option! So much is uncertain that anything is possible, and “thus” everything must be at risk. Whatever humans touch they make worse: this is Barry Commoner’s Third Law of Ecology.

What worked yesterday—the very thing we thought we knew and could depend on—bites back today, with tomorrow looking very much the worse. The once-good is the now so-bad. Indeed, what follows good and bad is always bad.

Let’s call my colleagues’ standpoint, next-ism.

This—realism, free-floating anxiety, existential panic, dog-whistle alarmism—describes a world clearly not made to my colleagues’ specification. There is not the slightest intimation or whiff of possibility that the decades of environmental advances since the 1960s have been a noble experiment. Nor is there acknowledgement that it’s no surprise that if you spread environmental practice worldwide—plant more trees and such!—you spread tree diseases worldwide, or such.

Nor is there a scintilla of recognition that their exhortations to get us to do the right thing by way of the environment pale and wither before the historical record of really-existing humans with real problems in real time who do not follow all orders given them, even in the most authoritarian of regimes (as we now know for communist East Germany and China).

Nor is there a scintilla of recognition that the major feature of their disaster scenarios isn’t the disasters but the massive lack of attention to the multiple ways necessary to triangulate and increase our confidence that these disasters can happen, given they are so grave and concerning in the first place. But there’s no time for that!, next-ism asserts.

Where does Next-Is-Worse leave us?

For me, it’s easier to understand why “the environmental movement” is blamed for failing to stop or otherwise mitigate anthropogenic climate change, species extinction, and/or biodiversity loss. Corporate and economic interests clearly can and do brainwash us into believing things are better than they are environmentally. We don’t need any more evidence to substantiate that! But one can scarcely credit the same interests for having brainwashed my colleagues into next-ism.


So, what to do? Instead of rushing to the counter-evidence, let’s assume that next-ism is warranted. The question then becomes: Even if true, can we push next-ism further by way of specifics?

I think we can. For that matter, when you push next-ism toward its logical and empirical conclusions, it quickly begins to look less fatalistic.

–In this thought experiment, let’s agree that the Anthropocene’s rotten core is modernity—international capital, American consumerism, global urbanization—while in the same breath insisting all this is best described in the very terms of modernity: Anything and everything is at risk; all thinkable risks are premonitory; any can be catastrophic. Ensure this angst has no closure, nothing prevents the proliferation of worst-case scenarios, the dose makes the poison, how then to plan…

Oops, did I write, “plan”? Since when are epochs a planning horizon for anything? And here you thought policy and management in the Holocene were difficult!

Economists will have to give up discounting the future into present-value terms, as the future is Anthropocene shocks. Engineers and ecologists dither with resilience and adaptive capacity, as if bouncing back or forward were optimal over a longue durée that by definition can’t be optimized. Our predictions—and we do insist on forecasting!—will be as effective as predicting the next poem from the poet’s body of work.
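The discounting point can be made concrete with a minimal sketch (the $1,000,000 avoided-damage figure and the 4% annual rate are invented for illustration): at any constant positive discount rate, exponential discounting shrinks epoch-scale futures to effectively nothing in present-value terms.

```python
# Illustrative only: how standard exponential discounting treats the far
# future. The $1,000,000 avoided-damage figure and 4% annual rate are
# invented for this sketch.

def present_value(future_value: float, rate: float, years: int) -> float:
    """Discount a future amount back to today at a constant annual rate."""
    return future_value / (1 + rate) ** years

# The same avoided damage, valued today, at three horizons:
for years in (10, 100, 1000):
    pv = present_value(1_000_000, 0.04, years)
    print(f"{years:>5} years out -> ${pv:,.2f} today")
```

At 4%, the million dollars is worth roughly $676,000 ten years out, around $20,000 a century out, and effectively zero a millennium out, which is the sense in which an epoch of shocks sits outside present-value planning horizons.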

We’ll look back at relegating “progress” to the scare quotes of always-late capitalism as the easiest thing humans did in the Anthropocene. All the while, alarmism fills the vacuum left behind by lack of remedy-and-implementation at a level of granularity that global and planetary explications of cause-and-effect do not have.

–The problem is this: The long-term and the planetary are deployed so as to nail home the interconnectivity of it all. Everything is connected with everything else—without however acknowledging this must also mean nothing is completely reducible to anything else. Relations stop nowhere, as novelist Henry James put it, but they are nevertheless discrete relations. We can’t ignore irreducible particularity just as we cannot ignore interrelatedness. The burden of proof, however, is on the universalizing interconnectionists to detail why and how and in what forms this messy, vernacular particularity—and its allied notions of “case” and “context”—arise and endure.

Specifics matter more than ever, precisely because we are in the Anthropocene.

–The specifics I have researched entail a more granular focus in environmental scenarios on the real-time operations of human societies’ key critical infrastructures within a regional context–especially if your concerns are as environmental as those driving the Anthropocene:

  • Granular because risk and uncertainty are always with-respect-to specific failure or accident scenarios–and the devil is in the details of the scenarios;
  • Real-time operations because the measure of effectiveness is to manage effectively now and within the Anthropocene;
  • Operations of key infrastructures because the reliability and safety of these large socio-technical systems–think critical energy and water supplies–are not only vital to society, immediately, but are often based in ecosystem services mandated for restoration or sustainability; and
  • Within a regional context because Global Climate Change modeling and other types of environmental modeling accept the region as the unit of analysis for near-term risk and uncertainty management. (High-resolution models using LIDAR data and other GIS approaches already exist that provide climate-related flooding and wildfire information useful for critical infrastructures when it comes to their nearer-term cycles, e.g., for investment and depreciation purposes.)


But what are the specific environmental scenarios? Some environmental next-ists might say that the above is too little/too late or that the time left doesn’t allow for such fine points. Others would, I think, want to see what these alternative scenarios look like before concluding that going any further is a waste.

If the challenge is to identify specifics—that more granular focus on real-time operations of societal institutions within a regional context now that we are in the Anthropocene—it pays to study those whose current jobs are to do just that.

Five groups who seek to do more than cope with the real-time Anthropocene are identified and sketched below.

1. One group is found in the control rooms and surrounding support staff of large critical infrastructures--the villains of the piece in many environmental crisis narratives. Yet these infrastructures, particularly water and energy, are based in ecosystem processes and services and many operate under the dual mandate of maintaining service reliability while at the same time safeguarding, if not actually restoring, associated ecosystems.

The more I studied control room operators, the more I learned they are far from the environment’s enemy. Turn to three neglected storylines based on the really-existing practices of reliability professionals in highly complex socio-technical systems:

Practice 1: Bring ecologists, biologists & renewable energy specialists directly onto the floor of the infrastructure control rooms. This is already being done, but not to the extent now possible. Why? Because if environmental specialists cannot now reliably advise on real-time infrastructure operations (whose services, like large water supplies, are founded upon ecosystem processes), why would we believe that those promising, say, a Sustainable 2035 will know how to do so before 2035, when the real-time details elude them now? If in the real world we are only as reliable as the next case ahead, why ever is doing well in the next case any less important than doing well in the ad seriatim catastrophes ahead?

Practice 2: Redefine system boundaries. Wetlands have been an iconic ecosystem in ecologists’ stories. Yet wetlands serve as “ecoinfrastructures” in other large system definitions. Those that moderate the effects of wind and waves on the adjacent levee structures are part of the levee system definition just as the levees provide an ecosystem service by protecting these wetlands in other adverse events.

In a storm, a single stretch of road may become an essential part of repair access for electricity lines as well as the means of access for levee floodfighting crews. In this case, the stretch of roadway becomes part of the emergency response of two infrastructures. A roadway with wildlands on one side and electricity distribution lines on the other can serve as a firebreak in the emergency response to an approaching wildland fire.

From this standpoint, it need not be agricultural versus urban versus environmental. From one perspective, it looks like three separate systems in competition with each other: a forest next to grazing land next to arable fields, no one of which can expand without loss to the other. From a perspective that treats them as subsystems to one ecosystem, the grazing land serves as a firebreak between the forest and arable holdings.

So too the California Delta can be seen not just as its own system but also as a buffer against encroaching urbanization from the east (Sacramento and Stockton) and west (San Francisco Bay Area), much as agriculture in South Florida and the Western Netherlands has buffered against urbanization moving into those regions’ “green” areas.

It follows that a key empirical issue is where that extra investment would produce the greatest positive impact on the ecosystem and landscape: planting trees and greenscapes in Sacramento or Stockton (urban ecosystems); reducing chemical agriculture on Delta islands (agricultural ecosystem); and/or constructing more wetlands around Delta islands (the environmental ecosystem).

Practice 3: Act on the full implications of the infrastructure control room as a key institutional & organizational formation for ensuring the high reliability mandate of improved ecosystem services and processes. Control rooms in large critical infrastructures are one of the few institutional formations that have evolved over time and across multiple contexts to promote high reliability repetitively in the management of complex socio-technical systems.

The implications are considerable. We keep hearing that global problems must have global solutions. If true, those solutions will never be highly reliable at that scale. There is, for example, no global water infrastructure nor a cadre of its real-time managers in the foreseeable future.

All of which explains why the shift away from global climate change models to regional ones is so significant. (We’ve embarked on doing so in California.) It is far more plausible to imagine water and energy control rooms coordinating at the regional level than globally.

2. Where the key point holds—our models and narratives must become more granular with respect to time and scale for the systems—then we also have a way of recasting the debate in ecosystem management and restoration. In so doing, we identify another source of future environmental narratives--and one more fitting with global and regional complexity.

Two ideal types, the carvers and the molders, dominate narratives about ecosystem conservation and restoration. As idealized, carvers see their task as releasing the true ecosystem from the surplusage around it. Chip away overpopulation, chisel off the built environment, get rid of the non-native species and eliminate pollution—only then does the ecosystem as it was meant to be have a chance of being disclosed and sustained. In the carving orientation, the ecosystem manager or restorer assumes the landscape has within it the good form and function created for it as nature, not by us.

The second ideal type comprises ecosystem managers and restorers who see themselves essentially as molders of clay (sometimes, literally). They mold the landscape by trying to press onto it contemporary versions of complexities it once had. Here there is no prospect of repristinating nature. Ecosystems have to be designed and maintained, though their complexity may be little like the pre-disturbance or pre-settlement states. (Indeed, the grievance that ecosystems are continually degraded signals that landscapes are moldable.)

–Now comes the important part. Unsurprisingly, really-existing ecosystem managers and restorers have fallen somewhere between these two textbook orientations—they’re ideal types, after all—making do with what’s at hand and with what is possible. What is clearer now, however, is that this good-enough improviser is itself a third ideal type for ecosystem management and restoration.

Improvisation has its own idealized and practical benchmarks and practices. You see this, most prominently, where cities are discussed as “urban ecosystems.” Cities are highly differentiated systems with their own improvised sets of species and processes that have in some cases considerably more biodiversity than commonly supposed.

From this perspective, not only will there be multiple benchmarks (actual improvisation inevitably falls short of ideal improvisation), but the scenarios of success or failure (more accurately, effectiveness) will also be with respect to different real-time uncertainties than those that perplex carver and molder. We should expect that from this crucible of granularity will come new, more case-specific environmental narratives.

–What might the case-specific narratives look like and why would they matter? The widely-reported pollution in China has been attributed in significant part to its coal-powered electricity plants and other hazardous facilities. That may be true as far as it goes, but the point needs to be pushed further.

I, for one, want to know more about the real-time conditions under which middle-level operators and managers in China are operating these large-scale infrastructures. Are the reliability professionals not there, or are they there but operating under ever more prolonged “just-for-now” conditions, waiting for more options and better practices? We need to hear from Chinese scholars researching regional high reliability infrastructures (including China’s massively significant high-speed rail system).

3. Environmental policymakers and academics have always been a heterogeneous group and they too are a source of new environmental narratives. One example will have to suffice, this one returning us to the importance of the region as the unit of environmental analysis and action.

An influential policy and management arena in the U.S. and abroad revolves around “environmental governance.” Here I focus on an early researched formulation. Delmas and Young (2009) present a simplified schematic for understanding environmental governance in terms of multi-level interactions (local, regional, national, international) among three main “actors” (public sector, private sector, and civil society).

Delmas and Young plot some interventions into Figure 1, drawing from the case studies and associated literature reviews in their edited volume, Governance for the Environment: New Perspectives. For our purposes, note the environmental arenas where multiple spheres overlap, particularly those related to what has been called eco-labelling, placed at the center of Figure 1 (the shared area of the three intersecting sectors).

One chapter in the volume (Auld et al 2009) gives considerable attention to eco-labelling interventions in terms of third-party certification schemes that ensure goods and services are sustainably sourced. For example, we have programs that certify the produce is organically grown, the coffee is fair-trade, and the timber comes from sustainably managed forests. Such certification programs typically work on two fronts: encouraging consumers to buy certified products while discouraging them from purchasing non-certified products or services.

Now the new environmental narrative: Recast the role of eco-labelling. A major, persisting problem in the California Delta is deep concern over the reliability and safety of the levee (dike) system protecting island agricultural activities there.

Imagine a third-party program (i.e., some organization different from the vested interests of the US Army Corps of Engineers, California Department of Water Resources, and Delta-based reclamation districts) that certified whether or not any given Delta agricultural land (broadly writ to include livestock, aquaculture and non-traditional crops) was protected by levees that met a standard of high reliability in design and maintenance. Imagine consumers would be encouraged to buy “levee-certified” goods and services and discouraged from buying those that were not so certified. Imagine, in short, the same infrastructure element—the levee—but now having a different function than “keeping water out” only.

The wider buying public in California and beyond would be encouraged to purchase only those goods and services from adjacent county entities that had supported levee certification in and around the Delta water intake for the county (or with respect to any county in similar circumstances). In like fashion, the wider buying public would be discouraged from purchasing from those entities whose goods had been transported on the deepwater shipping channels passing through the Delta to Sacramento and Stockton, if those firms did not support levee improvements up to third-party certification standards along those shipping channels. In parallel, the wider buying public would be encouraged to buy agricultural products only from those Delta islands that had been levee-certified and discouraged from buying from those that had not.

4. The fourth group from whom to expect new environmental narratives (not just regional but “big picture” ones) are practicing ecologists and environmentalists themselves. This group is more differentiated than it is given credit for in overarching terms such as “movement” and “discipline.” Indeed, that differentiation has been there from the beginning and therein lies its own under-acknowledged environmental narrative.

The term, “ecosystem,” comes to us through A.G. Tansley’s 1935 article, “The Use and Abuse of Vegetational Concepts and Terms.” He has been criticized for his role in colonial British ecology, but here Tansley is of salience for two different reasons.

–First, ecosystems for Tansley make no sense without taking humans and their interactions with the landscape into account. “We cannot confine ourselves to the so-called ‘natural’ entities and ignore the processes and expressions of vegetation now so abundantly provided us by the activities of man. Such a course is not scientifically sound, because scientific analysis must penetrate beneath the forms of the ‘natural’ entities, and it is not practically useful because ecology must be applied to conditions brought about by human activity,” he wrote.

This might seem to be pushing at an open door today, but Tansley deployed a discourse quite different from that of his contemporaries, the U.S. ecologists. Theirs were just-so stories about “climax communities” evolving on their own—if and only if devoid of human beings mucking things up. Two commentators on Tansley’s work (Laura Cameron and John Forrester, 2017) argue that his “principal contributions were, in contradistinction to American ecology, to emphasize the systemic interrelations of human activity and botanical phenomena—he sees no real difference between those ecosystems which are natural and those which are ‘anthropogenic’ (nature ‘produced by man’, as he glossed in 1923).” “A well-defined localized human community is the kernel of an ecosystem,” Tansley reiterated in an address to the British Ecological Society in 1939.

–But Tansley is important to us for another reason. Not only was he a founder of the British Ecological Society (the precursor to ecological societies in many countries) and the Nature Conservancy, he was also a well-known and respected member of the British Psycho-Analytic Society, having been analyzed by Freud for nine months in 1922 and 1924. For Tansley, humans and their desires (“energy”) were and are never far away from ecosystems in the profoundest way.

Whatever the reader thinks of Tansley’s dated terminology, many ecologists today still take human desires to be nothing less than The Enemy. Such, I’d like to think, would have appalled a Tansley who took desire and ecosystem to be inseparable. He’d be the last person, I suspect, surprised or shocked that large critical infrastructures, created to satisfy desires and wants, have environmental impacts, bad and good.

–Where are the new environmental narratives in this? A major one, I believe, is sketched below.

The wider stream of thinking in ecology—”a well-defined localized human community is the kernel of an ecosystem”—suggests that human-dominated landscapes may well provide the only experience most living humans have of anything like the “presettlement template.” The antinomy (settlement v presettlement) is no antinomy in these cases: The human world provides some of the most enduring examples of “repristinated nature” we have ever had.

How so? The poet and essayist Jane Hirshfield writes:

Hiddenness. . .is a sheltering enclosure – though one we stand sometimes outside of, at other times within. One of its homes is the Ryoan-ji rock garden in Kyoto: wherever in it a person stands, one of the fifteen rocks cannot be seen. The garden reminds us that something unknowable is always present in life, just beyond what can be perceived or comprehended – yet as real as any other rock amid the raked gravel.

What is being described is the inability of the observer to hold a stable focus on what is seen and unseen. For Hirshfield it is a rock garden in a city. For his part, the art critic John Berger writes of another landscape: “The scale is. . .of a kind which offers no possibility of any focal centre. This means that it does not lend itself to being looked at. That is, there is no place to look at from. It surrounds you but never faces you.”

Imagine here herders moving onto an empty, horizon-less plain; or night watchers looking up into the open, depthless annihilation beyond. But look they do and that look is there as much as what is seen without scale or end. “It should be like a river gorge with swans flying overhead; the river has no desire to retain the swan, yet the swan’s passage is traced out by its shadow without any omission,” or so Joseph Needham translates the Song scholar, Lin Jing-xi.

That very same experience–the emptied focus and scale when looking at the not-all-there—can be felt in different contexts of the human world. To assume or act otherwise is, I believe, to deny the narrative that something like “nature’s sublime” can be and is experienced in everyday life, and has always been.

5. Which leads to the fifth group to provide new environmental narratives—you, the reader. You get to cast and recast what to make of the exigencies and contingencies that befall you—and in radical ways when it comes to "environment."

Start with what many would consider unexceptional, a point made by Adam Phillips, the psychoanalyst and essayist: “Given the obvious contingency of much of our lives—we do not in any meaningful sense intend or choose our birth, our parents, our bodies, our language, our culture, our thoughts, our dreams…and so on—it might be worth considering not only our relationships to ourselves and our relationships to objects, but (as the third of the pair, so to speak) our relationship to accidents”.

Fair enough, were it not for the philosopher Agnes Heller concluding exactly the opposite because of the same contingencies:

In choosing themselves, men and women choose exactly what they are, as they are. They choose their best talents as much as their physical handicaps, they choose their parents, their childhood, their country, their historical age. They choose their poverty if they happen to be born poor, and their riches if they happen to be born rich. They choose their accidental features. That which they are by accident they become by choice.

Putting it that radically, Heller stirs us to ask in what sense her point is also true. But in positioning yourself somewhere between Phillips and Heller, you too become expert in recasting contingencies, exigencies and their environments. What recast narratives are you handing down, now (the ancient Greeks call this paradidomena)? Or more bluntly: What’s next in these End Times is up to you.

Principal sources. This entry consolidates, edits and updates earlier blog entries: “Next-ism,” “To-do’s in the Anthropocene,” “Radical uncertainty and new environmental narratives,” “Eco-labelling recasted,” and “Nature.”

Yes, “risk and uncertainty” are socially constructed and historicized. Now what? The missing corollary and 3 examples

–We know the subjective is never far away from risk and uncertainty. Risk and uncertainty can also be historicized as formations of their time and place when you take the longer view. (Not only is your risk not mine; 19th century uncertainty looks very different from 21st century versions.)

That said, acknowledging the historical, social, cultural, economic…basis of our knowledge about risk and uncertainty has rarely gone far enough when it comes to policy and management.

–For, there is the corollary: Humans can only know—really know—that which they create. (Such is the insight of St. Augustine for philosophy, Giambattista Vico for history, Roy Bhaskar for science….) Humans know mathematics in a way they cannot know the universe, because the former is a thoroughly human creation about which more and more can be known. Its uncertainties are socially constructed in a way that, for lack of a better word, “unknowledge” about the universe is not.

This corollary means that to accept that “Risk and uncertainty are socially constructed concepts easily historicized” needs to be pushed further.

What is missing are the details of the connections among risk, uncertainty and associated terms that we make and the meanings we draw out for these connections, often under conditions of surprise.

Our creations are always surprising us and we seek to explain these occurrences by means of analogies that extend the range of what we call knowledge.

In case it needs saying, terms like “system,” “failure scenario,” “with respect to” and more discussed in the other blog entries on risk and uncertainty are also rooted in time and place. But to stop there, again, stops short of the wider point: That which we have created by way of risk and uncertainty—and continue to create—has become very complex. In fact: so complex as to continually provoke more thinking and more complexity-as-knowledge.

–Here by way of example are three such complexities about risk and uncertainty in policy and management that arise solely because of the connections made and meanings given by way of analogies:

• The focus on present risk and uncertainty in critical infrastructure—at the component or system levels—in an odd way volatilizes the infrastructure’s longer-term. It’s as if the preoccupation with current risk and uncertainty hammers the longer-term into current notions of risk and uncertainty as well.

This tenderizing of today’s meat we call “the longer-term” is not altogether unreasonable, of course: Longer terms are described as full of uncertainty and risk. But that is not the only way we frame “longer-terms.” Think Ben Franklin: “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do.” Or: Intelligent people find it no problem to use their minds to subvert reason. In either view (two among many), it’s our Bayesian brains, not their “uncertainties,” that are the problem.

• Second, the methodological computation of “risk numbers” or “risk scores,” say, by asset category or for different components transforms infrastructure risk and uncertainty into a singular event—if you will, a single display of what is an ensemble of the heterogeneous and contingent (qualitative and quantitative information, by these and not those subject matter experts…), here but not elsewhere, and at this point in time and not another.

Put this way, risk rankings look like performance pieces in the arts. Over the course of the day, the chief risk manager serves as the curator of installation artifacts called risk scores for this or that part of the critical infrastructure—for facilities here, for pipelines there, for compressors all over the place. Each ranking and each score is a one-off, akin to what Surrealists call frottage, producing impressions on a piece of paper by rubbing with a pencil or crayon—think: rubbing with a methodology—over an always uneven surface.

(Or if you don’t like the art analogy, think of risk scores akin to novel financial instruments, such as customized CDOs, each a one-off, so complex and heterogeneous they can’t be compared except in highly nominalized terms like price, i.e., only in terms of a “score”.)

• Third, each custom-made risk score or a ranking of scores ends up as an odd kind of abstraction. Risk is abstracted from real-time operations into risks associated with specific assets, components and processes that by definition do NOT add up to the infrastructure as a system operated and managed in real time. Once outside the precincts of real time and the system as system, the temporal and spatial are abstractly foreshortened and elongated—the system is spatially segmented into components and extended elastically as the focus of attention—for almost everything except that system as a system.

Since attention is a scarce resource, prolonged attention given to abstracted risks at the asset, component and process level diverts resources away from systemwide reliability and the distinct risks faced in real-time management there. This matters because that real-time management for systemwide reliability in its centralized control room is about the only place in the infrastructure we observed that didn’t lose sight of the infrastructure operations as an articulated whole.
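The compression the three examples describe can be sketched minimally. In the hypothetical below (all asset names, likelihoods, consequences and expert weights are invented for illustration), a typical likelihood-times-consequence methodology collapses a heterogeneous ensemble of judgments into single numbers:

```python
# Hypothetical sketch: a likelihood-x-consequence risk score collapses a
# heterogeneous ensemble (estimates, ratings, expert weightings) into
# one number. All names, weights and inputs below are invented.

def risk_score(likelihood: float, consequence: float,
               expert_weight: float = 1.0) -> float:
    """One number out of many judgments: likelihood times consequence,
    scaled by how much weight this particular expert panel is given."""
    return likelihood * consequence * expert_weight

# The same asset scored by two different panels yields two one-offs:
pipeline_panel_a = risk_score(likelihood=0.3, consequence=8.0)
pipeline_panel_b = risk_score(likelihood=0.4, consequence=6.0,
                              expert_weight=0.9)

# Summing component scores yields a number, but not the risk of the
# system as a system: real-time interactions among components are
# outside what the scoring method represents.
component_scores = {"pipeline": pipeline_panel_a,
                    "compressor": risk_score(0.2, 9.0),
                    "facility": risk_score(0.5, 4.0)}
naive_system_score = sum(component_scores.values())
```

The point of the sketch is the last line: naive_system_score is an artifact of the method, here and not elsewhere, at this point in time and not another, rather than a property of the infrastructure operated and managed as an articulated whole.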

A few things I learned from the Financial Times about regulation

Post-crisis banking re-regulation adds significant costs to the economy and thus reduces growth, while the pre-crisis light-touch regulation undermined the very financial infrastructure necessary for economic growth.

What were indicators of positive economic growth under light-touch regulation—rapid uptake in home mortgages—were indicators of regulatory failure later on. Indeed, mortgages were a relatively safe asset for banks to own, until they were the source of unimaginable losses.

Overregulation is nowhere better illustrated than in comparing the nearly 2,000 pages of Dodd-Frank legislation to the fewer than 20 pages of the Depression-era Glass-Steagall Act—but under no circumstances are our regulators to repeat the 1930s! Whatever the case, those who lobby for simplifying regulation end up making it more complex.

It’s a bad thing for regulation to try to squeeze too much risk and complexity out of banking, especially when fresh risk reduction—less leverage, more capital reserves—is itself too risky a strategy. Regulation discourages risk taking and only with risk taking do we have innovation, except when too much innovation and risk taking are encouraged as in the deregulated finance sector up to the 2008 crisis.

New financial instruments, particularly derivatives, flowed to where they were not regulated, but regulated financial instruments always increase opportunities for perverse arbitrage and loopholes. If the last financial crisis showed anything it’s that we need systemic risk regulation and macro-prudential policies, but, to be sure, it’s individuals, like Alan Greenspan, who were to blame for things going terribly wrong.

Regulators must always have the best information, even when those regulated—the banks and investment firms—haven’t a clue as to their current real-time positions. Regulators will never have the capacity to know the real-time positions of actual firms, except in those cases where firms, like Lehman, insisted regulators did have the real-time information.

Global business and supply chains are great, except when the firms are too big to fail. Country defaults are horrible, except where they work through being regulated de jure as in Argentina or de facto as in Mexico.

Global markets are a permanent fact of life, but we must never suppose that the drive to regulate them for the better is just as permanent. Markets are best at price discovery, except where market efficiencies are realized because of lack of transparent discovery, as in unregulated dark pools.

In sum, what I’ve learned from the Financial Times is that capitalism is in crisis because of our shambolic understanding of regulation.

More on over-complexification

Two drivers of not-knowing—information overload and cognitive undercomprehension—are often conflated, and the conflation increases the sense of ever more complexity in policy and management.

–Think of information overload as the “right” information lost in the glut of information before us. Cognitive undercomprehension, in contrast, is our cognitive inability to recognize anything like “the right information.”

Overload means we would be high-performing analysts and managers if only we were to tease out the right information from all the noise obscuring it; undercomprehension means we are held to such high-performing standards we couldn’t possibly know the right information, even if it were visible before our very eyes. “I could do my job if only I had the right information” is not “No one could do the job I’m tasked with, whatever the information available.”

Over-complexification comes into play when remedies for one produce the other or complicate both.

–For example, making sense of the masses of Big Data requires algorithms no human beings on their own can comprehend. To that degree, what was information overload ceases to be that by triggering cognitive undercomprehension.

On the other hand, reducing high information overload can be associated with increasing cognitive comprehensibility, but here the costs of doing so may be too high. You reduce the complexity of your conceptual model in order to make it comprehensible, but in the process you’ve increased the chances your model is biased, i.e., that it differs from the correct one. “There are no 99 per cent probabilities in the real world. Very high and very low probabilities are artifices of models, and the probability that any model perfectly describes the world is much less than one,” underscores UK economist John Kay.

–There is also the problem of society demanding both information overload and cognitive undercomprehension at the same time, however unintentionally. A common enough observation is that when the task is to surveil, as it is for our regulators of record, they will always want more information, no matter how much they already have. In this way, the regulators suffer the double-whammy of information overload and cognitive undercomprehension: They have more information for use but not the cognitive capacity and skill to make use of it.

–Many upshots follow on information overload and cognitive undercomprehension occurring together. Two obvious ones deserve highlighting here.

First, at or beyond the limits of cognition, not only are prediction and forecasting difficult, so too is identifying the counterfactual conditions, not least of which is what would happen if overload and undercomprehension were assuaged. We become very much like amateurs in all this.

Second, arguments presented to us as policy relevant solely because of their diamond-sharp clarity rarely get beyond the joke stage. The usual criticism—policy and politics have become an endless stream of stark media images competing for scarce attention—is true, but even so: The joke is that the stream of photo-clarity means more and more murk, both as information overload and cognitive undercomprehension.

The third implication follows on from the preceding two: Don’t give in to the temptation to simplify, even here as amateurs for a topic this complex.

When an experienced county emergency manager told a group of us, “Floods are complex events, they have many variables,” it wasn’t helpful to tell him, as some did, he’d be much better off first simplifying those events for the purpose of modeling and simulation. To assume he needed to understand the flooding better ignored that he was already managing the complexity there.

A much more effective starting point, it seemed to me then as now, would be to identify professionals who are themselves already managers of complex risks and uncertainties, such as this county emergency manager, and then ask how we can help them, if at all.

Related blog entries on over-complexifying: “Even if what you say is true as far as it goes, it doesn’t go far enough…,” and “Public Policy Analysis, c.1970 – c.2020: In Memoriam?”

When policy is magical thinking


Long-lived debates in the policy and management fields with which I am familiar have been fought at the extremes: Market versus Hierarchy; Hierarchy versus Coordination; Coordination versus Regulation; Regulation versus Innovation; Innovation versus Politics; Politics versus, well, every other abstraction from Science and Technology.

We are still told that, when it comes to high reliability of society’s critical infrastructures, macro-design should trump micro-behavior (i.e., operator error); alternatively, micro-behavior must drive macro-design (i.e., self-organizing, complexly adaptive systems). If only we designed efficient energy markets, the grid would basically take care of itself; if only we had real-time metering in every household and business, the grid would basically take care of itself; if only we distributed multi-agent software to self-heal the grid, the grid would basically take care of itself.

If only we had full cost pricing, or sufficient political will, or had publics that could handle Arrow’s voting paradox, then everything would be okay. It’s alright to leap from macro to micro and back again, and that’s a promise! If only we took short-cuts to reliability and got rid of all that mess in between, we’d be better off.

Which way Africa: Kenyatta or Nyerere? Which way Latin America: Structural Adjustment or Basic Human Needs? Which way the world: Globalization or [fill in today’s blank]? We might as well talk about who is more likely to be in a Christian heaven, Plato with his soul or Socrates for his self-sacrifice.


Policy as practiced has much to do with the conjuror’s misdirection. A policy directs your attention to one area while the real action happens elsewhere. You focus on the hand of the policymaker when the other hand of middle-level managers and professionals ensures rabbits and hats go together. Sadly, to focus on the spectacle and not the handiwork is the real “policy blunder.”

As a newly-minted policy analyst, I was told we had first to nail down the politics. Without correct political arrangements, how can we have a society? I remember vividly times when I was assured that if you change political institutions, human behavior changes accordingly.

Further along we were told, actually, it’s all about economics. Here too I remember being assured “After all, you can’t repeal the business cycle!” and assured that with the correct macroeconomic and microeconomic arrangements in place, politics change, and for the better.

We policy analysts then came to know that, actually, it’s about getting the science down. Dummy, it’s politics and economics that have gotten us into this mess and will keep us there, unless we start taking science seriously!

And yet. . .the very same misdirection continues.

Farms continue to get their subsidies—be it because agriculture is politically important, food is economically important, carbon sequestration is important, global politics is reallyReally more important without which there will be no earth, no climate, no food, no agriculture, no subsidies worth speaking of. We could as well believe philosopher Kant’s early musings about how the collapse of the universe—yes, the entire universe—can be brought about by “even the slightest disarrangement” here.


–Many of the people I work with believe that, when it comes to saving the earth, the real problems reduce to politics, dollars and jerks. More, the reductionists believe that the only way this is going to change—“reallyReally change”—is via a disaster that wakes up everyone to the magnitude of the crisis. It takes a disaster to get people to prevent disaster.

–The magic here becomes obvious when you press: “What kind of disaster are you talking about?” It turns out they want the Goldilocks scenario—not too weak an earthquake or no one will do anything, not too strong a disruption or nothing can be done, but an earthquake (storm, fire, epidemic, whatever) just big enough to shake everyone up—but not to kill too many of them!—so that the “they-them” do the right thing, be it through better science, politics, economics, government, society, culture, people, ethics…


Would you believe that installing the wheel closer to the engine gets you to your destination sooner? And yet…

How many times have we heard or said something like, “If implemented as planned…,” “If done right…,” or “Given market-clearing prices…,” only to demonstrate we ourselves are the delusional ones? Each asks us to do a dry-run on a spell.

‘‘If implemented as planned,’’ when we know that is the assumption we cannot make. ‘‘If done right,’’ when we know that “technically right” is unethical without specifying just what the ethics are, case by case. ‘‘Given market-clearing prices,’’ when we know not only that markets in the real world often do not clear (supply and demand do not equate at a single price)—and even when they do, their ‘‘efficiencies’’ can undermine the very markets that produce those prices.

Admit it: All these “givens” end up little more than the magical thinking of a primitive people. We could as well believe that the surest way to heat the house in winter is by striking a match under the porch thermometer.

Why aren’t they all running away!

for Robert Chambers

For reasons that will be obvious, no names or references are given in what follows. The numbers, however, remain roughly as first identified.

–Researchers estimated the annual probability of a major stretch of island levees failing ranged somewhere between 4% and 24% due to slope failure. (Slope instability in this scenario would be caused by flooding behind the levee as well as high water levels on its water side.)

Our estimates were considerably higher than the official one, in large part because the research project relied on better validated methodologies for accommodating uncertainties.

–We presented the findings to the island’s management board. Their first and really only question was whether our estimates would be revealed to island insurers.

–We had a “hotwash” afterwards to figure out their—how to put it?—underwhelming response:

  • Didn’t they understand the upper range, 24% per annum, implied a levee breach was nigh inevitable with respect to our slope instability scenario? Or to put the question to our side, in what ways did the 24% per annum estimate fall far, far short of being a failure probability of 1.0?
  • But if as high as 24% per annum, why hadn’t there been a levee breach over the many decades since the last major one there?
  • What about the other islands nearby? Assuming even only a few of these had a similar upper range, why weren’t levee failures happening more often in the same watershed and variable flooding conditions there?
  • The 4% – 24% range was with respect to annual levee failure due to slope instability only. If you add in all the levee failure modes possible there (e.g., due to seepage rather than overtopping and flooding), the total probability of levee failure would have to be higher. So, even if all levee fragilities were at the lower end of each failure mode’s range, this was scarcely reassuring. (But then again, what are the conditions under which the more ways there are to fail, the more likely failure is?)
  • One answer to why levee failure there hadn’t happened—yet—was there hadn’t been a long enough period to observe levee breaches so as to form the distribution from which the 24% could be established empirically. But these levees, and worse ones on nearby islands, had been in place for decades and decades; some had been improved in fact. The burden of proof, in other words, was on us, the team of levee experts, to explain why this wasn’t “long enough” or what that long-enough would have to be.
  • The absence of actual levee failures at this island specifically could be more complicated than we first thought. The levee stretch in question could be “failing to fail.” It might be that this stretch had not undergone events that loaded it to capacity and worse. (But that again begs the question: How much worse would the conditions have to be in our expert view? Just what is a probability of failing to fail?)
  • Even though our team had used multiple methods to triangulate on levee fragility and even though the other methods enabled us to incorporate uncertainties better, these methods still may have fallen well short of the needful. This suggests spending more resources on reducing the uncertainties imported into risk estimates or otherwise ignored.
  • Or to put the preceding point differently, was this levee stretch on that island more diverse and more resilient (say, in the way biodiverse ecosystems are said to be more resilient) than current methods capture but which islanders there better understand and manage?
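The arithmetic behind the hotwash questions can be sketched, with the loud caveat that everything below assumes independence across years and across failure modes (the very assumption the "failing to fail" point calls into doubt), and that the per-mode numbers are hypothetical:

```python
# Hedged sketch of the hotwash arithmetic. ASSUMES independence
# across years and across failure modes -- the assumption the
# "failing to fail" discussion questions.

def cumulative_failure_prob(annual_p: float, years: int) -> float:
    """P(at least one failure in `years`), given a constant annual
    failure probability and independence year to year."""
    return 1.0 - (1.0 - annual_p) ** years

def combined_mode_prob(mode_probs: list[float]) -> float:
    """P(failure by any mode), given independent failure modes."""
    survive = 1.0
    for p in mode_probs:
        survive *= 1.0 - p
    return 1.0 - survive

# The board's puzzle: at 24% per annum, a breach over 30 years is
# all but certain under these assumptions...
print(round(cumulative_failure_prob(0.24, 30), 4))       # 0.9997

# ...and even low-end estimates for several modes (hypothetical
# values) compound rather than reassure.
print(round(combined_mode_prob([0.04, 0.03, 0.02]), 4))  # 0.0874
```

Which is exactly why the absence of breaches over decades is so awkward for the estimates: either the annual probability is far lower, the years are not independent, or the levees have been "failing to fail."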

–But our most significant observation was the one none of us saw need to voice: How could we accuse the management board and islanders of being short-sighted or worse, when so much else was going on that challenged us, the team, to make sense of such estimates?

In the profoundest sense possible, their not-fleeing the island was our problem, not theirs.

Market contagion, financial crises and a Girardian economics (longer read)

–That people act in an imitative fashion under conditions of high economic uncertainty is not news: Panic selling, spiraling inflation, overheated art markets, and speculative frenzies (I sell when you sell, buy when you buy) are some of the many instances of imitative economic behavior.[1] 

What’s bothered me, though, is the relative lack of reference in the economic literature to René Girard’s theory of mimetic contagion (mimetic desire in his terms). Why? Because Girard’s framework has major implications not identified by economists writing on market contagion and associated crises.

Brief description of a Girardian economics

From a purely Girardian perspective, financial and economic uncertainty begets ever more uncertainty, as more and more people imitate each other in a desperate rush to figure out what to do. At some point, classes of people are arbitrarily identified (scapegoated in Girard’s terms) as the cause of the crisis, widespread violence ensues against them, and new financial and economic institutions emerge from the hostile, violent conditions.

Most economic contagion models do not go that far in predicting violence (to be clear, predicting does not mean advocating).[2]

Contagion modelers typically argue that the way to break the cycle of imitation is the availability of and accessibility to more accurate information. Thorough-going Girardians will have none of that. They insist the underlying and overwhelming problem is pervasive uncertainty, for which there is no recourse to “certainty” as a solution. Appeals to “market fundamentals” or “getting back to normal” stabilize only temporarily, and that at best. Such appeals do not and cannot resolve the baseline uncertainty that corrodes each and every stabilization effort.

In a Girardian economics, the more uncertain things are, the more wealth we desire to buffer against that uncertainty; but the more wealth we have, the more desire we have for even more wealth. The specification of wealth itself becomes increasingly problematic as uncertainty persists. Once wealth “ceases to be identified with the instituted money, [economic agents] no longer know behind which mask it is hiding. Stocks, real estate, gold, foreign currencies, primary commodities, etc. attract the anxious attention of individuals looking for likely refuges from the ‘terrible oscillations of chance’,” the economist André Orléan has written.

This leads to what Girard calls a crisis of undifferentiation. Uncertainty becomes everywhere intensified; economic behavior grows more and more uniform; and ever more wealth becomes desired as “what is wealth?” becomes increasingly difficult to answer. Markets undergoing crises of undifferentiation—epidemics of contagion where everyone ends up imitating each other—are instances where we do not know enough to distinguish, in econo-speak, satisficing from maximizing or the second-best from the Pareto-optimal, and where no one is clearly right but where everyone hopes they are.

Girardian features of the 2008 financial crisis

This resulting sense of free-fall and groundlessness is neatly captured in the comments of bankers and investors just before and during the 2008 collapse of Lehman Brothers. “It feels as if we are 15 minutes away from the end of the world,” the head of equities at a large U.K. bank told the Financial Times about the lead up to the first major U.S. bailout.

“The market has changed more in the past 10 days than it had in the previous 70 years,” reported a senior executive at a European investment bank in 2008. “We have no idea of the details of our derivative exposures and neither do you,” conceded a senior Lehman Brothers official at a meeting of bankers and regulators just before it collapsed. “The crisis continues because nobody knows what anything is worth,” said one informed observer. The chair of Morgan Stanley Asia concluded, “We have gone to the edge of an abyss that few thought was ever possible.” I can find no reports of financial experts appealing to “underlying” market fundamentals during these weeks.

According to Girardians, people under these conditions—these crises of undifferentiation—respond by scapegoating. Scapegoating provides the certainty to move on. Reports from and about the last quarter of 2008, with the collapse of Lehman Brothers, the bailout of Freddie Mac and Fannie Mae, and the further bailouts of AIG and Citigroup, were replete with terms such as “panic,” “herd instinct,” “mob mentality,” “mob rule,” “witch hunting,” “finger-pointing,” “lynching,” and “show trials” along with the ubiquitous referencing of “scapegoats” and “scapegoating” (all terms from contemporaneous reports in the Financial Times).

Many commentators, of course, believed they were in fact correct in their blaming this one or that one for the crisis. Girardians argue, in contrast, that the choice of scapegoat is completely arbitrary, where pervasive uncertainty drives economic behavior. We saw some of this arbitrariness in the belief that if no one is to blame, then every one shares the blame. At that time, we were, for instance, told with respect to the financial crisis that “there is enough blame to go around for every one” and “we are all to blame for the meltdown.”

In heated financial markets where everyone is buying or selling at time t+1 because, well, most everyone was buying or selling at time t, there is no way to validate that selling save by stating it is what everyone else was and is doing. This point was famously made by Chuck Prince, former head of Citigroup, when he told the Financial Times in mid-2007, “When the music stops, in terms of liquidity, things will be complicated. But as long as the music is playing, you’ve got to get up and dance. We’re still dancing.”

But where’s the blood?

Its focus on ensuing violence, however, is what sets a thorough-going Girardian economics apart from other contagion models. A full-blown Girardian economics, at least as I understand it, would hold that imitative behavior goes beyond scapegoating. It turns into mob behavior, not as a reporter’s hyperbole but in actuality. People scapegoated are killed, and it is only after widespread violence that people respond in revulsion to their behavior by forging new social and economic conventions so that such violence “never happens again.” In this view, new economic and financial institutions arise only after panic and mob-like behavior and the post-hoc rationalizations for what all the preceding “really” meant.

We certainly heard calls from politicians and regulators alike for “never again,” when it came to the 2008 financial crisis. So too, a manager or two committed suicide or disappeared from the scene. But it is an odd sort of crisis when those harmed on such an unprecedented scale did not take screaming to the streets. In 2008, we witnessed food riots over crop prices but no real violence over this massive wealth destruction. Which prompts the question: As René Girard asked me when I presented my version of a Girardian economics at his Stanford seminar, “Where’s the blood?”

I suppose some of it is there if we look for it. As is already well documented, murders, suicides and violence do go up during a severe economic downturn like the one to which this financial crisis led. This, however, scarcely qualifies in Girardian economics as the mob behavior essential for the rise of new social conventions and institutions governing finance and economics.

So what happened?

In July 2009, former Treasury Secretary Henry Paulson testified before Congress on his involvement in the financial crisis. He admitted he had been deeply concerned about frightening the public if he expressed his real fears about the financial system unraveling: “[W]hen a financial system breaks down, the kinds of numbers that we were looking at in terms of unemployment was [sic] much greater than the numbers we’re looking at now. People in the streets, and of course, around the world—it was very significant and I remember talking about it…”.

But people did not take to the streets. Why?

Girardians, as I understand them, would resist two popular “answers”: (1) government interventions worked, and/or markets went back to fundamentals; and (2) it is too early to say how things are working out. Girardians would have expected considerable violence during and after the events of September/October 2008, and there is no chance, in their view, that such reforms to the financial system as there were would ever make things more “certain” in the absence of scapegoating and ensuing violence.

My answer

There are at least four ways in which a crisis of economic undifferentiation could be delayed, albeit not averted, when it comes to market contagion. More, if I understand Girardians, these four ways are the value added to the contagion models of the financial crisis already proposed by mainstream economists:

If you can’t reduce pervasive uncertainty, the next best alternative is to impede the resulting rivalry (“increase the costs of rivalry”);

If you can’t reduce the rivalry, the next best alternative is to impede the associated imitative behavior (“increase the costs of imitation”);

If you can’t reduce the rivalry or imitation, the next best alternative is to foster and prolong differentiation (“decrease the costs of differentiation”); and

Lastly, if you cannot do any of the above, the alternative is to slow down or wait out the crisis of undifferentiation (“increase the costs of undifferentiation”).

These actions are, I believe, what has been happening by way of the financial and economic reforms undertaken since 2008. Their effect has been to delay the consequences of the financial crisis by sidelining the scapegoating. Let’s examine each in more detail:

Increase the costs of rivalry.

In the Girardian framework, markets are mechanisms to increase the transaction costs associated with rivalry, not decrease them, as conventional economic theory would have it.

Markets are what keep us from killing each other for the goods and services we desire. They transform us into price takers rather than commodity thieves. What happened in the lead up to the 2008 financial crisis was just such increased thievery (e.g., insider trading and predatory lending). Many existing and proposed reforms—most notably, increasing capital adequacy reserves in banks and lending institutions—have been intended to make the excesses of rivalry too costly to undertake.

But increasing the costs of rivalry poses a dilemma from a Girardian perspective. Raising those costs may lessen the rivalry, but the higher costs serve as an incentive to increase the wealth needed to cover (buffer against) the now-higher costs associated with rivalry. For Girardians, it is no surprise that firms such as Goldman Sachs and JPMorgan Chase were driven to return to wealth-making faster than would have been expected given the economic conditions and liquidity shortages said to exist at the time.

Increase the costs of imitation.

The principal feature of the lead-up to the 2008 financial crisis was that costs of imitation were too low. Behavior, as many pointed out, became positively correlated, when finance theory insisted such behavior should have been uncorrelated through risk dispersion.

Instead of diversification and risk spreading, hedge funds and others ended up acting in very similar ways. Either “[t]oo many funds bought the same assets” or the “problem was that, while these assets are heterogeneous, the owners were not. In tough times they behaved the same way….Diversification was therefore fake”. “Far from promoting ‘dispersion’ or ‘diversification’ [financial] innovation has ended up producing concentrations of risk, plagued with deadly correlations,” according to a Financial Times correspondent at the time.
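The "fake diversification" point can be illustrated with a standard textbook identity (nothing here is any particular fund's model, and the numbers are purely illustrative): for n equally weighted assets with a common volatility and a common pairwise correlation, the diversification benefit evaporates as the correlation approaches one.

```python
import math

# Hedged sketch: for n equally weighted assets, each with volatility
# sigma and a common pairwise correlation rho, portfolio volatility is
#   sigma * sqrt(1/n + ((n - 1)/n) * rho)
# -- a textbook identity, not anyone's trading model.
def portfolio_vol(n: int, sigma: float, rho: float) -> float:
    return sigma * math.sqrt(1.0 / n + ((n - 1) / n) * rho)

sigma = 0.20  # illustrative per-asset volatility

# Uncorrelated owners: 100 assets cut volatility tenfold.
print(round(portfolio_vol(100, sigma, 0.0), 4))  # 0.02

# Owners who "in tough times behaved the same way": almost no
# diversification benefit remains, however many assets are held.
print(round(portfolio_vol(100, sigma, 0.9), 4))  # 0.1898
```

In other words, heterogeneous assets with homogeneous owners behave, in the aggregate, like one big asset.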

Calls for “increased transparency” are routinely given as the solution to this problem. Risk cannot be concealed or obscured if financial processes are transparent, so this argument runs. From a Girardian perspective, such calls are self-defeating. Greater transparency would reveal the financial system is transparently complex and in many ways visibly beyond human comprehension when it comes to measurable risk and unmeasurable uncertainty. At worst, everyone sees the system for what it is, a house of cards impossible to shrink through “better risk management” or shrink-wrap with “better macro-prudential regulation.”

Either way, calls for greater transparency would lead people to becoming even more rivalrous as they hunt for ever greater wealth to protect or buffer themselves.

Decrease the costs of differentiation.

Now things get really interesting. You saw everywhere in the 2008 financial crisis the insistence of major participants that each differed from the others and that they were not—repeat, not—all alike.

Hedge funds insisted they did not start the financial crisis but that banks and investment houses did; the latter institutions insisted they were not all the same, some were better (or accused of being worse) at managing securitized assets; not all securitized assets were the same—that is, all toxic; more, not all toxic assets were equally valueless; still others argued that it depended on the valuation procedure used and few agreed which was the better one; there was, moreover, no over-arching agreement, because the regulators themselves did not agree…; and so on.

Against a Girardian background, this sustained insistence on differentiation, even as finance and banking were in the midst of uncertainty, is especially important to note. Circumstances remained, at least in the minds of the finance sector, differentiated in major forms before and through the crisis. Very different social conventions emerged with respect to financial transactions, and the conventions evolved and innovated at that time as they diffused through institutions and among their participants. While accusations of “You’re all the same!” reached near fever-pitch, banking and finance services were still far from being homogenous and uniform, even during the crisis and the Great Recession that followed.

In other words, the blame game remained cheap throughout the 2008 crisis: The costs of differentiation were lower than one would have expected in a full-blown crisis of undifferentiation. I return to this point in a moment.

Increase the costs of undifferentiation.

A last strategy is to wait out the financial collapse in the hope that the longer people hold out before the crisis of undifferentiation becomes total, the more likely undifferentiation will not be total nor the contagion completed in full-blown scapegoating. One way to make undifferentiation “cost more” is to fuel the rumor mill about the who, why, how, when and where of the financial crisis, since it takes time to settle a rumor. (Small beer, but beer nonetheless.)

Since 2008, we have had an incoming tide of books and publications that keep all manner of whodunit suspicions and fevers alive. Rather than narrowing down the identification of those who are “really” to blame, we have a surfeit of candidates said to have caused or contributed. In fact, so many that some take the 2008 financial crisis to have been overdetermined. Instead of knowing who is to blame, we are encouraged to conclude, “With all that was going on, it would have been a miracle if the financial crisis didn’t happen!” Error here has many fathers when reliability is orphaned.

–In short, scapegoating has become difficult to complete during and since 2008, thereby defusing the prospect of violence and the rise of new financial institutions . . . so far, Girardians underline. Yes, scapegoating has begun, some violence has been witnessed, but there has yet to be polarization around one scapegoat or a defined set of them. Or from the other direction, what polarized agreement has occurred has been more around phenomena—notably, rising inequality—than around specific groups or classes of agents.

Yet even if the financial crisis were not the one predicted by a purely Girardian economics—how could it be a crisis of undifferentiation and scapegoating without the violence?—it is remarkable how well the four types of interventions just described fit the course of events as we know them today.

Which raises a last caveat. The 2008 financial crisis is a handy case study, not only because it’s part of living memory, but also because the literature that followed helps track the aftermath in terms of the four strategies discussed.

A more pertinent, policy-relevant case study would be the COVID-19 pandemic, one that tracks not only the spread of the epidemiological contagion, but the associated spread of mimetic, imitative response over space and time. It’s clear that wearing masks and social distancing have become core differentiation strategies at least up to the time of writing (August 2020), but we’re in medias res here too.


[1] Conlisk (1980) wrote about the widespread importance of imitation in economic behavior. Topol (1991) focused explicitly on mimetic contagion in investment behavior. Scharfstein and Stein (1990) and Banerjee (1992, 1993) modeled herd behavior among investors. The critical-mass (“tipping”) models of Schelling (1978) and Akerlof (1984), as well as the “informational cascades” model of fads and cultural change developed by Bikhchandani, Hirshleifer and Welch (1992), captured the notion that, under uncertainty, economic agents end up copying each other’s behavior. Most famously, Nobel Laureate in Economic Sciences, Robert Shiller (e.g., Shiller and Pound 1989; Shiller 1989, 2006) writes about and focuses on contagion models in investment and the strategic role of imitation among investors. He argues, for example, that the subprime mortgage crisis and the 2008 financial crisis that followed had a great deal to do with “the contagion of market psychology” that led to bubbles under the boom conditions of the turn of the century (Shiller 2008). More recently, Shiller (2019) has focused on the role of narratives in the spread of and response to market contagion and crises.

[2] Not all economists who rely on the Girardian framework focus on violence as the instigator of new economic arrangements. Scholars such as Jean-Pierre Dupuy, Mark Anspach, Paul Dumouchel, and André Orléan, among others, have applied aspects of Girard’s contagion model to economics and related topics. In my view, the most notable application is that of economist, André Orléan, in his The Empire of Value: A New Foundation for Economics (translated by M.B. DeBevoise, 2014, The MIT Press: Cambridge, MA.). Violence is not a key feature of his analysis of money and the 2008 financial crisis in that book. (See also Orléan 1988, 1989, 1992a,b, 1998.)


Akerlof, G., 1984. A theory of social custom, of which unemployment may be one consequence. In An Economic Theorist’s Book of Tales. Cambridge University Press, Cambridge.

Banerjee, A., 1992. A simple model of herd behavior. Quarterly Journal of Economics 107, 797-817.

————, 1993. The economics of rumours. Review of Economic Studies 60, 309-327.

Bikhchandani, S., D. Hirshleifer and I. Welch, 1992. A theory of fads, fashion, custom, and cultural change as informational cascades. Journal of Political Economy 100, 992-1026.

Conlisk, J., 1980. Costly optimizers versus cheap imitators. Journal of Economic Behavior and Organization 1, 275-293.

Orléan, A., 1988. Money and mimetic speculation. In P. Dumouchel, editor. Violence and Truth. Stanford University Press. Stanford, CA.

————, 1989. Mimetic contagion and speculative bubbles. Theory and Decision 27, 63-92.

————, 1992a. The origin of money. In F. Varela and J-P Dupuy, eds. Understanding Origins. Kluwer Academic Publishers. Netherlands.

———— (co-authored with Robert Boyer), 1992b. How do conventions evolve? Journal of Evolutionary Economics 2, 165-177.

————, 1998. Informational influences and the ambivalence of imitation. In: J. Lesourne and A. Orléan (Eds.) Advances in Self-Organization and Evolutionary Economics. Economica: London.

Roe, E., 1996. Sustainable development and Girardian Economics. Ecological Economics 16, 87-93. The article is the principal source for this blog entry, though material from the original has been updated substantially.

Scharfstein, D. and J. Stein, 1990. Herd behavior and investment. The American Economic Review 80, 465-479.

Schelling, T., 1978. Thermostats, lemons, and other families of models. In Micromotives and Macrobehavior. W.W. Norton and Company, NY.

Shiller, R., 1989. Stock prices and social dynamics: Fashions, fads, and bubbles in financial markets. In Market Volatility. The MIT Press, Cambridge, MA.

————, 2006. Irrational Exuberance. 2nd Edition, Paperback, Broadway Business.

————, 2008. The Subprime Solution: How Today’s Global Financial Crisis Happened, and What to Do about It. Princeton University Press: Princeton, NJ.

————, 2019. Narrative Economics: How Stories Go Viral & Drive Major Economic Events. Princeton University Press: Princeton, NJ.

Shiller, R. and J. Pound, 1989. Survey evidence on diffusion of interest and information among investors. Journal of Economic Behavior and Organization 12, 47-66.

Topol, R., 1991. Bubbles and volatility of stock prices: Effect of mimetic contagion. The Economic Journal 101, 786-800.

Culling sustainability

–Readers advocating sustainability are familiar with a policy narrative that runs roughly as follows:

Sustainable land uses—e.g., hunting and gathering or, later, traditional pastoralist systems of mobile (“nomadic”) herders and livestock—have been more beneficial to the environment than today’s large socio-technical systems, which have exploited and degraded that environment. Dams and hydropower have caused irreversible damage and have long displaced the earlier, more sustainable uses. Pastoralist herding systems, for their part, continue to be edged out of existence by encroachments that are themselves unsustainable.

Making matters worse, those electric grids, water supplies, and transportation systems are preoccupied with real-time operations to the detriment of longer-term sustainability. As these and associated “developments” have spread and circulated throughout the arid and semi-arid lands, the effects on and damage to dryland ecology have been acute and pernicious.

What to do? Minimally, we have to institute and abide by sustainability principles, criteria and indicators. While it is no longer possible to restore much of the landscape to its pre-disturbance state, we must ensure indicators are in place to tell us how fast we are moving away from or back towards sustainability.

–How to assess and evaluate this policy narrative?

The obvious strategy has been to criticize it, point by point. Just what do you mean by “traditional”? Why aren’t overgrazing and overstocking identified as unsustainable features of so-called traditional livestock systems? Are there no cases where large water supplies or electricity or improved transportation have helped rather than harmed the semi-arid and arid lands? More, why ever in 2020 are you focusing primarily on dryland ecology? And anyway, just what do you mean by “sustainability” and where is “context” in all of this?

The questions are easily augmented, with—you’d assume—narrative death following in due course through a thousand such stings. But these narratives don’t drop dead the way some of us hope. We think we’ve amputated their legs, and they still walk the earth.

–I suggest there is at least one more way—and one more useful than criticism on its own—to evaluate that dominant narrative: parsing it through sustainability narratives that already exist.

The aim in this case is to seize on already-identified sustainability narratives that enable you to identify and focus on the weakest links in the dominant narrative. In contrast to a full-blown, point-by-point critique, you want a narrative that more clearly shows not only what is wrong (more than less) with the dominant narrative, but also how to proceed instead.

Let me illustrate what I mean.

–Those who read the sustainability literature have also, I believe, come across such statements as:

. . .So, while sustainability has been shown to be a key existential issue, less acknowledged has been the fact that many sustainability indicators currently mis-specify the system to be sustained. . .

When I read such a statement, I mentally cut it out of its surrounding text. This way I create moments not just to guess what the author is going to say next (that last “. . .”), but how I would fill in a text before and afterwards now that it’s opened to my own recasting.

–On reflection, statements like that just italicized remind me that indicators are always indicators with-respect-to-something. Asking with-respect-to-what? forces me back into the author’s text to search for just what is the specific system or system behavior that deserves monitoring in the author’s view.

This is important, because the sustainability literature I’ve read includes far too many instances where the indicators recommended have no such specifics or priorities. Instead, having an indicator for each thing that might matter has become the mirror reflection of critiquing every point equally.

–So what?

Let me cut to the quick by continuing to fill in the last set of ellipses my own way:

. . .currently mis-specify the system to be sustained. This mis-specification of sustainability indicators occurs along many avenues. Most important, indicators must always have bandwidths when it comes to high reliability performance at the system level.

By bandwidths, the reliability literature means upper and lower ranges of, or limits on, actual group behavior, the breaching of which triggers adjustment responses among the group. In this way, normal operations at the system level are not static but fluctuate within tolerance levels.

Or, if you prefer, resilience without bandwidths isn’t resilience, and that resilience—fluctuations within bandwidths and adjustments back when breaching bandwidths—is the starting point for working out sustainability under mandates of high reliability. Most important, “adaptive capacity” or “flexibility,” to the extent they are unbounded or left unconstrained, do not capture this key bandwidth feature of resilience. . .
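To make the bandwidth notion concrete, here is a minimal sketch in Python. The indicator, tolerance limits, and readings are all illustrative assumptions of mine, not drawn from the reliability literature; the point is only the mechanism: fluctuation within limits counts as normal operations, and breaching them triggers an adjustment response.

```python
# A minimal sketch of the "bandwidth" idea: an indicator is monitored
# with respect to a specific system variable, and breaching the upper
# or lower limit triggers an adjustment response. All names and numbers
# below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Bandwidth:
    """Upper and lower limits on acceptable group behavior."""
    lower: float
    upper: float

    def breached(self, value: float) -> bool:
        # Normal operations fluctuate within tolerance; outside is a breach.
        return not (self.lower <= value <= self.upper)

def adjustment_events(readings, band):
    """Index/value pairs where the bandwidth was breached and an
    adjustment response back within limits would be triggered."""
    return [(i, v) for i, v in enumerate(readings) if band.breached(v)]

# E.g., a (hypothetical) herd-size indicator with tolerance 80-120:
band = Bandwidth(lower=80.0, upper=120.0)
readings = [95, 110, 125, 100, 70, 90]
events = adjustment_events(readings, band)
print(events)  # breaches at the third and fifth readings
```

Note that the sketch forces the with-respect-to-what question: you cannot instantiate `Bandwidth` without first specifying which system variable is being bounded, and by how much.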

In contrast, note how a good number of sustainability advocates would conclude that earlier phrase, “currently mis-specify the system to be sustained” with some variant of “. . .which today is global if not planetary,” thereby begging the question of the extent to which the bounded versus unbounded features to be monitored determine the scale rather than the reverse, which presumes scale determines the appropriate indicators.

–This isn’t the place to argue the merits of any such alternative reading. The bigger point here is that instead of treating the dominant pastoralism narrative—and let’s be honest, in a world of this many pastoralists, versions of the dominant policy narrative as well as counternarratives must be expected—as if its/their every assumption could be major and all major assumptions require major attention, we are doing something very different by means of the italicized rewrite.

We’re relying instead on already-existing sustainability narratives to home in on what primarily matters, at least from their perspectives. In our case, what matters as a priority is better thinking through measures (qualitative/quantitative, broadly writ) of acceptable group behavior under conditions of high uncertainty across multiple scales.

–Which then raises the question: How do I know I’ve identified the “right” sustainability narrative from the many out there with which to parse the dominant narrative?

But that is the wrong question. There is no right choice; there are only more or less useful ones. Or probably closer to the truth: There is no right choice, but many wrong ones, and understanding in what way, and how much, a choice is wrong is more useful than the call to get it right.

By way of example, given my policy and management perspective, it’s more useful for me to ask at a practical level: Just what are the bandwidths associated with, say, place and time in the herding itinerary?

I’m thinking here of the very useful insight of Saverio Krätli that pastoralism is a “livestock based livelihood/production system specialised in taking advantage of variability and centered on managing grazing itineraries at a variety of scale”. Does that scale, presumably both spatial and temporal, imply bandwidths of a particular kind?[1]

More, if such bandwidths do exist, they are not set by the control-imaginaries of rangeland ecologists, water point engineers or livestock veterinarians, but in practice by those following such itineraries—where “practice” here is more consonant with, say, pastoralist participatory gaming and simulation exercises than with credentialed experts isolating their grid maps and log frames.

If I had to summarize the above into one take-home message, it is: The most important part of the expression, “sustainable livelihoods,” is that last “s.”

[1] Geoffrey Grigson, the poet, writes: “I am a walker. I like the walker’s broken and restored rhythm. I take paths, including animal ones, which permit latitude, are uneven, allow pauses. My walk varies the variation of what is given.”

Long-terms, short-terms, and short-termism

–So much of what we hear and read about major policy issues sounds like short-termism. Why aren’t more people taking the long-term seriously? Where’s the widespread recognition that temporizing doesn’t address—nor can any short-termism address—the fundamental issues driving these crises, be they global climate change (GCC), global financial meltdowns, pandemics, income inequality, or others?

A starting typology. Consider a 2 x 2 typology. One dimension is your ready-made about short-termism versus long-termism.

The other dimension captures two distinct features of the long-term that characterize crises: the long-term conceptualized as the concatenation of discrete or punctuated short-terms versus the long-term as continuously unfolding trends and interactions segmented into (short-term) periods.

Rather than label each cell, representative statements for each are summarized in the examples that follow.

Key typology implications for long-terms and short-terms. Foremost, what differentiates “time” is not the “long term versus short term.” It’s better to say some crises are pegged to more than one “short/long” cell.

For example, take the healthcare crisis in Cell 2 (basically, “Increases in healthcare costs are unsustainable”) and now rescript it through the other three cells: “The problem in healthcare is that we can’t predict the kind of innovations necessary for adapting to the multiple futures we face here and today” (Cell 1); “Healthcare continues to be characterized by just a series of hardware and digital upheavals, one after another” (Cell 3); and “It’s not if the next pandemic of an unknown virus will happen but only when” (Cell 4).
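Read as a data structure, the rescripting just illustrated amounts to a lookup table: one crisis, four cell-specific statements. Here is a minimal sketch in Python, with the statements paraphrased from the healthcare example above; the variable and function names are my own.

```python
# The 2 x 2 typology as a lookup table: the same crisis (healthcare)
# rescripted as a representative statement for each cell. The cell
# numbering follows the text; the statements paraphrase the passage.

healthcare_by_cell = {
    1: ("We can't predict the kind of innovations necessary for "
        "adapting to the multiple futures we face here and today."),
    2: "Increases in healthcare costs are unsustainable.",
    3: ("Healthcare continues to be characterized by a series of "
        "hardware and digital upheavals, one after another."),
    4: ("It's not if the next pandemic of an unknown virus will "
        "happen but only when."),
}

def cells_for(crisis_statements):
    """A crisis can be pegged to more than one short/long cell."""
    return sorted(crisis_statements)

print(cells_for(healthcare_by_cell))  # healthcare spans all four cells
```

The point of the sketch is only that “which cell?” is a property of the statement made about a crisis, not of the crisis itself.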

Time is always more complicated in crisis scenarios than long-term v. short-term. In the typology, time varies for two primary reasons: because, as we just saw, the same crisis can reflect multiple short/long preoccupations, and because the matching of crises to different cells is a major way the passage of time emerges and is tracked.

Different timelines, in effect, track different trajectories of not-knowing, inexperience and difficulty. The latter look very different if, e.g., that “second surge” of COVID-19 is conceptualized in Cell 2 or in Cell 3 terms. In Cell 2, we have learned something by the time the second surge takes off; in Cell 3, so much may have worsened as a result of COVID-19 and so changed are the initial conditions that any “second surge” may well imply starting to learn all over again.

Even then, a Cell 3 focus—one US secretary of state called foreign policy one damn thing after another—has the great virtue (and I believe it is a virtue) of reminding people that there are cases where predicting the future is difficult precisely for the same reasons learning from the past is: Both require stability in objectives, institutional memory, multiple reserves in case something goes wrong, and low environmental uncertainty, among other factors.

–To some the absence of these preconditions is negative; to me, even where negative, these highlight the many other policy areas where the preconditions do exist to varying degrees of not-knowing, inexperience and difficulty.

For example, some regional climate change modeling is of such a high resolution today that model results can be and are in some cases disaggregated in ways that cross several of the above cells. It’s now possible to project estimates for rising sea-levels, storm surges and inland flooding in, say, 20-year increments to better reflect already existing near- and longer-term cycles for region-based infrastructure equipment/plant depreciation and longer term investments, among others. These cycles may overlap but by no means necessarily coincide.

Does this mean uncertainty about climate change impacts is reduced in this case? Of course not. The point is that model disaggregation increases the confidence of decision-making by helping to triangulate (no guarantees here!) on what has already been planned using fewer factors and methods.

The typology’s very major implication for short-termism. Another feature of the typology is subtler but more important for rethinking any complaint about short-termism in crisis response: In all cells, the future is a hypothesis we have yet to finish with.

The Cell 1 hypothesis is that the crisis comes about because we know we can’t predict the future; Cell 4 hypothesizes that its crisis arises precisely because we know and can predict the future. Cell 2 conjectures that because we know the present we know the future will differ significantly; while Cell 3 puts forward that we already know that present and future will be alike, and importantly so.

But why, “hypothesis”?

A core issue moves to the fore when we focus on the nature of the present in any of the four cells. Where, specifically, in discussing the long and short terms does “not knowing the present” come into play? (Go back to the typology’s four cells and ask of each: “What are the implications of not knowing the present stipulated in this cell?”).

–Whatever your answer, one implication is very clear: Not knowing the short-term we’re in scarcely constitutes short-termism. In fact, conventional short-termism—the present matters more than the future, period—requires more certainty and confidence than is warranted by the respective hypotheses. Our challenge remains “to foresee the present,” as Turgot, the 18th century French economist and statesman, put it long ago. Trying to foresee the present, to track how control room decisions play out over time, is not short-termism.

Major implication for theory and practice. One example of the major implications for theory and practice will have to suffice. Any number of theories predict very major disasters are headed our way, disasters which haven’t yet appeared only because it hasn’t been long enough for them to do so.

Nuclear power plant explosions will eventually be normal occurrences because their irremediable technological tight coupling and complex interactivity make disaster unavoidable. We haven’t seen enough of them to realize that. Global financial meltdowns and market contagions lead to herd behavior and mob violence, eventually bringing the system down. That there haven’t been more of these shouldn’t dissuade us from seeing what’s coming.

–Long enough, you tell us, and all of this will be abundantly clear—UNLESS, you underscore, we act now to prevent their occurrence.

Well, ok; sure.

But, humor me: Just what cells are you talking about?

Are you saying that nuclear plant explosions and market meltdowns are inevitable because of the long-run trends (Cell 2)? Or are you saying that since these disasters could happen in any of the four cells, they are “more likely” to happen? (No need, you say, for anything like a χ2 test for our 2 x 2 to know that!) Or are you saying something else with respect to your preferred cells and how to parcelize them?

“Actually, WHAT are you predicting?” In short: While many recommend the need for a long-term perspective, the above suggests that their “need” is more problematic than obvious.

Principal source: This is a much augmented, revised section from my “Licking the sharp edge of the sword,” Journal of Contingencies and Crisis Management (2018) 27(1): 1–7.

Betterment (continued)


As Henry David Thoreau put it in his Journals, “I do not know that knowledge amounts to anything more definite than a novel and grand surprise, or a sudden revelation of the insufficiency of all that we called knowledge before. . .”

Assume this is so and go on to ask, How can a politics, policy and management infused through and through with not-knowing, difficulty and inexperience be effective?

Effectiveness means those occasions for rethinking (recasting, redescribing, recalibrating, redefining) categories of politics, policy and management lived and worked by, including “regulation,” “failed states,” “politics,” “economic growth and progress,” and “betterment” itself. This happens when you (plural) realize how much depends on advancing to the decision point of “Yes but” and “Yes and.” As betterment pushes complex truths further, I call that good-enough.


Earlier definitions of betterment figured in versions of the 18th century European Enlightenment. The term was used interchangeably with “improvement” or “progress,” though from time to time singled out as its own unit of analysis (most famously in economist Adam Smith’s “the great purpose of human life which we call bettering our condition”).

The variety of Enlightenment thinkers, however, made it inevitable that not-knowing, difficulty and inexperience would be touched upon specifically. Voltaire discusses not-knowing in the entry “On the Limits of the Human Mind” of his Philosophical Dictionary; David Hume, Scottish Enlightenment philosopher, grappled with the acknowledged idea of “not-knowing as the key to the contented life,” according to one commentator; in the view of another, Adam Smith expressed “an open skepticism about the possibility of knowing definitively what it is we are really doing;” while Immanuel Kant notably wrote about “the unknowability of things-in-themselves.” “Full recognition of the importance of uncertainty and the unknowable in analysing economic processes is an eighteenth-century heritage. . .which cannot be emphasized too often. . .” writes a third observer.

As for difficulty, historian Jonathan Israel sketches its central role in the Radical Enlightenment: “Theories of progress, however, contrary to what many have assumed, were usually tempered by a strong streak of pessimism, a sense of the dangers and challenges to which the human condition is subject. The notion, still widespread today, that Enlightenment thinkers nurtured a naive belief in man’s perfectibility seems to be a complete myth conjured up by early twentieth-century scholars unsympathetic to its claims. In reality, Enlightenment progress breathed a vivid awareness of the great difficulty of spreading toleration, curbing religious fanaticism, and otherwise ameliorating human organization, orderliness, and the general state of health, and was always impressively empirically based.”

Nor was the role of inexperience remote to versions of the Enlightenment: “In the light of the triumph of Newtonian science, the men of the Enlightenment argued that experience and experiment, not a priori reason, were the keys to true knowledge,” writes historian, Roy Porter, where inexperience ironically became a touchstone for criticizing French Enlighteners: “Above all, critics complained, in politics the philosophes lacked the quality they pretended to value most: experience.” Yet, the almost universal priority given to education by Enlightenment advocates across a wide spectrum reflected their acknowledgement that more education meant, acutely, more experience.


These earlier nods toward not-knowing, difficulty and inexperience take us to today’s “and-yet” betterment of yes-but and yes-and. For their growing centrality has brought useful complications to Yes versus No when it comes to a good-versus-bad politics and policymaking.

The ethnographer and writer, Michel Leiris, writes about the need “to merge the yes and the no.” “Between yes and no” is the title of an early essay by Camus. Nietzsche “said no to life as long as it was easy, but yes when it took on the form of the impossible”. The work of Elizabeth Bishop was “perhaps more a quiet no than a great big yes,” according to another poet. More severe, “Herman Melville praised Nathaniel Hawthorne: ‘He says NO! in thunder; but the Devil himself cannot make him say yes. For all men who say yes, lie’”, records the critic, Christopher Ricks, who then asks: “But what about saying, ‘Yes, but…?’”

Ricks is spot-on. In the same way as dark energy and dark matter are said to make up the vast portion of the universe, politics, policy and management are grasped only because of—not in spite of—the not-knowing, difficulty and inexperience, all around and in between.


This betterment is not possible unless you (plural again) recognize how exaggerated many stopping points are in decisionmaking. For example: To govern is to choose. But choose between an irresistible-Yes and an unmovable-No? Better to say, as many have before: No one governs innocently.

Kant’s Enlightenment exhortation—Dare to know! (Sapere aude!)—is taken from the Roman poet, Horace: “Dimidium facti qui coepit habet: sapere aude: incipe” (“To have begun is to be half done; dare to know; start!”). Some highlight not only the dare-to-know, but the charge in the word, “incipe,” as in: “Get started now!” My eye, though, gravitates to that first clause, “To have begun is to be half done”. Far too much of our truth-telling stops short of the indispensable push ahead.


And what is that indispensable push ahead? It is to move from stopping short at progress and economic growth to pushing further, on to that good-enough betterment.

If you will, we have over-invested in economic growth on the premise that the knowledge acquired in this way takes us further than we could otherwise go. But the good-enough betterment discussed here goes further than economic growth. Instead of insisting practical good-enough falls short of true progress, it would be truer to say that progress and economic growth are overdeveloped versions of good-enough betterment.

The 19th century French poet, Lamartine, wrote “Utopias are often just premature truths”—which is precisely the shortcoming I have in mind: Stopping short at progress is premature. How then is betterment better? More than anything else, it has the maturity of a yes-but and yes-and grounded in complexity.