Recasting seven concepts and assumptions in contemporary policy analysis and management

1. Opportunity costs

The cost of the forgone alternative is core to graduate training in policy analysis. But there are huge problems with this concept. Start with a 1977 conversation between Nicholas Kaldor, the Cambridge economist, and his Colombian counterpart, Diego Pizano.

Kaldor asserts: “There is never a Pareto-optimal allocation of resources. There can never be one because the world is in a state of disequilibrium; new technologies keep appearing and it is not sensible to assume a timeless steady-state” (Pizano 2009, 51). Pizano counters that the concept of opportunity costs still makes sense, even when market conditions are dynamic and unstable. But Kaldor insists,

Well, I would accept that there are some legitimate uses of the concept of opportunity cost and it is natural that in my battle against [General Equilibrium Systems] I have concentrated on the illegitimate ones. Economics can only be seen as a medium for the “allocation of scarce means between alternative uses” in the consideration of short run problems where the framework of social organization and the distribution of available resources can be treated as given as heritage of the past, and current decisions on future developments have no impact whatsoever. (Ibid, 52)

Consider the scorpion’s sting in the last clause. Even if one admits uncertainty into a present shaped by the past, a dollar spent now on this option rather than on that current alternative could still have no impact on the allocation of resources for the future ahead of us.

Why? Because markets generate resources and options, not just allocate pre-existing resources over pre-existing alternatives. “Economic theory went astray,” Kaldor added, “when theoreticians focused their attention on the allocative functions of markets to the exclusion of their creative functions, which are far more important since they serve as an instrument for transmitting economic changes” (Ibid, 52).

Source
Pizano, D. 2009. Conversations with Great Economists. New York: Jorge Pinto.


2. “If implemented as planned…”

How many times have we heard something like, “If implemented as planned…,” “If the right structures are in place…,” or “Given market-clearing prices…”? They echo that older version: “Monarchy is the best form of government, provided the monarch possesses virtue and wisdom.”

“If implemented as planned,” when we know that is precisely the assumption we cannot make. “If the right structures are in place,” when we know that “right” is unethical without specifying just what the structures are, often case by case. “Given market-clearing prices,” when we know not only that markets in the real world often do not clear (supply and demand do not equate at a single price) but also that, even when they do, their “efficiencies” can undermine the very markets that produce those prices.

Admit it: We might as well believe that the surest way to heat the house in winter is to strike a match under the thermometer outside.

So, what to do? What follows in points 3–7 is what to assume instead.


3. The questions that matter

Someone asserts that this policy or approach holds broadly, and that should trigger you to ask:

  • Under what conditions?
  • With respect to what?
  • As opposed to what?
  • What is this a case of?
  • What are you–and we–missing?

Under what conditions does what you’re saying actually hold? Risk or uncertainty with respect to what failure scenario? Settler colonialism as opposed to what? Just what is this you are talking about a case of? What are you and I missing that’s right in front of us?


4. Economics and high reliability

Economics is as important to contemporary policy analysis as reliability and safety are to contemporary public management. At their limits, not only are the two in conflict, they are categorically different (i.e., they are not terms in a so-called trade-off).

Economics assumes substitutability: goods and services have alternatives in the marketplace. Infrastructure high reliability (which includes safety) assumes practices for ensuring nonfungibility: nothing can substitute, right now, for the high reliability of the critical infrastructures without which there would be no markets in which to select among those alternative goods and services. There is a point at which high reliability and trade-offs are immiscible, like oil and water.

One way of thinking about the nonfungibility of infrastructure high reliability is that it is economically irrecuperable in real time. The safe and continuous provision of a critical service, even during (especially during) turbulent times, cannot be cashed out in dollars and cents and paid to you in lieu of the service.

Which is to say, if you were to enter the market and arbitrage a price for the high reliability of critical infrastructures, the market’s transactions would be such that you could never be sure you were getting what you thought you were buying.


5. Differences in assumptions that matter

When I and others call for better recognition and accommodation of complexity, we mean that the complex, along with the uncertain, the unfinished, and the conflicted, must be contextualized if we are to analyze and manage case by granular case.

When I and others say we need more findings that can be replicated across a range of cases, we are calling for identification not only of emerging better practices across cases, modifiable in light of new cases, but also of greater equifinality: multiple, different pathways to achieving similar objectives, given case diversity.

What I and others mean by calling for greater collaboration is not more teamwork or working with more and different stakeholders, but that collaborators “bring the system into the room” for purposes of making the services in question reliable and safe.

When I and others call for more system integration, we mean the need to recouple the decoupled real-time activities in ways that better mimic, but can never reproduce, the coupled nature of the wider system environment.

When I and others call for more flexibility, we mean the need for greater maneuverability across different performance modes in the face of changing system volatility and options to respond to those changes.

When I and others call for more experimentation, we do not mean trial-and-error learning in which the next systemwide error turns out to be the last systemwide trial because the system does not survive it.

Where others talk about risks in a system’s hazardous components, we point first to different systemwide reliability standards and only then to the different risks and uncertainties that follow from those standards.


6. Prediction isn’t what you think it is

We are so used to the idea that predicting the future is more or less about accuracy that we forget how murky and unclear the present is. To paraphrase Turgot, the French Enlightenment philosopher and statesman, we have enough trouble predicting the present, let alone the future. Indeed, the future is not something up ahead or later on, but better understood as present prospection. As in: trying to predict the future is the current mess we’re in.

One implication is that to predict the future is to insist that the present messes can be managed differently. The notion that what will save us ahead has yet to be invented misses the more policy-relevant point that pulling out a good mess or forestalling a bad mess or taking on different messes today is also a way to change tomorrow. Think of it this way: The only place the future is more or less reliable is now, and only if we are managing our messes, now.

So what? This is why a risk estimate must never be confused with a prediction, that is, with the claim that if the risk is left unattended, failure is only a matter of time. But is your failure scenario detailed enough to identify the conditions of cause and effect upon which any such prediction would be founded? Without such a scenario, you cannot assume that more uncertainty means more risk; it may mean only more uncertainty over your estimate of the risk.
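One illustrative way to put that last distinction in symbols (the notation is mine, not anything from the sources above): treat the risk estimate as expected loss under an uncertain failure probability,

\[
\hat{R} = \hat{p}\,C, \qquad \hat{p} \sim F,
\]

where \(C\) is the consequence of the specified failure scenario and \(F\) is whatever distribution expresses our uncertainty about the failure probability \(\hat{p}\). Widening \(F\) increases the spread of \(\hat{R}\), that is, the uncertainty over the risk estimate, while \(\mathbb{E}[\hat{R}] = \mathbb{E}[\hat{p}]\,C\), the estimate itself, can remain unchanged. Only a shift in \(\mathbb{E}[\hat{p}]\) or in \(C\), which presupposes a detailed failure scenario, would amount to more risk rather than merely more uncertainty about the risk.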


7. Social trust and distrust

Almost all discussions of really-existing policy and management are colored by considerations of societal trust and distrust.

But trust is a good example of how a social value is specified and differentiated by and through infrastructures. Broader discussions about “trust requires shared values” miss the fact that team situation awareness of systemwide reliability operators is much more about knowledge management, distributed cognition, and keeping a shared bubble of system understanding than it is about “trust” as a singularly important social value.

For that matter, distrust is as core as trust. One reason operators are reliable is that they actively distrust that the future will be stable or reliable in the absence of the system’s vigilant real-time management. There has been much less discussion of the positive function of distrust as a social value. In contrast, “distrust” more often takes the adjective “polarizing.”

So too for the related “dread.” Widespread social dread–as in the societal dread that drives the reliability management of very hazardous infrastructures–is almost always taken to be negative. Here too, though, dread has a positive function.

Every day, nuclear plant explosions, airline crashes, financial meltdowns, massive water-supply collapses and more are avoided; they would have happened had operators and managers in these large systems not prevented them.

Why? Because societal dread is so intense that these events must be actively precluded from happening. (It might be better to say that we don’t know “societal dread” unless we observe how knowledgeable professionals operate and manage complex critical infrastructures.)

There is such fear of what would happen if large interconnected electricity, telecommunications, water, transportation, financial services and the like did fail that it is better to manage them than to do without them. We’ve structured our lives to depend on these systems, at least for right now.

All of us must of course wonder at the perversity of this. But that is the function of this dread, and of distrust for that matter, isn’t it? Namely: to push us further in probing what it means to privilege social and individual reliability and safety over other values and desires. We are meant to ask: What would it look like in a world where such reliability and safety are not so privileged?

For the answer to that question is altogether too evident: Most of the planet already lives in that world of unreliability and little safety. We’re meant to ask, precisely because the answer is that clear.

