More on innovation in unstudied conditions
What good is trial-and-error learning when a system’s massive error means no trials are possible afterwards? You do not want to push an infrastructure’s control operators into prolonged unstudied conditions and then wonder why they aren’t reliable.
Some think otherwise. “First off,” the project designer tells us, “I’m always working in unstudied conditions. Every major project, I’ve got to make all manner of assumptions.” I counter: The challenge for project designers is to find out what the better practices are for starting off complex project designs. Here I mean the really-existing practices that have emerged and been modified over a run of different cases and shown themselves to be more effective for design implementation.
“Yet how can a field or discipline grow if it doesn’t move into unstudied conditions by doing something the first time…” This is often stated as established fact. But here too better practices are to be first searched for. Indeed where they aren’t found, then, yes, systemwide innovation should not be undertaken if it reduces options, increases task environment volatility, and diminishes maneuverability across real-time complex system operations.
“But, there always has to be someone who does something for the very first first-time!” At this point, Dutch bluntness is called for: “The burden of proof is on you to demonstrate this is indeed the very first time.” This is a planet of 7-plus billion, after all.
“But still,” the economists press: “What about the critical role of innovation in the economy!” Well, yes, but so too are the infrastructures critical upon which the innovation economy depends. To treat innovation as more important than the infrastructures (without whose reliability there wouldn’t be most innovations) risks Mercator’s projection: It distorts by over-enlarging the already large.
Innovationists don’t see it that way. The risks they take end up being the price few of the rest of us ever thought we’d have to pay.
More on constructed reality that constructs
Say you are on one of the upper floors of a skyscraper, looking out onto the fall morning. That is Reality I: You are the observing subject looking out at reality. After a point, you realize that spot in the distance is actually a plane headed toward you, there in the World Trade Center. That is Reality II: You become the object of reality, in the grip of the real, and no longer just the observer.
There is, however, Reality III. This is that of the air traffic controllers during 9/11. Neither the observer of the first reality nor the object of the second, these professionals achieved the unprecedented without incident that day. They were instructed to land all commercial and general aviation aircraft in the United States—some 4,500 aircraft—and did so.
Without overdrawing the point, so too must we see that professionals land water, electricity, transportation, telecommunications, and many more critical services every day without major incident.
More on apocalypticism
“Actually-existing capitalism is a catastrophe”?
Catastrophism, to be about anything, has to be about the end, as in: It ends in fire, our institutions explode and burn—or in ice, our institutions seize up, implode inward and entirely. Always-late capitalism, on the other hand, is about ensuring that things going its way do not end any time soon (i.e., ensuring that in the long run there’s just another short run). You’d be right in saying the engine of always-late capitalism is to generate seriatim uncertainties on which and from which to speculate and make money.
(The irony is extreme, however: “[T]he revolution does not know the secret of the future, but proceeds in the same manner as capitalism, exploiting every opening that presents itself”—Georges Sorel, French political philosopher.)
More on market failures
Economists tell us there are four principal types of market failure: public goods, externalities, asymmetric information, and market power. They do not talk about the fifth type, the one where efficient markets actually cause market failure by destroying the infrastructure underlying and stabilizing markets and their allocative activities.
–Consider here the 2010 flash crash of the U.S. stock market. Subsequent investigations found that market transactions happened so quickly and were so numerous under conditions of high-frequency trading and collocated servers that a point came when no liquidity was left to meet proffered transactions. Liquidity dried up and with it, price discovery. ‘‘Liquidity in a high-speed world is not a given: market design and market structure must ensure that liquidity provision arises continuously in a highly fragmented, highly interconnected trading environment,’’ as a report by the Commodity Futures Trading Commission (CFTC) put it after the crash. Here, efficiencies realized through high transaction speeds worked against a market infrastructure that would have operated reliably otherwise.
The economist counters by asserting, ‘‘Obviously the market was not efficient because the full costs of reliability were not internalized.’’ But my point remains: Market failure under standard conditions of efficiency says nothing about anything so fundamental as infrastructure reliability being foundational to economic efficiency.
–The research challenge is to identify under what conditions the fifth market failure arises empirically. Until that is done, the better part of wisdom—the better part of government regulation—would be to assume fully efficient markets are low-performance markets when the stabilizing market infrastructure underlying them is prone to this type of market failure. Put positively, highly reliable markets are productive and sufficiently efficient when the underlying market infrastructure is not prone to the destabilizing fifth type of market failure.
But what, then, is “prone”? Low-performing market infrastructure results from the vigorous pursuit of self-interest and efficiencies that hobble real-time market infrastructure operators in choosing strategies that ensure longer-term high reliability of the market infrastructure. (Consider the notion of “competitiveness,” which is nothing more than the red-meat economism of cost slashing, whatever the effects on labor productivity or economic efficiency.)
–There is another way to put the point: High reliability management of critical infrastructures does not mean those infrastructures are to run at 100% full capacity. Quite the reverse. High reliability requires the respective infrastructures not work full throttle: Positive redundancy or fallback assets and options—what economists misidentify as “excess capacity”—are needed in case of sudden loss of running assets and facilities, the loss of which would threaten infrastructure-wide reliability and, with it, price discovery. To accept that “every system is stretched to operate at its capacity” may well be the worst threat to an infrastructure and its economic contributions.
In this view, critical infrastructures are economically most reliably productive when full capacity is not the long-term operating goal. Where so, efficiency no longer serves as a benchmark for economic performance. Rather, we must expect the gap between actual capacity and full capacity in the economy to be greater under a high reliability standard, where the follow-on impacts for the allocation and distribution of services are investments in having a long term. In particular, people typically think of real-time economic stability in comparison to the past.
How stable it was before is the retrospective view. Prospectively, however, that economy is only as reliable as the next downturn ahead. Economic growth has prospective reliability to the extent critical infrastructures and their link to productivity are the driver. This means the relationship between the economic short run and long run changes with the development of infrastructures and mandates for their reliability.
More on the neoliberal status quo
We hear much about those stopped up short by “the unimaginability of any alternative to the neoliberal status quo.” (At the risk of anachronism, here’s Theodor Adorno, the left theorist and critic, in 1956: “The horror is that for the first time we live in a world in which we can no longer imagine a better one.”) Surely, though, that’s a glove pulled inside-out. Isn’t it better to say neoliberalism generates so much contingency and uncertainty that it undermines a conventionally understood “status quo”?
There is no place that fixes (in both senses of the word) our understanding or unease. It’s the status quo as full-stop-stable that is unimaginable. Or to put the same point from a direction some insist is imaginable: “A crisis is defined as ‘stable’ if neither side has or perceives an incentive to use nuclear weapons first out of the fear that the other side is about to do so.”
That is, more of us are in the margins than we know, and that is where we best belong. As if we could ever rely on policymakers and public managers to know where the status quo is!
More on traffic congestion
The minute you take the significance of the car to be something other than the source of traffic congestion; the minute you see how luck matters in that congestion;. . .
the minute you know it’s a miracle there aren’t more agencies and groups fighting over stewardship rights in better addressing traffic congestion;. . .
the minute you see that no one is going to compensate you for being stuck in traffic, that life is sometimes unfair because other parts of it aren’t, and that situations, like congestion, can be improved, though not for very long; . . .
the minute you see that the traffic jam is the herd behavior of a people intent on imitating others; the minute you see that those strategies and arguments in favor of reducing congestion (privatization of public infrastructure, congestion tolls, full-cost pricing of cars) lead to pressures to increase wealth and thus economic growth and along with it having more cars;. . .
. . .that is the minute you start to rethink traffic congestion.