The irreducible particularity of beings

“I have never asked that all trees have one bark” (Gotthold Ephraim Lessing, Nathan the Wise)

I

Each angel is its own species, Thomas Aquinas tells us. “Why mightn’t there be, somehow, a new science for every object?” asks Roland Barthes. In those claims I see the rightness and certainty of context that I also find in the lines of A.R. Ammons:

though I
have not been here long, I can
look up at the sky at night and tell
how things are likely to go for
the next hundred million years:
the universe will probably not find
a way to vanish nor I
in all that time reappear.

Why not each its own science and species, having-been for the rest of eternity?

II

Is our sense of incompleteness the felt part of an irreducible particularity of each being, that sense we never body forth as representative or total? This sense has been professed by very different world systems and believers:

“The [French] Constitution of 1795, like its predecessors, was made for man. But there is no such thing as man in the world. In my lifetime I have seen Frenchmen, Italians, Russians, etc.; thanks to Montesquieu, I even know that one can be Persian. But as for man, I declare that I have never in my life met him; if he exists he is unknown to me,” declared the conservative critic Joseph de Maistre.

Or consider the more recent lines of the poet Fernando Pessoa:

They spoke to me of people, and of humanity.
But I’ve never seen people, or humanity.
I’ve seen various people, astonishingly dissimilar,
Each separated from the next by an unpeopled space

Major Read: The siloing of approaches to discourse and narrative analyses in public policy

Imagine you are a new graduate student assigned to undertake a comprehensive survey of the many narrative analytical approaches to public policy and management. All of this is fresh for you and, naturally, you look first for current literature reviews of primary approaches. What would be among your first findings?

By way of answer, a major review of 25 years of discourse analyses (DAs) around policymaking (Leipold, Feindt, Winkel & Keller 2019, p. 448) notes:

Although its name resembles Roe’s (1994) Narrative Policy Analysis, [the Narrative Policy Framework, NPF] is not part of the landscape of discourse analysis covered in this special issue. NPF scholars clearly distance their work from post-structural ontologies and methods and position NPF as a positivist alternative to the study of policy discourses and narratives. Their aim is to provide an analytical basis for an ‘empirical’ access to socio-cultural meaning structures (i.e. generated through the analysis of large sets of text data) that stands alongside established ‘positivist’ frameworks for the analysis of the policy process (for an overview of these approaches see Weible & Sabatier, 2017). . . .

with the review authors adding that “discourse analytical approaches were ‘ignored’ in political science handbooks ‘largely on grounds that they don’t follow scientific norms’ of ‘clarity, hypothesis-testing, acknowledgement of uncertainty, etc.’ (Sabatier & Weible, 2014, p. 11).” As the passage more than hints, the same exclusionary stance that discourse analysis takes toward NPF is found in the NPF literature with respect to DAs.

Take, by way of example, a major review of decades’ work on NPF (Schlaufer, Kuenzler, Jones & Shanahan 2022). Both that review and the Leipold et al (2019) review call for more interdisciplinary, cross-context research. But our graduate student quickly notes that the NPF review does not pursue the facts:

  • That discourse analysis has also analyzed policy narratives in the same areas as NPF, notably the environment;
  • That policy processes, a core focus of NPF, have also been a core focus of discourse analysis: “How does discourse analysis contribute to our understanding of environmental policy processes. . .?” (Leipold et al 2019, p. 446);
  • That a central place given in NPF to heroes, villains, victims and plot in policy narratives has also been a centerpiece of other narrative approaches, notably the narrative-networks model (NNM) of political scientist Helen Ingram and her colleagues, Raul Lejano and Mrill Ingram (e.g., Ingram, Ingram & Lejano 2019); and 
  • That mixed methods and interviews, found to be prominent in the NPF review, are equally prominent in other narrative analytical approaches. To take one from many examples, Q-methodology is used in the NPA work of van Eeten (2001), the discourse analysis work of Stevenson (2019), and the NPF work of Brumbaugh & Rupp (2020). Q-methodology and interpretative policy analysis (IPA) have had a long association as well (e.g., Lundberg 2020).

Such studied disregard, it must be stressed, has been mutual. Where the discourse analysis review mentions the Weible and Sabatier framework of processes, it lists by name only allied discourse approaches; the NPF review lists by name only the Weible and Sabatier processes, seeing no need to recognize others, however well-established.

So what?

The reader might now expect me to make the case for cross-fertilization, or at least better interchange, between and among the narrative analytical approaches. The aim here is more modest. An important step has to be taken before pinpointing areas of mutual benefit: attempts at cross-fertilization are premature where the methodological reasons that render narrative analytical approaches different are not first better identified and appreciated.

These differences go well beyond conventional binaries of positivist versus post-positivist, quantitative versus qualitative, holistic versus reductionistic, incommensurable versus commensurable, and scientific versus interpretivist. Even where the terms have been used by founders and acolytes of the different narrative analytical approaches, the binaries have been complicated from the very get-go.

Quantitative versus qualitative—and yet the first example of formal quantitative network analysis among the approaches discussed here is the irrigation case study in Narrative Policy Analysis (1994). Positivist versus post-positivist—and yet you can imagine the perplexity of those in other narrative analytical traditions, grounded as they are in case analysis, at the statement, “The NPF brought empiricism to the study of narratives” (Schlaufer et al 2022, p. 261). Scientific versus interpretivist—and yet major semioticians (notably, A.J. Greimas [1987, pp. xl–xli, 121ff]) would have treated as risible any intimation that theirs wasn’t a rigorous, scientific project.

The argument is that, in light of these and like complications, there needs to be greater clarity about under-acknowledged methodological issues prior to comparing these narrative analytical approaches and drawing lessons from and for each other. Three deserve special mention.

Different approaches ask different questions.

Each of the respective narrative approaches proffers different interrogations of policy narratives. This would be banal, were it not for the logical and empirical implications.

Perhaps the best example is NPA when compared to both NPF and DAs: the former asks how narratives underwrite and stabilize the assumptions of decision making under uncertain and complex circumstances; the latter two approaches make no such claim. For them, as for other approaches, narratives are stories with various uses, including—and certainly not excluding—that of persuading others of what is already taken to be known or true. 

Being clearer about key-question differences would aid in identifying areas of useful cross-approach interchange. For example, the NPF review notes: “However, not much is known about how these narratives affect the policy process or policy outputs. That is, where policy narratives originate, whom they impact, and to what effect are all important policy narrative questions that are rarely addressed simultaneously” (Schlaufer et al 2022, p. 259). That latter point, nevertheless, is a settled matter from the perspective of some narrative approaches. It’s precisely how narratives originate, whom they impact, and what are their other effects that are demonstrated in the environmental governance focus of NNM. 

Also, NPA was extended to demonstrate how infrastructure operators and emergency managers underwrite and stabilize real-time decision making under turbulent conditions. Systemwide failure scenarios—a type of policy-relevant narrative—affect real-time decisions, even if (and especially because) some contingencies cannot be narrativized at that point in decision making. More, the well-documented real-time use of failure scenarios complicates any kind of micro-meso-macro distinction (a key feature of the NPF, for example). Stabilization, even in normal operations in large critical infrastructures, takes place frequently in the form of team situation awareness of systemwide operations interconnected with the operations of other critical infrastructures (Roe & Schulman 2008, 2016). 

“Meso-levels” may become clear only later on, though that too can’t be assumed beforehand. Nor is a persisting lack of clarity over levels-of-analysis unfamiliar in the social sciences. “Interdependence” has long been notorious for the absence of agreed-upon empirical measures (see La Porte 1975/2015).

Methodological advances in the approaches are not cabined to scholarly confines.

Reviews of the narrative analytical approaches are not as comprehensive as they might first appear. This happens especially when the gray literatures of practicing policy analysts, public managers and scholar-activists are not canvassed. The reasons are various, including but not limited to the case material not centering on, e.g., discourse coalitions as in many DAs (Hajer 1993), the more-than-human in the narrative networks of NNM (Ingram et al 2019), the four-step method of an NPA (Roe 1994), or the specific assumptions of NPF (Schlaufer et al 2022).

Consider a timely example. A recent working paper, Understanding the role of narratives in humanitarian policy change, published by the London-based Overseas Development Institute (Saez & Bryant 2023), is explicitly produced for a non-scholarly audience and states it is grounded in NPF. Yet it is not clear how the working paper’s highly original and useful Figure 1 (ibid., p. 16) derives from the NPF (note too the wider appeal to “discourses” at the figure’s center).

Arguably more important, Figure 1 highlights the fact that policy narratives can be geared around delaying rather than supporting this or that policy and management (for other examples, see Hermann 2024). In other words, different narrative analytical approaches must be assumed to have different granularities, irrespective of (in spite of) any prejudgments about “levels of analysis.” 

The differences in granularities open up all manner of possibilities for recasting (think: renarrativizing) so-called wicked policy problems (Roe 2023). For example, while DAs from the get-go focused on “the policy process,” their more recent developments differentiate by way of specifics between and among “maneuvering within a given discursive framework, navigating between different and conflicting discourses, or transforming existing discourses” (Lynggaard & Triantafillou 2023, p. 1).

The methodological challenge of evaluating different types of policy narratives. 

Although ubiquitous, the term “policy narratives” is not helpful unless the narratives in question are first differentiated. We just saw the methodological need to distinguish and take note of the more granular uses of failure scenarios in large socio-technical systems. It is quite another matter to single out policy narratives that are conspiracy theories, which various leaders, past and present, believe and act upon in their advocacy and, sometimes, policymaking.

The basic question here is the perennial one of how to evaluate different types of policy narratives, or more specifically: What are the evaluative criteria, if any, of the different narrative analytical approaches? Suffice it to say, there are approaches that rely on the policy processes they study or assume for making any such evaluation. Other approaches have evaluative criteria independent of those processes, or profess none at all.

An example helps. A complete policy narrative in NPA meets several criteria, including: the narrative—its story with beginning, middle and end, or argument with premises and conclusions—is one that takes seriously that the policy or management issue is complex, uncertain, and/or conflicted. One chief assumption of conspiracy theories is that nothing happens by accident (Barkun 2013). From the NPA perspective, such narratives are inferior when compared to other narratives that posit accidents, happenstance, chance and luck happening in and through policy processes for the same issues.

This is not an argument for one approach over another—rather, the point again is methodological. In particular, it might be far better to think of evaluative criteria as a major link between policy narratives and the policy processes, including the latter’s technologies and actionable contingencies, in which the narratives are embedded.

Upshot.

A mentor, political scientist Aaron Wildavsky, famously wondered: If planning is everything, maybe it is nothing. So too might all of us want to ask the same of those policy narratives said to be everywhere and important for almost everything. Are we overthinking narratives (see Brooks 2022)?

As the start to an answer, a hypothesis is proposed: A less-recognized but major consequence of the siloed approaches to the study of policy narratives has been to slight and confuse the role of very real, very messy policy processes in determining and differentiating types of and criteria for narratives (Roe 2013).

Sources

Barkun, M. (2013). A Culture of Conspiracy: Apocalyptic Visions in Contemporary America. Second Edition, Berkeley, CA: University of California Press.

Brooks, P. (2022). Seduced by Story: The Use and Abuse of Narrative. New York, NY: New York Review Books.

Brumbaugh, A. and J.H. Rupp (2020). Wabash CarbonSAFE. Subtask 4.1 – Application of Policy Frameworks for Improved Carbon Capture and Storage. Social Site Characterization & Stakeholder Engagement. Report Number: DOE/ FE0031626‐2. U.S. DOE Cooperative Agreement Number: DE-FE0031625, Champaign, IL.

Greimas, A.J. (1987). On Meaning: Selected Writings in Semiotic Theory. Trans. P. J. Perron and F. H. Collins, Theory and History of Literature, Volume 38, Minneapolis, MN: University of Minnesota Press.

Hajer, M. (1993). Discourse coalitions and the institutionalization of practice: The case of acid rain in Britain. In F. Fischer & J. Forester (Eds.), The Argumentative Turn in Policy Analysis and Planning (pp. 43–76). Durham, NC: Duke University Press.

Hermann, M. (2024). ‘Narratives of Delay’: How the Animal Pharma Industry Resists Moves to Curb the Overuse of Antibiotics on Farms. (Accessed online at https://sentientmedia.org/animal-pharma-industry-overuse-of-antibiotics/)

Ingram, M., H. Ingram and R. Lejano (2019). Environmental Action in the Anthropocene. Journal of Environmental Policy and Planning 21(5): 492–503.

La Porte, T. R. (Ed.). (1975/2015). Organized Social Complexity: Challenge to politics and policy. Princeton, NJ: Princeton University Press.

Lejano, R., M. Ingram, and H. Ingram (2013). The Power of Narrative in Environmental Networks. Cambridge, MA: MIT Press. 

Leipold, S., P. Feindt, G. Winkel & R. Keller (2019). Discourse analysis of environmental policy revisited: traditions, trends, perspectives. Journal of Environmental Policy & Planning 21(5): 445–463.

Lundberg, A. (2020). Viewpoints about Educational Language Policies: Multilingualism in Sweden and Switzerland. Malmö Studies in Educational Sciences No. 90, Faculty of Education and Society, Malmö University, Sweden.

Lynggaard, K. and P. Triantafillou (2023). Discourse analysis and strategic policy advice: manoeuvring, navigating, and transforming policy. Journal of European Public Policy, DOI: 10.1080/13501763.2023.2217846

Roe, E. (1994). Narrative Policy Analysis: Theory and Practice. Durham, North Carolina: Duke University Press.

——- (2013). Making the Most of Mess: Reliability and policy in today’s management challenges. Durham, NC: Duke University Press.

——- (2023). When Complex is as Simple as it Gets: Guide for Recasting Policy and Management in the Anthropocene. IDS Working Paper 589, Brighton: Institute of Development Studies, DOI: 10.19088/IDS.2023.025

Roe, E. and P.R. Schulman (2008) High Reliability Management, Stanford CA: Stanford University Press.

——- (2016). Reliability and Risk, Stanford CA: Stanford University Press. 

Saez, P. and J. Bryant (2023). Understanding the Role of Narratives in Humanitarian Policy Change. HPG working paper. London: ODI (https://odi.org/en/publications/understandingthe-role-of-narratives-in-humanitarian-policy-change).

Schlaufer, C., J. Kuenzler, M.D. Jones, & E. Shanahan (2022). The Narrative Policy Framework: A Traveler’s Guide to Policy Stories. Politische Vierteljahresschrift 63:249–273.

van Eeten, M.J.G. (2001). Recasting Intractable Policy Issues: The Wider Implications of the Netherlands Civil Aviation Controversy. Journal of Policy Analysis and Management 20(3): 391–414.

Weible, C.M., & Sabatier, P.A. eds. (2017). Theories of the Policy Process (4th ed.). Boulder, CO: Westview.

As for ChatGPT, they know what they’re doing

Apologies in advance for the length, but the quoted extract is highly illuminating!

Recently, I came across the work of a graduate student, whose identity it is charitable to keep hidden. The text was written rather well and included a critical review of the experimental studies conducted on one of my research topics: therefore, my name and the name of the colleague who worked with me on that line of research, Marco Marini, often appeared in the text, and consequently also in the final bibliography. Consulting the references, however, held a few surprises. Among other entries, the following were recorded:

–Marini, M. (2013). When it’s better to choose the one you love: The effect of attractiveness biases in consumer choices. Judgment and Decision Making, 8(5), 476-485.

–Marini, M. (2019). How to get people to take risks? A choice-based measure of risk preference. PloS One, 14(1), e0209983. doi: https://doi.org/10.1371/journal.pone.0209983

–Marini, M. (2019). Luring to a suboptimal option: The effect of payoff reduction in a risky choice framing. Judgment and Decision Making, 14(2), 198-207.

–Marini, M. (2020). The asymmetrically dominated compromise effect in a dynamic setting. Journal of Economic Psychology, 76, 102-257.

–Paglieri, F. (2009). The attractiveness of decoys in economic contexts: An experimental investigation. Judgment and Decision Making, 4(4), 335-342.

Formally, this bibliography extract is flawless: the entries are correctly formatted according to the standards of the American Psychological Association (APA), the relevant information is all present, the articles are consistent with the topic of the student’s assignment, and the titles of the various contributions are, objectively, quite intriguing. The only problem is that… none of these publications exist!

The incident was neither a brave, subversive act of provocation (to demonstrate that university instructors no longer read carefully the written assignments of their students), nor a symptom of terminal stupidity in the student (only a very dumb cheater would try to falsify the references of the very same people tasked with evaluating their work): instead, it was the outcome of a naïve and inappropriate use of generative AI. The student, after writing the assignment themselves and inserting the appropriate references in the text, using the author-date APA standard, had incautiously asked ChatGPT to prepare the reference list, giving it their own text as part of the prompt. Unfortunately, the software compiled a bibliographic list in full compliance with APA standards, but without any attention to the truthfulness of the information included therein.

Here, however, we are not interested in the student’s misadventures, but rather in how ChatGPT produced its output, which was certainly not random: there is method to this madness. Firstly, the journals in which the fake contributions would have appeared are plausible, both thematically, and because Marini and I have already published in those venues in the past, or in very similar ones. Secondly, the volume numbers that are mentioned refer to issues that have indeed been released, and usually the numbering and year of publication match; in one case, the entire reference (PloS One, 14(1), e0209983. doi: https://doi.org/10.1371/journal.pone.0209983) refers to an existing article, except that it is a study on a completely different topic, i.e. gender barriers in research at the South Pole (Nash, M., Nielsen, H., Shaw, J., King, M., Lea, M.-A., & Bax, N (2019), “Antarctica just has this hero factor…”: Gendered barriers to Australian Antarctic research and remote fieldwork).

The inconsistencies that emerge upon closer inspection are also revealing: the 2020 article attributed to Marini is listed as appearing between page 102 and page 257, except that there never was a single 155-page long contribution published in that particular journal, and probably not even in others, at least in the field of economic psychology; delving deeper, one discovers that the Journal of Economic Psychology, from 2020 onwards, no longer reports the page numbers of individual articles, but only their identification number, which is composed of a 6-digit code starting with 102, and the code 102257 (that ChatGPT creatively transformed into page numbers, 102–257) corresponds to the editorial of the issue following the one cited in the invented bibliographic reference.

At other times, the system falls prey to ambiguities of meaning: the decoy effect, which was the main focus of the student’s paper, is also referred to as the attraction effect in the literature, and the word “attraction” evokes the semantic field of affects, which instead has nothing to do with the technical phenomenon in question (i.e., a shift of preferences towards an option that is manifestly superior to another inserted ad hoc, called decoy). It is because of this semantic ambiguity that ChatGPT came up with a title like “When it’s better to choose the one you love: The effect of attractiveness biases in consumer choices” – a wonderful title, by the way, which I will certainly use, as soon as the opportunity presents itself.

In short, this false output is not due to anomalies or errors in the functioning of the software, but on the contrary it illustrates perfectly what ChatGPT is built to do (and does very well): generate linguistic strings (in this case, bibliographic entries) that have the maximum probability of satisfying the user’s request, based on similar instances present in the (huge) database to which the program had access during training. What ChatGPT does not do, and cannot do due to the way it functions (at least for the time being), is consulting the real world or an internal representation of it: the system does not work by checking the state of the world and describing it, but rather by constructing responses that are maximally consistent with the vast mass of linguistic data at its disposal, whose adherence to reality is by no means guaranteed.

https://link.springer.com/article/10.1007/s13347-024-00743-x (footnote deleted for readability)

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

More than a year ago, the Center for AI Safety issued a joint statement consisting of the one sentence quoted above. Famously, it was signed by more than 350 AI experts and public figures.

Now, of course, we cannot dismiss the actual and potential harms of artificial intelligence.

But, just as clearly, these 350 people must be among the last people on Earth you’d turn to for pandemic and nuclear war scenarios of sufficient granularity against which to appraise their AI crisis scenarios.

Just-enough reliability?

The more you aim for just-enough reliability, the more specific and narrow are the criteria of “just how good is just-good-enough” (“you must respond within x minutes of a call. . .”). Goal displacement is risked where, e.g., meeting government regulatory compliance is equated with ensuring the infrastructure’s continuous and safe provision of its critical service. In so doing, safe and continuous service reliability eventually falls by the wayside when system operators feel compelled to be fast enough with just enough, knowing this is never enough all the time.

If we want anything more by way of highly reliable service provision, then that is left to us, not so much as consumers or citizens, but as amateurs who are now to be their own reliability managers. So goes the economists’ wet dream of just-enough reliability. Gone are the days when anyone felt comfortable with discussions that include, “Elementary economics demonstrates that. . .”

A pincushion of etc’s

Start with what passes today as a cliché: “Policymakers also need to worry about those other factors—societal, political, economic, historical, cultural, geographical, governmental, psychological, technological, ethical, religious, etc—that are so undeniably part of policy analysis and management.”

That pincushion of “etcetera’s” indicates things are just critical enough to get a nod, but not so critical as to actually be named. So too Samuel Taylor Coleridge argued “matter” was treated like a pincushion whose surface was hidden by all the sensations, thoughts and properties stabbed into it.

You ask today’s version of “What are the important factors?” and you get a pincushion of responses affixed with all manner of “etc’s.” “Hail, Muse! Et Cetera,” as the poet Byron put it in the third canto of Don Juan.

And yet, writes Wittgenstein: “Again and again, my ‘etc’ has a limit.”

A better benchmark for income inequality

World is suddener than we fancy it.

World is crazier and more of it than we think, 
Incorrigibly plural.
                            Louis MacNeice, "Snow"

There are so many different programs, projects, activities and initiatives connected to “income inequality” that the immediate challenge is to compare and contrast them before drawing generalizations about anything like an [Inequality] bracketed off from really-existing variability.

The comparison is not so much at the level of that country’s family support program contrasted to this country’s family support program, when it comes to a capitalized benchmark called [Inequality].

The comparison is more across many family support programs, much along the lines that no single heart is the same as another, yet these many different hearts set the stage for recognizing patterns across really-existing ones. That pattern recognition is of inequality, with a small-i.

Complex is. . .

Jesus Christ having a lot to say, but wise enough not to write it down

Everyone having the right not to be killed by people they don’t know

“A thing is a hole in the thing it is not” (Carl Andre, artist)

Decorum demanding that Medea kill her children offstage and that Macbeth do the same for King Duncan

Each person on earth being allocated a randomly unique number: “This one is yours. It’s irreplaceable.”

Understanding that carbon pricing and cap-and-trade are easy to talk about because they’re hard to implement. How else to buy time to avoid all the other approaches that are quicker by being context specific?

Seeing that in Trump, Boris Johnson, Putin, and Xi Jinping, we are weaponizing a late-version of collapse with its very own celebrity brands


Principal sources available on request.

The difference between reliability professionals and active micro-operators: some livestock and pastoralist examples

I

Reliability professionals are central to translating statements of systemwide policies, laws and regulations into reliable real-time operations within and across the system infrastructures. This means reliability professionals are neither macro-designers located in the infrastructure’s headquarters nor micro-operators at individual facilities. Instead, they operate in between the macro- and the micro-levels, working in a very important middle domain within the infrastructure as a whole.

In this domain of expertise, infrastructure reliability in systemwide operations is achieved only when macro-designs are modified into different scenarios that take into account local conditions affecting infrastructure operations, and when the real-time better practices that have evolved across a diversity of really-existing cases of operations are applied so as to ensure achievement of the original reliability mandates of policies, laws and regulations.

II

For example, the land board’s longstanding policy may be that livestock watering boreholes should be spaced 8 kilometers (5 miles) apart in order to reduce the effects of overgrazing. Indeed, land board members and staff may still insist it is their policy, even when your map of actual livestock water boreholes shows conclusively that boreholes are not spaced 8km apart on the ground. Does your map of allocated boreholes mean the 8km rule is not really land board policy?

No, it doesn’t.

It is better to say that any such policy has to be modified in practice because site conditions, aquifers, range composition and livestock characteristics differ so much (e.g., the hardveld is not the same as the sandveld). Furthermore, actually-existing practices for siting and spacing livestock boreholes have been evolving across all the land boards and all their sitings, and this more up-to-date knowledge helps them in the placement of new livestock water boreholes (e.g., more knowledge and mapping now exist about the underground aquifers).

In other words, to say this map of livestock watering boreholes shows that the spacing policy was NOT in fact implemented misses the fundamental point that the policy was indeed implemented by land board members and staff in ways that cannot be attributed to their being expedient or corrupt, full stop. Even if the latter were true in some cases, no policy can be reliable if it is one-size-fits-all.

III

The chief implication of the preceding example is that the locus and focus of “implementation” shifts from micro-site—”drilling his borehole right here and right now”—to the middle domain where reliability professionals convert macro-policies into local contingency scenarios—”siting the borehole this side differs for us from siting the borehole that side”—and where better practices that have emerged out of all siting and spacing activities since the policy was adopted are used to modify new placements under the overall 8km policy.

This means that the micro-operators at any individual site—the drilling rig and operator, the borehole owner(s) and their specific herds and herders—are not the only unit and level of analysis for the actual here-and-now implementation of the 8km policy. Implementation of borehole siting and spacing also takes place when teams or groups of reliability professionals adapt borehole siting and spacing in light of both locally contingent conditions and newer systemwide practices developing across different conditions relevant for up-to-date, reliable borehole placement.

IV

This also means that active micro-operators and reliability professionals–or at least their roles–need to be distinguished from each other. One or two drilling rig operators may be preferred by livestock owners because of their skills in getting results. But these drilling rig operators are reliability professionals when they also work with land board members/staff in the latter’s effort to identify more reliable scenarios for actual sitings as well as more up-to-date systemwide siting/spacing practices. Here they are in the role of reliability professionals because they have a bigger picture of borehole siting and spacing than when they work as a single driller at a single site with a specific livestock owner.

Or take another example. When the paravet is great one-on-one, developing unique relationships with each of his or her clients, then s/he is a micro-operator. When that same paravet acts according to his or her official job definition—“A para-veterinary worker is a veterinary science expert who, as part of a veterinary aid system, performs procedures autonomously or semi-autonomously”—then that systemwide, team-based role marks him or her as a reliability professional. (Note these networks can be informal and not just formal ones.)

Last but not least, a case study rich in examples of networks of reliability professionals, involving pastoralists and others, is to be found in: Alex Tasker & Ian Scoones (2022). “High Reliability Knowledge Networks: Responding to Animal Diseases in a Pastoral Area of Northern Kenya,” The Journal of Development Studies 58(5): 968-988.

Rethinking early warnings for drought

Bells were increasingly used not only to summon people to church, but also to provide another prompt for a belief act to those laity who had not attended: the major bells were to be rung during the Mass at the moment of consecration of the Host, and from the late twelfth century onwards we find texts calling upon lay people to kneel and adore wherever they were at that moment…

John Arnold (2023). Believing in belief: Gibbon, Latour and the social history of religion. Past & Present, 260(1): 236–268. (https://doi.org/10.1093/pastj/gtac012)

I

I suggest that early warnings promulgated as part of official drought management systems are designed to be bells in the above sense: People are to demonstrate their belief in the warnings when issued. They are to take action then and there because of them.

But, as Arnold also reminds us, demonstration of obedience always entails the possibility of failure. Heeding the warning might not work.

Indeed, some early warning systems are designed to fail because they are meant also for non-believers. The latter include, most notably for our purposes, those who subscribe to other types of warnings (e.g., https://pastres.org/2023/05/12/local-early-warning-systems-predicting-the-future-when-things-are-so-uncertain/).

This matters because the stakes are high when it comes to drought for both believers and non-believers. How so?

II

It is important to understand the conditions under which the designers themselves don’t believe in their own bell-ringing systems. In their article, “Drought Management Norms: Is the Middle East and North Africa Region Managing Risks or Crises?,” Jedd et al. (2021) examine the efficacy of official systems in the MENA region. They conclude:

Drought monitoring data were often treated as proprietary information by the producing agencies; interagency sharing, let alone wider publication, was rare. Government officials described the following reasons for this approach. First, it could create pressure on decision-makers to take action (politicizes the issue). Second, intervention measures are costly, and so, taking measures creates strong and competing demands for financial resources from agencies and/or ministers (increase political transaction costs). Therefore, given existing policies and institutions in the countries, it is unclear to what extent drought decision-making processes would be improved or expedited with increased transparency of monitoring information. . . .

This creates a difficult puzzle: In order to mitigate future drought losses, a clear depiction of current conditions must be made publicly available. However, publishing these data may require that agencies take on the burden of allocating relief if the release of this very information coincides with a future drought crisis.

https://journals.sagepub.com/doi/10.1177/1070496520960204

III

So then the obvious policy and management question is: When it comes to the efficacy of early warnings for drought, who do you want to start with: believers or non-believers?