What to do when policy articles keep ending where they should’ve started

In an analysis of 35 recent literature reviews on security implications of climate change, the authors point out:

A frequently voiced recommendation in reviews of the climate–conflict literature concerns a need for increasing methodological diversity and rigor. This research priority has multiple dimensions and, at the core, applies as much to the wider research field as to any individual study, given inherent complexities of combining diverse research methods and epistemologies within a single integrated analytical framework. Common calls include (1) application of mixed-methods research designs, (2) in-depth analysis of influential data points to trace the causal processes at play and to (3) triangulate and validate findings from the quantitative empirical literature, as well as (4) out-of-sample prediction to evaluate the generalizability of particular results and to explore long-term implications of alternative scenarios.

Calls for methodological diversity in the study of complex policy issues are such a fixture that you have to wonder why new research doesn’t begin with that observation and follow through. Instead, the calls are a repeated finding to be dealt with at some later point.

–One reason is that no such research could be funded or undertaken by researchers working as singletons. Not only would you need that imagined interdisciplinary team with its long-term commitment, you’d need the funding of large foundations or government agencies that are worrying about other things.

What could be more worrisome, you ask, than complex issues of climate change and conflict? Foundations and government agencies suspect, if they do not already know, that major research programs routinely identify more questions than answers. “It turns out we’re not even asking the right questions. . .,” so goes the key finding.

So, what’s the upshot? Are there useful things we can do now?


Experience tells us there are at least five upshots right in front of us but often not seen:

1. Don’t forget the big-five prism.

People’s perceptions of a complex policy problem vary by age, education, income/class, gender, and race/ethnicity. Of course, the categories are socially constructed (e.g., some governments do not gather data by “race”). But they are meaningful precisely because of that. Other factors, like sexual orientation or language, are as important, if not more so, given how differentiated contexts are.

You can’t assume your audience and even other policy types appreciate the importance of these demographic filters.

2. The status quo is always an alternative, just as are better practices developed elsewhere and modified for the case at hand.

It’s too often said that “because the status quo is untenable, we must find an alternative.” Actually, “maintain the status quo” is among the alternatives to be evaluated.

The status quo is “business as usual,” not the “do nothing” option. Under the status quo (e.g., the agency continues doing what it is already doing), one question is whether the activities already underway could eventually achieve the ends sought. This is important because of that other probability–not just possibility–that implementing a new option leads to conditions worse than the status quo.

Also, it would be astonishing if, on a planet of 7.5+ billion persons, people elsewhere were not thinking about the complex issue in question or had not already moved on to practices that deal with it or like issues.

In short, if you are searching for a radical alternative to the status quo, first satisfy yourself that there aren’t status quos already radicalized elsewhere and modifiable for your purposes.

3. Some complex policy problems are complex because not everything is in a trade-off.

Just as with “risk,” “trade-off” has become such a naturalized term of policy-talk that people ask right off, “Well, what are the risks and trade-offs involved?”

But talk of trade-offs is premature for a number of hard issues. Infrastructure high reliability assumes a theory of nonfungibility: nothing can substitute for the high reliability and safety without which there would be no markets for goods and services, at least not for the allocative decisions at hand right now. Economics, in contrast, is a theory of substitutability, where goods and services have alternatives in the marketplace.

4. Evaluations of complex policy interventions find mixed results but less frequently identify trade-offs over “what’s enough?”

Because policy analysis has been from its inception an interdisciplinary profession, it is also multi-criteria for the purposes of assessing options before implementation and evaluating results afterwards.

The more criteria against which options are evaluated–efficiency (benefits over costs), cost-effectiveness (e.g., largest benefits for a given cost or budget), political feasibility, administrative feasibility, legality, and others (e.g., equity, sustainability. . .)–the less likely straightforward success becomes. The common response has been to reduce the number of criteria or to insist that some–efficiency and cost-effectiveness, most notably–take priority.

Yet a very different reaction to typically mixed results is to insist that, where trade-offs do exist, they are about having enough of each. What’s more, the moment you admit questions of “what is enough?” into decisionmaking, feasibility criteria rapidly focus on: Which alternative, if implemented, can keep decisionmaker options open for unpredictable changes ahead?

5. Nothing is implemented as planned (but often not for reasons you think).

Hardly news, the reasons given for the gap between what’s planned and what’s implemented typically refer to politics, dollars and jerks. Even where that is so, the statement needs to be pushed further, given conditions as differentiated as those of complex issues.

A fuller explanation for the shortfall is that policy formulation is usually based on cause-and-effect analysis, while implementation is usually undertaken in terms of means-and-ends considerations. The gap to be worried about is not so much between plan and implementation as it is between cause-and-effect thinking and means-and-ends thinking.

People on the ground implementing don’t see themselves as “the effects” of “external causes.” They hold themselves to be actually existing human beings with really existing goals requiring real means to achieve them. This is also why experience with implementation and operations is so important: We can never assume that what has been analyzed or predicted in terms of cause and effect will get implemented in terms of means and ends.


–A concluding point about that “experience with implementation.” More experience does not mean less inexperience with complexity.

To repeat earlier entries, the more experience with complexity we have, the more aware we are of how inexperienced we remain and of new difficulties ahead. As a wit would have it, such is peer-review by reality. Always having new questions to ask is only an epiphenomenon of persisting inexperience and difficulty.

Principal sources

von Uexkull, N. and H. Buhaug (2021). “Security implications of climate change: A decade of scientific progress.” Journal of Peace Research 58(1): 3–17.

Previous blog entries: “What am I missing?,” “Poverty and war,” “Some answers,” “Short and not sweet,” “Inexperience and central banks,” “Difficulty at risk and unequal”
