I
People seeking to control uncertain task environments pay a “control premium,” analogous to the poverty premium (where poor people must pay more for key services, such as insurance, credit, energy, and shelter). An exclusive focus on control strategies costs them, and us, more than would be the case were they able to manage in the face of uncertainty. It is as if they were demanding money with menaces from us.
II
Here’s a different analogy to reinforce the point.
Compare algorithmic decision-making (ADM) and the current technology for gene editing known by the acronym CRISPR. When it comes to ADM, the worry is that we don’t know how the algorithm works. What, we ask, is happening because of the cultural biases imported into the algorithm via the original data? As for CRISPR, the worry is that, even when we know that this rather than that gene is being edited, we’re still not sure it’s the right thing to do.
Suppose we had a CRISPR for ADM: We could go into the algorithm and excise cultural bias. But even then we would have to worry that, for example, what is bias to some is not bias to others. For that matter, is there any doubt that a new mechanism promising greater control over one worry will produce another worry, equally if not more important?
The upshot: Control cannot answer the questions control poses.
III
So what?
By way of answer: it is hard to believe that all the talk about artificial intelligence (AI) “controlling” behavior will not have to become far more nuanced and contextualized for policy and management.
Consider underwater oil and gas exploration, which is already highly automated. Alarms produced by autonomous systems during turbulent seas have turned out to be false alarms. Indeed, and this is the point, operating at a higher level of autonomy while having to cope with recurring false alarms may no longer permit the real-time operators to revert, just in time, to lower levels of autonomy, e.g., to managing (not “controlling”) via more manual operations when nothing else works.