“Soon, no human will know the answer”: AI ethics


A good friend wasn’t trying to be provocative when he argued that a clear sign a field had lost its energy was when its discussions were overtaken by ethics. If it’s energy you’re looking for, he went on, look to its edges, where other fields compete with it. His example was Herbert Simon’s move into artificial intelligence.

So, as a thought experiment, let’s ask: With all this attention to AI ethics, is AI actually a moribund field in ways not yet supposed?

As ethicists are also talking about sub-fields like machine learning (ML) and algorithmic decision-making (ADM), are these moribund in ways we (that is, those of us who become instant experts in AI by reading the secondary literature) do not comprehend?


For example, the rapid obsolescence of software and equipment used in ML and ADM is a topic that, at least to this point (and I stand to be corrected), hasn’t been given as much attention as readers might expect. To my mind, this topic is more important than transparency or fairness, since obsolescence changes the “with-respect-to-whats” of the latter.

So what? Just what analytic purchase do we get parsing AI ethics through the lens of obsolescence?


Well, one thing you get is a track record. Here is W. Daniel Hillis, computer scientist and inventor, writing in 2010:

I want to be clear that I am not complaining about technical ignorance. In an Internet-connected world, it is almost impossible to keep track of how systems actually function. Your telephone conversation may be delivered over analog lines one day and by the Internet the next. Your airplane route may be chosen by a computer or a human being, or (most likely) some combination of both. Don’t bother asking, because any answer you get is likely to be wrong.

Soon, no human will know the answer.


Exactly the kind of not-knowing that AI portends has been going on for years.

What, then, is the record of all this and other such software being replaced or upgraded? Is it that the software was no longer working, or that something better came along, or both, or something else altogether? In short: How could studying this track record not contribute to really-existing AI ethics?
