In the Guardian’s online Bookmarks today, the science fiction author M. John Harrison answers the following question:

How do you feel about the emergence of AI?
https://www.theguardian.com/books/2023/may/20/m-john-harrison-i-want-to-be-the-first-human-to-imitate-chatgpt-wish-i-was-here?utm_term=6469c17386bac9427580944744a8948a&utm_campaign=Bookmarks&utm_source=esp&utm_medium=Email&CMP=bookmarks_email
I’d separate the thing itself from the boosterism around it. We’re at a familiar point on the curve when it comes to the overenthusiastic selling of new scientific ideas, where one discovery or tech variant is going to solve all our problems. I’d say wait and see. Meanwhile I’ll be plotting to outwrite it; I want to be the first human being to imitate ChatGPT perfectly. I bet you it’s already got mimickable traits.
I very much like the last part of that answer. A neat turning of the tables that cleverly avoids classic one-upmanship.
–But there is a narrative discrepancy in that ambition, namely: to mimic ChatGPT perfectly. Current research indicates that large language models, including ChatGPT, produce unexpected mistakes: “LLMs exhibit unpredictable failure modes,” as a recent comparison study phrased it.
Now, to err is human, and we commit all manner of mistakes. So in that sense a mistaken ChatGPT is already mimicking, and mimicked by, humans.
–But it’s Harrison’s “perfectly” that should stop us.
Perfectly with respect to what? Perfectly with respect to how a black-box, large-data algorithm gets an answer wrong? That can’t be the standard, for at least two reasons. First, there is all the stuff about our brain having 86 billion neurons and being vastly more complex than said algorithm. Second, black box means black box when it matters, right now: we just don’t know how the current mistake came about, though LLM developers do have a standing remedy: more training of the LLMs.
–Harrison, though, is onto something very important, I think: What’s better than humans mimicking ChatGPT perfectly?
It would be humans managing (not just designing) a ChatGPT that reliably avoids mistakes it cannot prevent on its own. That is, ChatGPT would be managed in real time, so that unavoidable mistakes of its own making are corrected before the answer is delivered. The effect of such management is that the LLM makes these corrections just in time, even at its current level of training. Now that wouldn’t be science fiction!
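That “managed in real time” idea can be pictured as a check-then-revise loop wrapped around the model, so a draft answer is inspected and corrected before delivery. The sketch below is purely illustrative: `draft_answer`, `check`, and `revise` are hypothetical stand-ins, not any real ChatGPT API.

```python
# A minimal sketch of managed answer delivery: the model's draft is checked,
# and corrected if needed, before the user ever sees it.
# All function bodies are hypothetical stand-ins for real components.

def draft_answer(question: str) -> str:
    """Stand-in for an LLM call; returns a deliberately flawed draft."""
    return f"The answer to {question!r} is 5, because 2 + 2 = 5."

def check(answer: str) -> list[str]:
    """Stand-in for a real-time checker; returns detected problems."""
    problems = []
    if "2 + 2 = 5" in answer:
        problems.append("arithmetic error: 2 + 2 = 5")
    return problems

def revise(answer: str, problems: list[str]) -> str:
    """Stand-in for a correction pass guided by the checker's findings."""
    return answer.replace("5", "4")

def managed_answer(question: str, max_rounds: int = 3) -> str:
    """Correct unavoidable mistakes just in time, before answer delivery."""
    answer = draft_answer(question)
    for _ in range(max_rounds):
        problems = check(answer)
        if not problems:
            break  # nothing left to fix; deliver the answer
        answer = revise(answer, problems)
    return answer

print(managed_answer("what is 2 + 2?"))
```

The point of the loop is that the correction happens inside the delivery pipeline, with the model as it already is, rather than by retraining the model.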