J.S. Cruz

AI and our language

With the appearance of ChatGPT, there’s been renewed mainstream interest in natural language processing and the products built on it.

Everywhere you look, you see the same terms: “the AI decides”, “the AI thinks”, “what if AI wants”, “how does AI understand”, “AI goes rogue”, and so on.

The reason for this is not hard to understand. AI with human characteristics has been entrenched in our collective imagination of what AI is — from written science fiction to romantic movies — since the very beginning of the field, starting with the Turing test, in which a human performs a very human task with a program: “conversing” with it.

Maybe “human” AI was at the core of primordial AI; however, the field as it is now is something very different: under all the algorithms, be they simple or very advanced, there is simply pattern recognition and synthesis — generating new instances that fit a previously learned pattern.

Although the real field is completely different from what it was at the beginning, we are somehow stuck with the same language: “what if AI doesn’t like us”, “how should we talk to AI”.

This anthropomorphising language is, I think, the single biggest impediment to modern AI progress and research, precisely because it shapes the conversation around “which bounds should be pushed”. Note that, to the best of my knowledge, we don’t have this problem with any other category of software, even when its impact potential is as great as that of current AI. Should we limit progress in SQL query optimizers because the abuse potential of fast databases is too great?

Use of anthropomorphising language is a very basic domain error. I think it’s obvious, but let’s make it very explicit: a computer program does not think, does not converse, does not understand, does not want anything. We use this language because it’s a handy descriptor for what we observe when Bing outputs “I have been a good Bing 😀”, but it leads us down completely the wrong path when we try to understand what is happening behind the scenes, which is just text generation to fit a pattern, where the selected text has been calculated to maximize some function.
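To make that concrete, here is a deliberately toy sketch (not any real model’s internals; the score table is invented for illustration): “generation” is nothing more than repeatedly selecting whichever continuation scores highest under a table of learned pattern statistics.

```python
# Hypothetical bigram "pattern" table: scores[prev][nxt] is how
# strongly `nxt` tends to follow `prev` in some training corpus.
scores = {
    "I":    {"have": 2.0, "am": 1.5},
    "have": {"been": 2.2, "a": 0.4},
    "been": {"a": 1.9},
    "a":    {"good": 1.7},
    "good": {"Bing": 2.5},
}

def generate(start, steps):
    """Greedily pick the highest-scoring next token at each step."""
    out = [start]
    for _ in range(steps):
        options = scores.get(out[-1])
        if not options:
            break
        # The "decision" is just an argmax over calculated scores.
        out.append(max(options, key=options.get))
    return " ".join(out)

print(generate("I", 5))  # prints "I have been a good Bing"
```

There is no wanting or understanding anywhere in this loop, only arithmetic over a learned table; real systems replace the toy table with billions of parameters, but the character of the computation is the same.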

The proper verbs to use are “calculate” or “compute”, because this is all a computer (and, by extension, any computer program) does, and all it can do, no matter how much the text it outputs tugs at our heartstrings.

Even the field of AI ethics is a misnomer: AI doesn’t have ethics, any more than a hammer or a vacuum cleaner does. Humans have ethics, and what needs to be discussed is what we do with the tools we have. Stopping research into AI, or banning algorithms (a short step from there), because they “can be misused” is an argument that applies to every single technological development and, despite that, we’ve been alright. Let’s instead limit the application, by humans, of this tool in particularly egregious domains; I can start with some low-hanging fruit: ban its use in city-wide facial recognition software.

Using this language for its convenience does more harm than good, especially when the listener is non-technical or not versed in the field. The end result is that you form in other people a mental model with no relation to reality, which leads to very interesting Time op-eds.

Tags: #ai #philosophy