Misplaced fears of an ‘evil’ ChatGPT obscure the real harm being done

Today’s Observer column:

Our tendency to humanise large language models and AI is daft – let’s worry about corporate grabs and environmental damage.

How can we make sense of all this craziness? A good place to start is to wean people off their seemingly incurable desire to interpret machines in anthropocentric ways. Ever since Joe Weizenbaum’s Eliza, humans interacting with chatbots seem to want to humanise the computer. This was absurd with Eliza, which was simply running a script written by its creator; it’s perhaps more understandable that humans now interacting with ChatGPT – which can apparently respond intelligently to human input – should fall into the same trap. But it’s still daft.

The persistent rebadging of LLMs as “AI” doesn’t help, either. These machines are certainly artificial, but to regard them as “intelligent” seems to me to require a pretty impoverished conception of intelligence…

Read on…