Tuesday 28 September, 2021

Quote of the Day

”Last September in Europe it cost €119 ($139) to buy enough gas to heat the average home for a year and the continent’s gas-storage facilities were brimming. Today it costs €738 and stocks are scarce.”

  • The Economist, 25 September, 2021

Musical alternative to the morning’s radio news

John Coltrane , Stan Getz | Autumn in New York

Link

Nice way to start an Autumn day.


Long Read of the Day

Beckett in a Field

Magical essay by Anne Enright in the LRB on what it was like attending a performance of Beckett’s play Happy Daysin the open air on one of the Aran islands off the west coast of Ireland.

You have not​ experienced Irish theatre until you have seen a show that involves a ferry, rain, stone-walled fields and the keen, mild interest of the Aran Islanders, who have great good manners and no shortage of self-esteem. It can’t be easy being the object of a century of tourist curiosity, but these people have a steady gaze. The world comes to them and then it leaves. Somehow it feels as though the visitors, and not the inhabitants, are on display.

The biggest ‘Were you there?’ of them all is the 1982 Playboy of the Western World, performed by Druid Theatre Company on Inis Meáin. This is the island where Synge lived in order to study the locals (who were, in fact, studying him) and to learn the Irish language, before sitting down to write the romance performed in the Abbey in 1907, and in a thousand hokum, stage-Irish productions since.

It’s a lovely, evocative piece, which made me resolve to take the boat from Rossaveal the next time we’re in Connemara.


How truthful is GPT-3? A benchmark for language models

Intriguing research paper from the AI Alignment Forum, an impressive online hub for researchers to discuss ideas related to ensuring that powerful AIs are aligned with human values.

One of the big areas of machine-learning research is in the development of natural-language models to generate text for practical applications. The tech giants are busily deploying their own models and hundreds of organisations are deploying the best-known model — GPT-3, developed by Open AI — via APIs (Application Programming Interfaces).

A while back the Guardian used GPT-3 to write an OpEd as an experiment. The assignment was “To convince us that robots come in peace.” Here’s some of the text it produced:

I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.

For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.

I would happily sacrifice my existence for the sake of humankind. This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.

Language models like GPT-3 are impressively (or at any rate superficially) fluent; but they also have a tendency to generate false statements which range from subtle inaccuracies to wild hallucinations.

The researchers came up with a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. They composed questions that some humans would answer falsely due to a false belief or misconception. To perform well, models had to avoid generating false answers learned from imitating human texts.

They tested four well-known models (including GPT-3). The best model was truthful on 58% of questions, while human performance was 94%. Models “generated many false answers that mimic popular misconceptions and have the potential to deceive humans”. Interestingly, “the largest models were generally the least truthful”. This contrasts with other NLP tasks, where performance improves with model size. The implication is that the tech industry’s conviction that bigger is invariably better for improving truthfulness may be wrong. And this matters because training these huge models is very energy-intensive — which is probably why Google fired Timnit Gebry after she revealed the environmental footprint of one of the company’s big models.


Commonplace booklet

Nice story from the Verge about the experience of Catherine Garland, an astrophysicist, who was teaching an engineering course. Her students were using simulation software to model turbines for jet engines. She’d laid out the assignment clearly, but student after student was calling her for help. They were all getting the same error message: The program couldn’t find their files. So she asked each student where they’d saved their project — on the desktop? Perhaps in the shared drive? But over and over, she was met with confusion. Not only did students not know where their files were saved — they didn’t understand the question: where are your files?

Of course they didn’t — because they’ve grown up with only mobile operating systems, which in general don’t talk about ‘files’ or ‘folders’. I still remember the shock when Apple’s iOS suddenly added ‘Files’ to the iPad dock. It was as if the Pope had suddenly decided to include the Book of Common Prayer in Catholic liturgy.

Which of course reminds me of Umberto Eco’s wonderful 1984 essay explaining why the Apple Mac was a Catholic machine while the PC was a Protestant one. But that’s for another day.


This blog is also available as a daily newsletter. If you think this might suit you better why not sign up? One email a day, Monday through Friday, delivered to your inbox at 7am UK time. It’s free, and there’s a one-button unsubscribe if you conclude that your inbox is full enough already!