Friday 6 January, 2023

Remembering Jim and Helen

In 1956 Jim and Helen Ede bought four tiny cottages in Kettle’s Yard in central Cambridge and transformed them into an idiosyncratic and charming house, and into a place to display Jim’s collection of early 20th-century art. They kept ‘open house’ each afternoon, giving any visitors, particularly students, a personal tour of the collection. Eventually they gave the house and collection to the University of Cambridge, and in 1973 they moved to Edinburgh. Their house became the seed-crystal for a lovely museum, which is one of the joys of living in Cambridge.

Quote of the Day

“There are two modes of transport in Los Angeles: car and ambulance. Visitors who wish to remain inconspicuous are advised to choose the latter.”

  • Fran Lebowitz

Musical alternative to the morning’s radio news

Bob Dylan | Like a Rolling Stone


Long Read of the Day

ChatGPT is a bullshit generator. But it can still be amazingly useful

Entertaining piece on the obsession du jour by Arvind Narayanan and Sayash Kapoor. And apparently they wrote it all by themselves.

The philosopher Harry Frankfurt defined bullshit as speech that is intended to persuade without regard for the truth. By this measure, OpenAI’s new chatbot ChatGPT is the greatest bullshitter ever. Large Language Models (LLMs) are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic. But OpenAI is clear that there is no source of truth during training. That means that using ChatGPT in its current form would be a bad idea for applications like education or answering health questions. Even though the bot often gives excellent answers, sometimes it fails badly. And it’s always convincing, so it’s hard to tell the difference.

Yet, there are three kinds of tasks for which ChatGPT and other LLMs can be extremely useful, despite their inability to discern truth in general:

  • Tasks where it’s easy for the user to check if the bot’s answer is correct, such as debugging help.

  • Tasks where truth is irrelevant, such as writing fiction.

  • Tasks for which there does in fact exist a subset of the training data that acts as a source of truth, such as language translation…

Hope you enjoy it. I did. But then I have to write about this stuff, so maybe I have strange interests.

My commonplace booklet

One of the many delights of receiving an email from Cory Doctorow (Whom God Preserve) is the set of disclaimers at its foot. For example:

For avoidance of doubt: This email does not constitute permission to add me to your mailing list.

READ CAREFULLY. By reading this email, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies (“BOGUS AGREEMENTS”) that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

As is the case with every email you’ve ever received, this email has not been scanned for all known viruses.

He also sometimes warns that,


Dunce’s Corner

Many thanks to the numerous readers who spotted unforced error #12,346: My claim yesterday that the Speaker of the House of Representatives was first in line for the Presidency in the event of the death of a sitting president — a heartbeat away, as I put it. Well, actually, as every schoolboy (and girl) knows, the Speaker is second in line, after the Vice-President. Two heartbeats, rather than one. Apologies.

This Blog is also available as a daily email. If you think that might suit you better, why not subscribe? One email a day, Monday through Friday, delivered to your inbox. It’s free, and you can always unsubscribe if you conclude your inbox is full enough already!