Wednesday 20 December, 2023

Bagel-Land

I’ve never liked bagels. On the other hand, I’ve never seen ones like these. Still, I gave them a miss.

Seen in central London, last week.


Quote of the Day

From Politico:

“British Foreign Secretary David Cameron and German Foreign Minister Annalena Baerbock on Sunday called for a “sustainable cease-fire” in the Middle East, lamenting that “too many civilians have been killed” in the Israel-Hamas war.”

What, then, one wonders, is the correct number of civilians to be killed?


Musical alternative to the morning’s radio news

Ralph Vaughan Williams | Fantasia on a Theme by Thomas Tallis | Neville Marriner and the Academy of St Martin in the Fields

Link

I’ve always loved this, but it was especially welcome yesterday as a respite from a dank, dismal, East Anglian winter’s day.


Long Read of the Day

John Quiggin: Training my replacement?

Like everyone else, John Quiggin is interested in Large Language Models.

But first,

I have an urgent article to write, so of course I’m irresistibly moved to do anything else. Following the precepts of creative procrastination, I’ve dealt with a bunch of administrative tasks, done some chores and resisted the urge to dive into social media (until now!). Having done all that, I decided to check on progress in Large Language Models.

What I’ve been interested in since the sudden rise of LLM is whether I could use it to turn out pieces in my own style, recycling and paraphrasing some of the millions of words I’ve typed over my career (my target of 500 words per working day would imply 4 or 5 million in the corpus, not counting blog posts and comments, snarky tweets and who knows how many emails).

We’re not quite there yet, but getting closer. I asked ChatGPT to “Write a critique of SMRs [Small Modular Reactors] in the style of John Quiggin”.

Read on to see what ChatGPT produced and what Quiggin made of it. And savour the graphic ChatGPT produced in response to the prompt: “produce an image of John Quiggin with his brain hooked up to a computer connected in turn to a printer spooling paper. Style dramatic and futuristic, with a comic element”.

Since DALL-E doesn’t do real people anymore, it went for a generic academic instead. Doesn’t look a bit like Quiggin — or me either, for that matter.


Books, etc.

Dylan Thomas: Notes on the Art of Poetry

I could never have dreamt that there were such goings-on
in the world between the covers of books,
such sandstorms and ice blasts of words,
such staggering peace, such enormous laughter,
such and so many blinding bright lights,
splashing all over the pages
in a million bits and pieces
all of which were words, words, words,
and each of which were alive forever
in its own delight and glory and oddity and light.


My commonplace booklet

“A Ball of Brain Cells on a Chip Can Learn Simple Speech Recognition and Math”

When I first read this I was deeply suspicious. But then I read the Abstract of the paper in Nature Electronics that describes the research. It’s intriguing.

Brain-inspired computing hardware aims to emulate the structure and working principles of the brain and could be used to address current limitations in artificial intelligence technologies. However, brain-inspired silicon chips are still limited in their ability to fully mimic brain function as most examples are built on digital electronic principles. Here we report an artificial intelligence hardware approach that uses adaptive reservoir computation of biological neural networks in a brain organoid. In this approach—which is termed Brainoware—computation is performed by sending and receiving information from the brain organoid using a high-density multielectrode array. By applying spatiotemporal electrical stimulation, nonlinear dynamics and fading memory properties are achieved, as well as unsupervised learning from training data by reshaping the organoid functional connectivity. We illustrate the practical potential of this technique by using it for speech recognition and nonlinear equation prediction in a reservoir computing framework.

A very different approach to ‘AI’.
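For the curious: the “reservoir computing” framework the paper borrows is usually demonstrated in silicon with an echo state network. The sketch below is illustrative only (sizes, seeds and the toy sine-prediction task are my choices, not the paper’s) — a fixed random recurrent network supplies the nonlinear dynamics and fading memory, and only a linear readout is trained, which is roughly the role the organoid plays in Brainoware.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random "reservoir": these weights are never trained.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Scale so the spectral radius is < 1, giving the "fading memory" property.
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)  # nonlinear dynamics
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t)
X = run(u[:-1])          # reservoir states, shape (399, n_res)
y = u[1:]                # next-step targets

# Only this linear readout is trained (ridge regression).
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("mean squared error:", float(np.mean((pred - y) ** 2)))
```

In Brainoware, the fixed silicon reservoir above is replaced by the organoid (stimulated and read through the electrode array), but the division of labour — rich untrained dynamics plus a simple trained readout — is the same.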


Linkblog

Something I noticed, while drinking from the Internet firehose.

Google’s blooper

Google announced Gemini, its supposed rival to GPT-4. It launched it with this impressive video.

Link

It’s really interesting, isn’t it?

Yes, but there’s a problem… As Bloomberg’s Parmy Olson put it:

“In reality, the demo also wasn’t carried out in real time or in voice. When asked about the video by Bloomberg Opinion, a Google spokesperson said it was made by “using still image frames from the footage, and prompting via text,” and they pointed to a site showing how others could interact with Gemini with photos of their hands, or of drawings or other objects. In other words, the voice in the demo was reading out human-made prompts they’d made to Gemini, and showing them still images. That’s quite different from what Google seemed to be suggesting: that a person could have a smooth voice conversation with Gemini as it watched and responded in real time to the world around it.”

Talk about shooting yourself in both feet.

It’s truly weird. As the ever-astute Ben Thompson observed:

“Google, given its long-term advantages in this space, would have been much better served in being transparent, particularly since it suddenly finds itself with a trustworthiness advantage relative to Microsoft and OpenAI. The goal for the company should be demonstrating competitiveness and competence; a fake demo did the opposite.”


This Blog is also available as an email three days a week. If you think that might suit you better, why not subscribe? One email on Mondays, Wednesdays and Fridays delivered to your inbox at 6am UK time. It’s free, and you can always unsubscribe if you conclude your inbox is full enough already!