No such thing as a free lunch
Seen on a street in Brighton in March 2012.
Quote of the Day
“Reality has no sense of plot, timing or strategy. It just goes on.”
- Jack Watling in Statecraft: The New Rules of Power in a Divided World
Musical alternative to the morning’s radio news
Charles Brown | Driftin’ Blues
Long Read of the Day
Mythos and the mispricing of everything
The AI firm Anthropic has developed a new model called Mythos, but they haven’t released it to the public. Why? Because Mythos is alarmingly good at hacking. Anthropic claims that its cyber-security research abilities are strong enough that they need to give the software industry as a whole time to prepare.
Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely.
This perceptive commentary by Azeem Azhar and Greg Williams ponders some of the implications of the new AI tool that Anthropic has created. One of them is that the insurance firms that provide cover against hacking will have to conduct an urgent review of the way they estimate the risk of a successful and damaging attack.
The most significant and immediate implication of Mythos is on how we price risk. Critical infrastructure is the asset class that is most exposed and is systemically mispriced. The US utilities sector alone has a $1.5 trillion market capitalization, priced at roughly 22x PE – with cyber risk treated as an incremental operational cost rather than a structural exposure. That frame no longer works in a world where an AI agent can autonomously construct working exploits overnight.
My first thought when the story broke was that Anthropic might be hyping up the significance of their new model by implying that it was too dangerous to release. (We’ve seen that stunt before with early versions of OpenAI’s models.) But Simon Willison, my go-to guru in these matters, doesn’t think it’s a stunt. “There’s enough smoke here,” he writes, “that I believe there’s a fire.”
It’s not surprising to find vulnerabilities in decades-old software, especially given that they’re mostly written in C, but what’s new is that coding agents run by the latest frontier LLMs are proving tirelessly capable at digging up these issues.
I actually thought to myself on Friday that this sounded like an industry-wide reckoning in the making, and that it might warrant a huge investment of time and money to get ahead of the inevitable barrage of vulnerabilities.
For “might” read “will”. This is a big deal.
In polarised times, AI may be the centrist the world needs
Given the existence of what Henry Farrell calls “the AI Fight Club”, I expect that yesterday’s Observer column will get me into trouble with everyone, and especially with those who are critical of AI in its present form. But what the hell: columnists are paid to have opinions.
Technology is not the only reason for the increasing polarisation of democracies. The deeper problem is that these societies have for several decades failed to deliver the shared prosperity on which social cohesion depends. As programmers say: inequality is a feature of the neoliberal state, not a bug. The system has been good for some, but countless others have been “left behind”.
Social media gives them a voice that is then amplified by tabloid media. So the pool of people producing and broadcasting information has dramatically expanded – as has the range of views and narratives to which people are exposed. The result is a perfect storm of misinformation.
In 2025, John Burn-Murdoch, the Financial Times’s data wizard, did an analysis that showed extreme political views and narratives are overrepresented on social media compared with traditional media and cable TV. “Whereas traditional media catered to a range of views,” he reported, “with moderate positions well represented, extreme views – of both left and right – are heavily overrepresented on social media.”
Last month, he did a fascinating experiment that built on the earlier study…
My commonplace booklet
Sheila Hayman (Whom God Preserve) was moved to email after reading Margaret Heffernan’s essay (“Use it or Lose it”) on not outsourcing important cognitive tasks to AI.
Sheet music downloads, as you probably know, are usually delivered as PDFs. When I opened the second fiddle part of the Bruch violin concerto, Adobe’s bot popped up and asked brightly: ‘This appears to be a long document. Would you like me to summarise it for you?’ Curious, I agreed and waited, intrigued to know whether it would indeed be able to reduce one of the glories of the repertoire to a few notes.
After a few minutes of furious exertion, it confessed: ‘I’m sorry, this appears to be in a language I don’t recognise.’ A relief on one level, but then I thought, actually, a lot of music – Beethoven’s in particular, as that’s what I’m working on at the moment – is, in effect, an embroidery on a few notes (I attach the first appearance of the famous First Four Notes of the 5th Symphony which happen to have been sent to me this morning and out of which, I’m told by cleverer people, the entire symphony is spun.)
So that seemed to me a very concise summary of these LLMs: they can’t necessarily do all that we can, but because they don’t know their limits – like children, in a way – they may accidentally come up with ideas that I might not have had on my own…
Sheila’s documentary on Fanny Mendelssohn is terrific. Great to learn that she’s now got Beethoven in her sights.
This Blog is also available as an email three days a week. If you think that might suit you better, why not subscribe? One email on Mondays, Wednesdays and Fridays delivered to your inbox at 5am UK time. It’s free, and you can always unsubscribe if you conclude your inbox is full enough already!