On the road

Walking along this road in the Peak District on Saturday I was stopped in my tracks by a Skylark’s song. Which explains today’s Musical alternative.
Quote of the Day
“I am lucky to have participated in conversations about the future of AI with executives and builders at frontier labs, economists at AI conferences, AI investors, and other bigwigs at off-the-record dinners where important truths can theoretically be bandied about without risk. And if I had to pick three words to summarize this collective expert view of the future, I could not in a million years, or with a trillion tokens, find three words more suitable than these: Nobody knows anything.”
Musical alternative to the morning’s radio news
Ralph Vaughan Williams | The Lark Ascending | Iona Brown & Neville Marriner | Academy of St Martin in the Fields.
It was inspired by an 1881 poem of the same title by George Meredith. Having read it a couple of times, I think the music wins by a mile.
Long Read of the Day
AI vs. the Pentagon
I guess that most people will regard what’s going on between the American Secretary for War (née Defense) and the AI firm Anthropic as pretty arcane. But it’s actually really important and worrying, and I was looking for someone who could put it in a context that would make it understandable. I found this long blog post by Jasmine Sun which admirably fits the bill.
Who would win in a fight: an alcoholic Fox host with a fetish for extrajudicial airstrikes, or a neurotic Italian-American physicist running an AI company worth $380 billion?
I’ll start with a TL;DR of everything that’s happened. The whole thing plays out like a TV thriller, and I don’t blame anyone not keeping up. (Fellow situation monitorers, feel free to skip ahead to the analysis if you like.)
In July last year, Anthropic signed a $200 million contract with the Pentagon to provide access to Claude. Until recently, Anthropic was the only leading AI lab whose services could be used on classified networks. The company was eager to cooperate with the US military, even partnering with Palantir. But when Claude was used for the January capture of Nicolas Maduro, that allegedly miffed an employee inside Anthropic, which got leaked back to the Pentagon. A pissed-off Pete Hegseth wanted to make super sure that Anthropic was down for anything he wanted, citing “all lawful uses”—which under US military law, means basically whatever. And that was where things got messy.
The thing is, Anthropic’s original DoW contract included two exceptions for military use: their AI could not be used for domestic mass surveillance or fully autonomous weapons. But Hegseth ignored this, demanding that the Pentagon retain full discretion over how they use Claude. When Anthropic said no, he threatened to designate Anthropic a “supply chain risk”: a highest-tier national security designation usually reserved for companies like Huawei run by foreign adversaries. (Even Tencent and DeepSeek are not tarred with this label.) Anthropic was given a strict Friday 5pm deadline to comply with the DoW’s request.
Days passed while Hegseth’s ultimatum hung in the air. Then, on Thursday, Dario Amodei published a statement: “These threats do not change our position: we cannot in good conscience accede to their request.” The AI community praised his courage. For a moment, there was celebration.
Well, Secretary Hegseth was not bluffing. He moved ahead with designating Anthropic a supply chain risk. In a long and dumb tweet, he calls the company’s behavior a “master class in arrogance and betrayal” and “a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.” (He also uses the phrase “defective altruism,” which I must admit is pretty good.)
But the implications are severe…
They sure are — for democracy, human safety and security. Do read on.
Footnote: For other worthwhile takes on the crisis, see Henry Farrell on why the tech industry should fear this precedent and Jack Shanahan on what makes this different.
The Intention economy
My Observer column of 20 February.
Did the advent of chatbots and LLMs (large language models) herald the demise of the attention economy? And, if so, what might replace it?
The most interesting answer to that question I’ve seen comes in a paper by two Cambridge researchers, Yaqub Chaudhary and Jonnie Penn, in the Harvard Data Science Review. Their thesis is that we are at the dawn of a “lucrative yet troubling new marketplace for digital signals of intent”, from buying cinema tickets to voting for political candidates.
They call this the “intention economy”: a marketplace for behavioural and psychological data that signals human intent. It goes beyond capturing attention, to capturing what users want and “what they want to want”, and operates through natural language interfaces powered by LLMs…
Do read the whole piece.
pdf version here
My commonplace booklet
From Azeem Azhar:
Defence Secretary Pete Hegseth designated Anthropic a national security supply chain risk, effectively barring federal contractors from using its technology in government work. Hours later, Trump directed every federal agency to follow suit. No Chinese AI company has received the same designation. It’s quite an astonishing sucker-punch.
The proximate cause was Anthropic’s refusal to lift all safety constraints on military use of Claude, around autonomous targeting and AI-assisted mass surveillance. These aren’t unreasonable positions; they reflect genuine technical concerns about where AI capability ends and unacceptable risk begins. But the punishment for holding them was disproportionate, a tool designed for compromised semiconductor supply chains and foreign hardware manufacturers, repurposed to punish an American AI lab.
This seems to me a very good reason for supporting Anthropic — and for using (and paying for) Claude.ai — which I’ve done almost from the outset.
This Blog is also available as an email three days a week. If you think that might suit you better, why not subscribe? One email on Mondays, Wednesdays and Fridays delivered to your inbox at 5am UK time. It’s free, and you can always unsubscribe if you conclude your inbox is full enough already!