If you don’t know Sharon Shannon’s musicianship, then (this blogger respectfully suggests) maybe it’s time you did.
Long Read of the Day
“A damn stupid thing to do”—the origins of C
An abridged — but to geeks fascinating — history by Richard Jensen of the evolution of the C programming language. Not for everyone, but if you’re interested in the history of computing, it’s gold dust. And eminently readable. I guess it helps if you’re lucky enough to know some of the people in the story (which I do). And I still have my copy of the beautiful Kernighan and Ritchie paperback introduction to the language.
Mr. le Carré’s own youthful experience as a British agent, along with his thorough field research as a writer, gave his novels the stamp of authority. But he used reality as a starting-off point to create an indelible fictional world.
In his books, the Secret Intelligence Service, otherwise known as M.I.6., was the “Circus,” agents were “joes,” operations involving seduction were “honeytraps” and agents deeply embedded inside the enemy were “moles,” a word he is credited with bringing into wide use if not inventing it. Such expressions were taken up by real British spies to describe their work, much as the Mafia absorbed the language of “The Godfather” into their mythology.
“As much as in Tolkien, Wodehouse, Chandler or even Jane Austen, this closed world is a whole world,” the critic Boyd Tonkin wrote in The Independent. “Via the British ‘Circus’ and its Soviet counterpart, Le Carré created a laboratory of human nature; a test-track where the innate fractures of the heart and mind could be driven to destruction.”
In a career spanning more than a half-century, Mr. le Carré wrote more than two-dozen books and set them as far afield as Rwanda, Chechnya, Turkey, the Caribbean and Southeast Asia…
Great stuff. Worth reading in full.
French fries, Coq au Vin, le weekend and other tricky questions
Further to A Song for Brexit (see last Saturday’s blog) I’ve been pondering the way language and ideology get intertwined. Remember when the French President, Jacques Chirac, refused to back the Bush-Blair invasion of Iraq and enraged US legislators refused to allow Congressional caterers to serve “French fries”? From then on they had to be called “Freedom fries”. (Ironic that, given what happened to Iraq and the Middle East generally as a result of that particular adventure.)
The wicked point of the A Song for Brexit sketch was that if the UK left Europe then the French wanted their words back. No more ‘joie de vivre’, ‘RSVP’ or ‘cul-de-sac’, among many others. The problem is that, as some wag once observed, “French is spoken in every language.” The only English word I could think of that the French had appropriated was “weekend”. (I know: there are probably others, but I couldn’t think of them at the time.)
Yesterday, after a bout of nostalgia triggered by a nice email from an academic colleague who had decided to repair to his holiday house in France until the UK finally sorted out what it was going to do with the virus, I set to and cooked Coq au Vin for supper. But when my wife was putting some of the surplus into the freezer for subsequent consumption, she began to write Coq au… on the label and then paused. Should it henceforth be merely Chicken Stew?
Doesn’t have quite the same ring to it, does it? And she can’t call it Chicken Casserole either. Hmmm…
Other, hopefully interesting, links
Listen to Barack Obama reading the Preface to his memoir. The audio version gives you a good sense of the man. Jason Kottke thinks it’s better than reading the book. I can believe it. Link.
Kazakhstan’s President is addicted to photoshopping his image. Nice piece on Motherboard.
This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!
The Law Faculty building in Cambridge (now named the Sir David Williams Building, after my late friend and mentor). It always reminds me of a beached cruise liner.
Quote of the Day
“Boats against the current are the only kind I should choose to embark on; the going’s tough, but your fellow passengers are better company than you will ever find going with the flow.”
Frederic Raphael
Musical alternative to the morning’s radio news
“The Thrill Is Gone” | BB King, Eric Clapton, Robert Cray and Jimmie Vaughan
On Tuesday, in a rare break with recent practice, a branch of the UK government did something clever. The Competition and Markets Authority (CMA) outlined plans for an innovative way of regulating powerful tech firms, one that overcomes the procedural treacle-wading implicit in competition law designed for an analogue era.
The proposals emerged from an urgent investigation by the Digital Markets Taskforce, an ad hoc body set up in March and led by the CMA with input from the Information Commissioner’s Office and Ofcom, the telecommunications and media regulator. The taskforce was charged with providing advice to the government on the design and implementation of a pro-competition regime for digital markets. It was set up following the publication of the Treasury’s Furman review on unlocking digital competition, which reported in March 2019 and drew on evidence from the CMA’s previous market study into online platforms and digital advertising.
This is an intriguing development in many ways. First of all it seems genuinely innovative – unlike this week’s antitrust lawsuits brought against Facebook in the US…
If this survives the ‘consultation’ (i.e. lobbying) phase and makes it onto the statute book, then things could get interesting.
The Facebook Oversight Theatre Show goes live
In an entertaining Guardian report a member of Facebook’s ludicrous ‘Oversight’ Board says that it “won’t shy away” from tackling Trump-style disinformation.
This month, the 20-member board – made up of academics, lawyers, politicians and journalists from across the world – announced the first six cases it would review over the next 90 days.
One of the cases involves a two-year-old post of an alleged quote from Joseph Goebbels, the propaganda minister of Nazi Germany. The post, which spells out the need to appeal to emotions and instincts, instead of intellect, and on the unimportance of truth, was removed by Facebook for violating its policy on dangerous individuals and organisations. The user who reshared the post has appealed on the grounds that Trump was, in their view, following a similar fascist model.
The Board’s remit is limited to content that has been removed by Facebook. Its sole Australian member, Nic Suzor, admitted to the Guardian that this is likely to be “problematic” when it comes to addressing disinformation posted by politicians, such as US president Donald Trump posting false information about election fraud. Currently Facebook puts warning labels on this kind of post, rather than removing it. “The only way that we can handle cases where Facebook has decided not to remove something is if Facebook refers it to us,” he said.
Writing in the Columbia Journalism Review, Tow Center Director Emily Bell was distinctly underwhelmed by this charade:
In today’s information ecosystem, technology platforms like Facebook are not just the arbiters of truth; they are also the setters of norms, the weather vanes of taste, and the guardrails of democracy. And, in an increasing number of places, they are the instruments of oppression. As such, perhaps the most striking feature of the board’s first set of cases is the lack of ambition in their subject matter.
If a panel of global experts really needs three months to decide if it is acceptable to show a naked boob in pursuit of cancer prevention, then the Oversight Board’s hope of creating lasting impact is doomed from the outset. Issues of contextual nuance might represent interesting cases, but they are not “hard” in the way that, say, the mass removal of posts in compliance with repressive speech laws is hard. Yet cases concerning the latter are unlikely ever to reach the Oversight Board. In fact, in the board’s charter, Article 2, on the “scope” of the board’s activities, states: “In limited circumstances where the board’s decision on a case could result in criminal liability or regulatory sanctions, the board will not take the case for review.” In other words, if a removal is in compliance with the law of a country, then it will not be reviewed.
On the same day the Facebook Oversight Board launched, Amnesty International published a damning report on how aggressive new censorship laws in Vietnam are stifling citizens, the free press, and activists—with the compliance of technology companies like Google and Facebook. Facebook reports a 983 percent increase in content restrictions in Vietnam since the tightening of laws in April, pushing the number of restricted and deleted posts up from 77 to 834 in the space of a year.
This Board is rather like the ‘ethics’ boards that companies are setting up in a desperate attempt to avoid legal regulation. What those boards do is Ethics Theatre. By setting up this ludicrous board, Facebook is now engaging in Oversight Theatre. Quelle Surprise!
This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!
Well, sometimes the only thing to do is laugh. And this is lovely.
Quote of the Day
“Worst damn fool mistake I ever made was letting myself be elected Vice President of the United States. Should have stuck as Speaker of the House. Gave up the second most important job in Government for eight long years as Roosevelt’s spare tire.”
John Garner, VP under FDR 1933-41.
Musical alternative to the morning’s radio news
Schubert Ständchen | Camille Thomas and Beatrice Berrut
Delightful essay by Ashutosh Jogalekar in 3 Quarks Daily
I’ve always been captivated by Isaiah Berlin’s famous distinction (which he borrowed from the ancient Greek poet Archilochus) between two kinds of thinker — hedgehogs (who know only one big thing) and foxes (who know many little things). On that scale, I’m a fox. But when I was thinking about this (and the relevant meditation is in my lockdown diary) it occurred to me that I know people who are sometimes hedgehogs and sometimes foxes.
This essay is an elegant disquisition on an analogous dichotomy proposed by Freeman Dyson, who argued that science thrives on the interplay between birds, who “look at the big picture and survey the landscape from a great height”, and frogs, who “love playing around in the mud of specific problems, delighting in finding gems”. Newton and Einstein were birds. Hubble and Fermi were frogs. But Planck was a frogbird.
Hope you enjoy it as much as I did.
Norman Abramson, surfer (and pioneer of wireless networking) RIP
Norman Abramson, who built the world’s first packet-switched wireless network, has died at the age of 88. Steve Lohr has written a nice obit in the New York Times. I first came across him when I was doing the research for my history of the Internet. When the ARPAnet (the precursor of the modern Internet) was being designed in the late 1960s it used telephone landlines to connect its nodes. But Norman was a professor at the University of Hawaii and decided that the connection between his node and the network would have to be a wireless one. With Frank Kuo, a former Bell Labs scientist who came to the University of Hawaii the same year as him (1966), he built such a network.
The design challenge they faced was how to enable multiple devices to send and receive data packets reliably over a shared radio channel. The key innovation Abramson and Kuo came up with was to divide the data into packets which could be re-sent if they were lost during transmission, allowing random access rather than sequential access to the channel. The resulting radio network technology they developed was deployed as ALOHAnet in 1971. The name derived from Aloha, a Hawaiian greeting.
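The core idea is simple enough to caricature in a few lines of code. Here’s a purely illustrative toy simulation of slotted ALOHA (my own sketch, not anything from Abramson and Kuo’s actual design): stations transmit whenever they have a packet, overlapping transmissions are lost, and each collided station simply retries after a random delay.

```python
import random

# Toy simulation of the slotted-ALOHA idea (illustrative sketch only):
# every station with a packet just transmits; if two or more transmit in
# the same slot they collide, the packets are lost, and each station
# retries after a random backoff.

SLOTS = 10_000
STATIONS = 20
NEW_PACKET_PROB = 0.01   # chance a station generates a fresh packet in a slot
MAX_BACKOFF = 16         # retry delay drawn uniformly from 1..MAX_BACKOFF slots

def simulate():
    next_attempt = [None] * STATIONS   # slot in which each station will (re)transmit
    delivered = collisions = 0
    for t in range(SLOTS):
        for s in range(STATIONS):      # stations may generate new traffic
            if next_attempt[s] is None and random.random() < NEW_PACKET_PROB:
                next_attempt[s] = t
        senders = [s for s in range(STATIONS) if next_attempt[s] == t]
        if len(senders) == 1:
            delivered += 1                       # exactly one sender: success
            next_attempt[senders[0]] = None
        elif len(senders) > 1:
            collisions += 1                      # overlap: all packets lost,
            for s in senders:                    # re-send after a random delay
                next_attempt[s] = t + random.randint(1, MAX_BACKOFF)
    return delivered, collisions

print(simulate())
```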
It proved to be a fruitful idea. In 1972, Bob Metcalfe was working in the Computer Science Lab at Xerox PARC, trying to design a wired system for connecting the computers and other devices (for example, laser printers) that the PARC team were building at the time. He came across a 1970 paper by Abramson outlining the idea for sending and re-sending, got in touch with him and was invited to spend a month at the University of Hawaii. From that came two things: one was Metcalfe’s PhD thesis, which was about ALOHAnet; the other was one of the key features of the Ethernet networking system that Metcalfe then co-invented at PARC with Dave Boggs, Chuck Thacker and Butler Lampson. A central idea in the technology was what the inventors called carrier sense multiple access with collision detection (CSMA/CD); this is what enabled devices to communicate on a shared wire without the earlier system (developed by IBM, I think) of a rotating ‘token’ that a device had to capture before it was allowed to send.
The reason Abramson wound up at the University of Hawaii was wonderfully serendipitous: during a stop-over on a flight from Tokyo he rented a surf-board, learned to surf, was transfixed by the experience and decided he wanted to work somewhere where he could combine communications research with surfing. For many subsequent years, he surfed every single day.
The Computer History Museum had an event to mark the 50th anniversary of ALOHAnet. They recorded a lovely video in which Abramson and Kuo tell the story of how they built the network. It’s over an hour long, so probably only for those for whom the history of the Internet is their thing. Needless to say, I loved it.
This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!
What the data shows is that during the first week after getting their shots, both groups of people kept getting covid-19 at about the same rate. But after that, the lines start to separate. And they just keep separating and separating.
That’s the result of the vaccine taking effect, which usually takes a few days and gets boosted by a second dose. After two weeks, hardly anyone with the vaccine was getting covid-19. But the disease kept striking those who got the placebo with clockwork regularity.
“No comment. This is what vaccines do,” said Florian Krammer, a prominent immunologist, who posted a version of the image to Twitter.
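For anyone wondering how a headline figure like “95 per cent effective” is derived from a chart like that, the usual sum is just a ratio of attack rates in the two arms of the trial. Here’s a quick sketch with illustrative case counts (not the actual trial figures):

```python
# Vaccine efficacy as conventionally reported from trial data:
#   efficacy = 1 - (attack rate in vaccine arm / attack rate in placebo arm)
# The case counts below are illustrative, not the actual trial figures.
vaccine_cases, vaccine_n = 8, 18_000
placebo_cases, placebo_n = 162, 18_000

efficacy = 1 - (vaccine_cases / vaccine_n) / (placebo_cases / placebo_n)
print(f"estimated efficacy: {efficacy:.0%}")   # roughly 95%
```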
Planxty is the best Irish folk group of my lifetime. And at the heart of the group was the piper Liam Ó Flynn. Despite the pace of the music, he always had a strange kind of impassive dignity.
Henri, le Chat Noir, angst-ridden feline YouTube star est mort
We are très désolé to report that YouTube cat-video sensation Henri, le Chat Noir has died at the ripe old age of 17. His collaborator Will Braden, aka the “thieving filmmaker,” announced Henri’s passing in a moving Facebook post. Apparently, Henri had a deteriorating spinal condition and had been rendered largely immobile as a result. Despite the pandemic, a local vet made a home visit to “help him pass peacefully, surrounded by those that loved him,” Braden wrote.
Henri (né Henry) was not actually Braden’s cat; the Facebook post identifies Braden’s mother as Henri’s real-life caretaker. Henri lived in an undisclosed location in Seattle’s North End, largely oblivious to his online celebrity. He was a rescue cat, adopted from a local animal shelter as a kitten, who shared his living space with a second white cat, known to his fans as “l’Imbecile Blanc,” who survives him. While a student at the Seattle Film Institute, Braden noted Henri’s “regal presence and distinguished personality,” and he featured the cat in a short film for class. The video hit YouTube on May 24, 2007, and Henri’s existential musings soon began winning enthusiastic fans.
There’s a firm that’s grown faster than any firm to date. Its founder also set the DNA of the firm, but without the benefit of the modulation and self-awareness that come with age. It’s in a sector where network effects created a handful of organisms of unprecedented scale. There has never been an organization of this scale and influence, that is more like its founder, than Facebook. I know, you’re thinking, “What about the Catholic Church?” Nope. Numerous acts of violence against children, coupled with institutionalized cover-ups, mean the acorn has fallen pretty far from the tree (Jesus).
Here’s the rub: Mark Zuckerberg is a sociopath, and Facebook has institutionalized sociopathy. To understand sociopaths, according to the quirky psychologist on my new favorite show, Fleabag, you need to take things away, not add them. There is no empathy, no emotion, nothing. According to a less entertaining, but likely more credible source, Psychology Today:
Sociopathy is an informal term that refers to a pattern of antisocial behaviors and attitudes. In the Diagnostic and Statistical Manual of Mental Disorders (DSM), sociopathy is most closely represented by Antisocial Personality Disorder. Outwardly, those described as sociopaths may appear disturbed but can also show signs of caring, sincerity, and trustworthiness. In fact, they are manipulative, often lie, lack empathy, and have a weak conscience that allows them to act recklessly or aggressively, even when they know their behavior is wrong.
The above makes for a decent blurb for Zuck for his upcoming 20-year high school reunion. Maybe also something about him learning Mandarin or some such…
Magnificent stuff. Do read it all.
Seven things to consider when travelling to Europe from January 1, 2021
Helpful advice from Politico for hapless UK-resident Europhiles. Don’t forget your Green Card. Make sure you have health insurance. You may need to get an International Driving Licence (and only a third of UK post offices are able to supply those). Etc., etc.
Remember how Brexit was going to slash red tape?
What is it that Foreign Exchange dealers know that we don’t?
My career started in the late 1970s, working in the foreign exchange department of the Bank of America in the City of London. Lovely people, well-paid work and a bunch of dealers who would buy or sell their mother on a ten-point spread and many of whom only ever read the sporting pages of The Sun.
It was also a very laissez-faire and realistic place: currencies were worth what someone would pay for them, nothing more, nothing less. Which is why the fall of sterling in the last four years should make you sit up and take notice; it has fallen from well over 1.30 against the euro to 1.10. That price still includes some expectation that a deal may be done; if there is no deal by Sunday, the markets will open on Monday to yet another fall. Depending on where you start your calculations and where the pound ends up, that is a fall of between 15% and 20%.
Now ask yourself this: why do thousands of hard-nosed, free-market-loving dealers from all over the world think the UK’s currency is worth so much less than four years ago, and why will a no-deal Brexit make that worse?
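For what it’s worth, the arithmetic behind that 15–20% range looks like this (the start rates below are just illustrative readings of “well over 1.30”):

```python
# Back-of-envelope check of the percentage fall in sterling against the euro.
# Start rates are illustrative; the end rate of 1.10 comes from the text above.
END_RATE = 1.10
for start in (1.30, 1.375):
    fall = (start - END_RATE) / start * 100
    print(f"from {start:.3f} to {END_RATE:.2f}: a fall of about {fall:.0f}%")
```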
Other, hopefully interesting, links
Physicists solve 150-year-old mystery of equation governing sandcastle physics. It’s the Kelvin Equation. Link
Toyota claim to have made a big breakthrough in battery technology: solid state; 500km on a charge; charge time 10 minutes. A big deal if they can deliver on it. Link
This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!
Today, 48 state attorneys general, plus Trump’s Federal Trade Commission, filed antitrust suits against Facebook.
There are two complaints, one from the states and one from the FTC. The state AG complaint is stronger, but both tell the same story. Facebook bought Instagram and WhatsApp to stop nascent competitors from challenging its monopoly power in social networking. It also used a variety of other tactics to foreclose competitors it could not buy from entering the market and challenging its dominance. Then, after it became a monopoly, it increased prices or downgraded user experiences to profit from the conspiracy it had arranged.
The narrative comes from legal scholar and former ad executive Dina Srinivasan’s remarkable 2019 paper on Facebook. In her analysis, Srinivasan showed that Facebook actually beat out MySpace by offering users a product differentiated with better privacy guarantees. But after monopolizing the market and killing its competitors, Facebook immediately started degrading the quality of the product with intrusive surveillance of its users, contra their wishes.
This could conceivably be a big moment in the move to bring big tech companies back under some kind of control. But Matt Stoller could be a tad over-optimistic about the likelihood of this suit succeeding any time soon.
Chris Nuttall on the Facebook antitrust suit
Writing in today’s FT, Chris observes that
Facebook looks exposed and unprepared in the face of a concerted attack launched on Wednesday by the Federal Trade Commission and 46 US states aimed at breaking up its empire.
Its problem is the lack of integration of the social network with the photo-sharing service Instagram and WhatsApp messaging platform it acquired. They are distinctive brands that can be used without recourse to Facebook itself and thus can be easily separated if the FTC gets its way and forces Facebook to divest the two services.
That would be deeply damaging. Facebook has failed to reinvent its core service to stay relevant to changing user habits. Its social network has been short on innovation, either copying or buying services tapping the latest trends. Its pivots towards photos and mobile messaging groups were well-timed. They are faster-growing businesses, but it may now be forced to cash out those strong bets on the future…
How Apple is organised
Fascinating article in the Harvard Business Review.
The secret is simple really: don’t have general managers.
Apple is not a company where general managers oversee managers; rather, it is a company where experts lead experts. The assumption is that it’s easier to train an expert to manage well than to train a manager to be an expert. At Apple, hardware experts manage hardware, software experts software, and so on. (Deviations from this principle are rare.) This approach cascades down all levels of the organization through areas of ever-increasing specialization. Apple’s leaders believe that world-class talent wants to work for and with other world-class talent in a specialty. It’s like joining a sports team where you get to learn from and play with the best.
Worth reading in full.
Other, hopefully interesting, links
64 Reasons To Celebrate Paul McCartney. By Ian Leslie. Link.
The Northern Lights Photographer of the Year for 2020. Amazing photographs. Link. (HT to Jason Kottke, who is always spotting beautiful things.)
This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!
A large part of the American public has distrusted “science” since the early 20th century, seeing it variously as a threat to religious beliefs, a disruptor of moral values, and a slippery slope towards a totalitarian state. “A tendency to trace social ills to the cultural sway of an ideologically infected science continues up to our own day, even as the details of the indictment have changed.”
There was only one Brexit deal — ever
It was always wealth vs sovereignty: how much loss of the former in return for how much gain in the latter. Fabulous Guardian column by Rafael Behr.
Short read, and well worth it.
Now we know what went on in Matt Hancock’s secret meeting with Mark Zuckerberg
Great reporting by the Bureau of Investigative Journalism.
Mark Zuckerberg threatened to pull Facebook’s investment from the UK in a private meeting with Matt Hancock, the Bureau of Investigative Journalism can reveal.
The minutes, from May 2018, show that an obsequious Hancock was eager to please, offering “a new beginning” for the government’s relationship with social media platforms. He offered to change the government’s approach from “threatening regulation to encouraging collaborative working to ensure legislation is proportionate and innovation-friendly”.
Hancock sought “increased dialogue” with Zuckerberg, “so he can bring forward the message that he has support from Facebook at the highest level”.
Zuckerberg attended the meeting only days after Hancock – then the secretary of state for digital, culture, media and sport (DCMS) – had publicly criticised him for dodging a meeting with MPs. Civil servants had to give Zuckerberg explicit assurance that the meeting would be positive and Hancock would not simply demand he attend the Select Committee, and noted that the meeting began with an ambience of “guarded hostility”.
The government fought tooth and nail to prevent the minutes of the meeting from being released. In the end, the Information Commissioner ordered their release.
“In the Commissioner’s view the requirement for due transparency and openness is particularly acute in the present case given Mr Zuckerberg’s absence in the UK public domain… In view of the high level of personal control which the Facebook founder and CEO enjoys over some of the most influential and powerful social media platforms in the UK, the Commissioner considers that the demand for such transparency is correspondingly high.”
The Christchurch mass killer was radicalised by YouTube
The New Zealand mosque shooter was radicalised on YouTube: Among the findings of a New Zealand government investigation into the 2019 mass killing in Christchurch was that the shooter had been radicalized more on YouTube than he had in the darker corners of the internet. The Times technology columnist Kevin Roose also has a good Twitter thread on the missed opportunities to take YouTube’s dangers seriously.
But the NZ authorities also came in for criticism, as the New York Times reports:
Still, the Royal Commission — the highest-level inquiry that can be conducted in New Zealand — faulted the government on several counts. It found that lax gun regulations had allowed Mr. Tarrant to obtain a firearms license when he should not have qualified. And it said that the country’s “fragile” intelligence agencies had a limited understanding of right-wing threats and had not assigned sufficient resources to examine dangers other than Islamist terrorism.
A system mired in bureaucracy and unclear leadership was ineffective. But the two independent commissioners who conducted the inquiry stopped short of saying that the disproportionate focus on Muslims as a potential source of violence had allowed Mr. Tarrant’s attacks to happen.
A page from my Lockdown audio diary
Sunday 29 March — Day 8
There’s a cynical academic joke that you hear in every university. It goes like this: Q: Why are academic disputes so acrimonious? A: Because the stakes are so low.
The point of the joke, I suppose, is to emphasise that professors argue about issues which are of no interest to any normal person — and so in that sense, they’re just contemporary manifestations of those fabled medieval disputes about the number of angels who could dance on the head of a pin. That is to say, arguments about stuff that doesn’t really matter, where the stakes are very low.
As it happens, though, we now find ourselves in the middle of an academic dispute where the stakes could not be higher. The question at issue is how best to combat the Coronavirus — and millions of lives may depend on getting the right answer.
The current contestants in this battle of ideas are teams of researchers from two of Britain’s best universities — Imperial College London and Oxford. Both have constructed mathematical models of the pandemic which, they hope, enable them to understand the dynamics of its contagion, and also to simulate the likely impact of various policies to manage the outbreak.
A few weeks ago, after the Johnson administration had its “Oh shit, this could be really serious” moment, you may recall that the Prime Minister started to give daily press conferences flanked by two eminent knights who embodied the “scientific advice” that he was determined assiduously to follow. This blogger — and thousands of observers overseas — watched incredulously as these eminences laid out a strategy based on the concept of herd immunity: the idea was that about 60 per cent of the population would need to get the virus first, after which this supposed immunity would kick in.
A quick session with a calculator confirmed the hunch that this idea looked bonkers. Just think about the numbers. The UK currently has nearly 70m inhabitants. 60% of 70m is 42m, most of whom, it was assumed, would only get a mild dose, recover and thereby acquire herd immunity. But if the mortality rate of the virus was one per cent (which was one of the guesses at the time) then that meant that the UK government policy was assuming that 420,000 people might die. At which point even those of us who know nothing about epidemiology but can do simple arithmetic began to wonder what these eminent scientific knights had been smoking.
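Spelled out, that back-of-envelope sum (the same one as above, nothing more) runs as follows:

```python
# The back-of-envelope herd-immunity sum from the paragraph above.
population = 70_000_000        # roughly the UK population
herd_fraction = 0.60           # assumed share that would need to be infected
infection_fatality = 0.01      # the 1% mortality guess of the time

infections = population * herd_fraction        # 42,000,000
deaths = infections * infection_fatality       # 420,000
print(f"{infections:,.0f} infections -> {deaths:,.0f} deaths")
```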
Clearly, the modellers at Imperial College wondered the same thing, and they spent a frantic weekend running simulations to determine what a less crazy strategy would be — and concluded that ‘containment’ would be not only the best bet, but the only sensible thing to do. Their conclusions seemed to convince Johnson and his advisers, and so over a weekend the government pivoted on a sixpence to a new policy — containment and lockdown in order to prevent our beloved NHS, with its 8,000 ventilators, from being overwhelmed. Which is how we came to be where we are now and why I am composing this from deepest quarantine.
At this point Oxford University enters the fray. According to a report in last Tuesday’s Financial Times, the Oxford model suggested that the virus may already have infected far more people in the UK than anyone had previously estimated — perhaps as much as half the population. If the results are confirmed, the FT report continued, they would imply that fewer than one in a thousand of those infected with Covid-19 become ill enough to need hospital treatment. The vast majority would develop very mild symptoms or none at all.
The research, observes the FT, presented a very different view of the epidemic to the Imperial College modelling which had such a dramatic influence on government policy. “I am surprised that there has been such unqualified acceptance of the Imperial model,” said Professor Sunetra Gupta, professor of theoretical epidemiology at Oxford, who led the study. Experts in the semiotics of academic warfare will be able to decode that genteel observation. The professor is, er, surprised. It’s a bit like when lawyers say “with the greatest possible respect…”
I have no idea which group of modellers is right. Perhaps neither is. But the interesting thing about the Oxford hypothesis is that it is testable in a way that would have appealed to Karl Popper.
If people have acquired immunity through having had a mild dose of the disease, then they will have antibodies in their blood. There are, I think, recognised tests for detecting these antibodies. So all that is needed is for a research team (it could be from a polling firm like YouGov) to administer this test to a random sample of the UK population. The results would tell us not only if the Oxford conjecture is accurate but also what proportion of the population has immunity. And when we know that maybe we’ll be getting somewhere.
(Oh, and by the way, if you heard the sound of someone clapping, it’ll be the ghost of Karl Popper.)
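As an aside, the sum from such a survey is straightforward. Here’s a sketch with made-up numbers (a real study would also have to correct for the antibody test’s sensitivity and specificity):

```python
import math

# Estimating population seroprevalence from a random antibody-testing sample.
# Numbers are hypothetical; this ignores test error, which matters in practice.
sample_size = 2_000
positives = 120                      # hypothetical count with antibodies

p_hat = positives / sample_size      # point estimate of the infected fraction
se = math.sqrt(p_hat * (1 - p_hat) / sample_size)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se   # rough 95% interval

print(f"estimated prevalence: {p_hat:.1%} (roughly {low:.1%} to {high:.1%})")
```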
From 100 Not out! – a Lockdown Diary. If you liked this you can get the book on the Kindle store.
Mount Everest is higher than we thought, say Nepal and China. Link
This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!
“I would suggest that the only books that influence us are those for which we are ready, and which have gone a little further down our particular path than we have yet got ourselves.”
Terrific account by Cade Metz of how researchers at DeepMind think they have solved “the protein folding problem,” a task that has bedeviled scientists for more than 50 years.
Some scientists spend their lives trying to pinpoint the shape of tiny proteins in the human body.
Proteins are the microscopic mechanisms that drive the behavior of viruses, bacteria, the human body and all living things. They begin as strings of chemical compounds, before twisting and folding into three-dimensional shapes that define what they can do — and what they cannot.
For biologists, identifying the precise shape of a protein often requires months, years or even decades of experimentation. It requires skill, intelligence and more than a little elbow grease. Sometimes they never succeed.
Now, an artificial intelligence lab in London has built a computer system that can do the job in a few hours — perhaps even a few minutes…
Great read. And a nice accompaniment to the astonishing achievement of the Covid vaccine development effort.
Peter Alliss RIP
The Guardian carried the obit that had been written by the late Frank Keating (who died in 2013). It concludes thus:
To the end, he could be outrageously and sharply pointed but also poetically, tellingly simple at the microphone, like this sotto running-commentary advice at Sandwich in 2011 as the young Irishman Rory McIlroy came into view down the fairway: “Just keep playing nicely, gently, m’boy … keep finding the fairways, keep finding the greens … You can’t force this game … some people think you can … some players think they can … but you can’t … Golf is all about patience … Good old-fashioned word ‘patience’ … ask kids today about ‘patience’ and they pull out their iPhones, whatever they are, and say it don’t say anything here about ‘patience’ but I can tell you the population of Madagascar … ”
Alliss was, by all accounts, a born raconteur (in the same genre as his predecessor Henry Longhurst). Mark Townsend in Golf Monthly included Alliss’s anecdote about Bobby Locke, who won the British Open four times:
One of his favourite memories was, again, something quirky rather than the norm. To set the scene the great Bobby Locke had joined him on a patch of rough ground to the right of the 1st fairway at the Old Course to hit a few balls ahead of his opening round in 1957.
“He had about eight balls and he sent his caddy, Bill Golder, who was about 65 then, down on to the beach. We spent the next five minutes chatting about this exhibition match and that exhibition match before I said ‘Well, I must be off’.
“He asked what the time was, I told him it was twenty to and he replied ‘Oh God, I must be off.’ He never hit a ball, he waved to his caddy and he was off. It is bizarre to think these days that there are rows of Titleists and there’s his caddie, who has clambered down across the beach, and he never hit a ball. He went to the 1st tee and went on to win the championship by three shots.”
I was a keen golfer in my undergraduate days, so Alliss is a figure from my past. I remember once walking round with him in a tournament — something you could do occasionally in those days, before golf became a TV-dominated sport. He struck me as a handsome, amiable, right-wing buffer who also happened to be a terrific golfer. And he was a terrific commentator on the game.
When they write the history of this era, one of the strangest chapters will be devoted to Uber, a company that was never, ever going to be profitable, which existed solely to launder billions for the Saudi royals.
From the start, Uber’s “blitzscaling” strategy involved breaking local taxi laws (incurring potentially unlimited civil liability) while losing (lots of) money on every ride. They flushed billions and billions and billions of dollars down the drain.
But they had billions to burn. Mohammed bin Salman, the murdering Crown Prince of the Saudi royal family, funded Softbank – a Japanese pump-and-dump investment scheme behind Wework and other grifts – with $80B as part of his “Vision 2030” plan.
Vision 2030 is a scheme to diversify Saudi wealth away from hydrocarbons by attempting to establish monopolies that will allow the family to control entire sectors of the global economy.
These schemes are longshots, and the fallback position is to unload failed monopolies – with staggering debt-overhangs – on investors who’ve been suckered with the promise that really big piles of shit surely have a pony buried underneath them somewhere.
I particularly like his payoff lines…
Every long con needs a “store” – a place where the con plays out, like a fake betting shop where the scammers rope in the mark and fleece them of every dime. But once the con is done, the store has to shut down amid a “blow-off” that lets the grifters escape.
Uber’s shutting down the AV part of its store: they “sold” the division to a startup called Aurora, but the “sale” involves Uber “investing” $400,000,000 in Aurora. That is, they’ve paid someone else to take this bit of set-dressing off their hands.
If you want to learn more about how Uber will never, never, ever, ever be a real business, be sure to tap into transport economist Hubert Horan’s series on the company, which he calls a “bezzle.”
Great stuff.
Another, hopefully interesting, link
The best of the ‘Best Books of 2020’ lists. Curated by Jason Kottke. Link
This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!
The year I abandoned my Nikon, it popped up in a surprising place: cognitive science. That year, Joshua D. Greene published Moral Tribes, a work of philosophy that draws on neuroscience to explore why and how we make moral judgments. According to Greene, we make them using two different modes — not unlike a digital camera. “The human brain,” he writes, “is like a dual-mode camera with both automatic settings and a manual mode.” Sometimes, the analogy goes, you want to optimize your exposure time and shutter speed for specific light conditions — say, when faced with a big life decision. Other times, probably most of the time, tinkering with the settings is just too much of a hassle. You don’t want to build a pro-and-con list every time you order at a restaurant, just like you don’t want to adjust the aperture manually for each selfie you take.
Strange goings-on at Google
From this morning’s FT:
How did Google get itself into this mess? A company that is widely seen as having deeper capabilities in artificial intelligence than its main rivals, and which is under a microscope over how it wields its considerable economic and technological power, just had an acrimonious parting of the ways with its co-head of AI ethics.
Timnit Gebru left claiming she was fired over the suppression of an AI research paper. Jeff Dean, Google’s head of AI, said the paper wasn’t fit for publication and Dr Gebru resigned.
Except that she didn’t resign, it seems. She was fired — or, as they say in Silicon Valley without a hint of irony, “terminated”.
So let’s backtrack a bit. Dr Gebru was the joint leader of Google’s ethical AI team, and is a prominent figure in AI ethics research. When she worked for Microsoft Research, she was co-author of the groundbreaking paper that showed facial recognition to be less accurate at identifying women and people of colour — a flaw which implies that its use can end up discriminating against them. She also co-founded the ‘Black in AI’ affinity group, and is a champion of diversity in the tech industry. The team she helped build at Google is believed to be one of the most diverse in AI (not that that’s saying much) and has produced critical work that often challenges mainstream AI practices.
A series of tweets, leaked emails, and media articles showed that Gebru’s exit was the culmination of a conflict over another paper she coauthored. Jeff Dean, the head of Google AI, told colleagues in an internal email (which he has since put online) that the paper “didn’t meet our bar for publication” and that Gebru had said she would resign unless Google met a number of conditions, which it was unwilling to meet. Gebru tweeted that she had asked to negotiate “a last date” for her employment after she got back from vacation. She was cut off from her corporate email account before her return.
More detail is provided by an open letter authored by her supporters within Google and elsewhere. “Instead of being embraced by Google as an exceptionally talented and prolific contributor”, it says,
Dr. Gebru has faced defensiveness, racism, gaslighting, research censorship, and now a retaliatory firing. In an email to Dr. Gebru’s team on the evening of December 2, 2020, Google executives claimed that she had chosen to resign. This is false. In their direct correspondence with Dr. Gebru, these executives informed her that her termination was immediate, and pointed to an email she sent to a Google Brain diversity and inclusion mailing list as pretext.
In that email, it seems that Gebru pushed back against Google’s censorship of her (and her colleagues’) research, which focused on examining the environmental and ethical implications of large-scale AI language models (LLMs), which are used in many Google products. Gebru and her team worked for months on a paper that was under review at an academic conference. In late November, five weeks after the article had been internally reviewed and approved for publication through standard processes, senior Google executives made the decision to censor it, without warning or cause.
Gebru asked them to explain this decision and to take accountability for it, and also to take responsibility for their “lacklustre” stand on discrimination and harassment in the workplace. Her supporters see her ‘termination’ as “an act of retaliation against Dr. Gebru, and it heralds danger for people working for ethical and just AI — especially Black people and People of Color — across Google.”
As an outsider it’s difficult to know what to make of this. MIT’s Technology Review obtained a copy of the article at the root of the matter — it has the glorious title of “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” — from one of its co-authors, Emily Bender, a professor of computational linguistics at the University of Washington. However, she asked the magazine not to publish it in full because it was an early draft.
Despite this pre-condition, Tech Review is able to provide a pretty informative overview of the paper. On the basis of this summary, it’s hard to figure out what would lead senior Google executives to pull the plug on its publication.
Its aim, says Bender, was to survey the current landscape of research in natural language processing (NLP).
First of all, it takes a critical look at the environmental and financial costs of this kind of machine-learning research. It finds that the carbon footprint of the research has been ‘exploding’ since 2017 as models are fed more and more data from which to learn. This is interesting and important (I’ve even written about it myself) but there’s nothing special about the paper’s conclusions, except perhaps the implication that the costs of doing this stuff can only be borne by huge corporations, while climate change hits poorer communities disproportionately.
Secondly, the massive linguistic data sets required inevitably contain many varieties of bias. (We knew that.) But they also capture only past language usage and are unable to capture ways in which language is changing as society changes. So, “An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms. It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.” Well, yes, but…
And then there are the opportunity costs of prioritising NLP research as against other things with potentially greater societal benefit. “Though most AI researchers acknowledge that large language models don’t actually understand language and are merely excellent at manipulating it, Big Tech can make money from models that manipulate language more accurately, so it keeps investing in them.”
This research effort brings with it an opportunity cost, Gebru and her colleagues maintain. Not as much effort goes into working on AI models that might achieve understanding, or that achieve good results with smaller, more carefully curated data sets (and thus also use less energy).
Finally, there’s the risk that, because these new NLP models are so good at mimicking real human language, it’s easy to use them to fool people. There have been a few high-profile cases of this, such as the college student who churned out AI-generated self-help and productivity advice on a blog — which then went viral.
“The dangers are obvious: AI models could be used to generate misinformation about an election or the covid-19 pandemic, for instance. They can also go wrong inadvertently when used for machine translation. The researchers bring up an example: in 2017, Facebook mistranslated a Palestinian man’s post, which said ‘good morning’ in Arabic, as ‘attack them’ in Hebrew, leading to his arrest.”
All of this is interesting but — as far as I can see — not exactly new. And yet it seems that, as Professor Bender puts it, “someone at Google decided this was harmful to their interests”.
And my question is: why? Is it just that the paper provides a lot of data suggesting that a core technology now used in many of Google’s products is, well, bad for the world? If that was indeed the motivation for the original dispute and decision, then it implies that Google’s self-image as a technocratic force for societal good is now too important to be undermined by high-quality research which suggests otherwise. In which case, there’s not that much difference between big tech companies and tobacco, oil and mining giants. They’re just corporations, doing what corporations always do.
Another, hopefully interesting, link
One in Six Cadillac Dealers Opt to Close Instead of Selling Electric Cars. When told to get with the times or get out of the way, 150 out of 800 dealers reportedly took a cash buyout and walked away. Link. They’ve figured out that there’s much less money in selling EVs, which require very little follow-up care and maintenance. Once you’ve sold someone an EV, you won’t see them that often. No more expensive oil changes and spark-plug replacements.
This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!
My eye was caught by the title of a working paper published by the National Bureau of Economic Research (NBER): How to Talk When a Machine Is Listening: Corporate Disclosure in the Age of AI. So I clicked and downloaded, as one does. And then started to read…
Since March, the news — and the medium-term outlook — has been persistently depressing. Rupert Beale’s piece in the London Review of Books is the first essay I’ve read that holds out rational grounds for hope.
The virus’s genetic code has been available since January. We knew precious little about the subtleties of coronaviruses, but we did already know that they rip their way into host cells using a protein complex known as Spike. Block Spike, with a vaccine that raises antibodies to it, and you block the virus. There are plenty of ways to do this. You might use a killed authentic Sars-CoV-2 virus; or a different, live but innocuous virus with Spike bolted on; or the Spike protein plus an adjuvant (something that promotes an aggressive immune response); or the messenger RNA that codes for a piece of Spike, so your own cells make the protein. Two vaccines of this last type have proven blessedly effective. The Pfizer/BioNTech and Moderna vaccines are about 95 per cent likely to prevent symptomatic infection. To put this in context, we are content if the annual and very well understood seasonal influenza vaccine is 60 per cent effective. The deputy chief medical officer, Jonathan Van-Tam, compared it to your team scoring two consecutive goals in a penalty shootout. Penalties are too prosaic; in footballing terms these are goals only Maradona could have scored. But even that falls short of conveying quite how remarkable it is to have created a vaccine, using hitherto unproven technology, that’s 95 per cent effective against a novel virus – in less than a year.
The good news doesn’t stop there… Beale is a clinician scientist group leader at the Francis Crick Institute, the UK’s leading institute for the relevant science. So his assessment of the potential of the vaccines is worth paying attention to. His article is long and detailed. And here’s the payoff:
The end is in sight. Effectively deployed testing may be able to ameliorate social distancing until the vaccines arrive. We were woefully prepared for a coronavirus pandemic in March, but were another similar virus to emerge in 2022 we wouldn’t make the same mistakes. We should be wary of learning the wrong lessons, however. To have several highly effective vaccines for this horrible virus after less than a year is a quite astonishing achievement, among the greatest things that we – by which I mean both humanity in general and molecular biologists in particular – have ever accomplished. We’ve been skilful, but we have also been lucky. A Sars-CoV-2 vaccine turns out to be relatively easy to develop. The virus that causes the next pandemic may not be so forgiving.
Other, hopefully interesting, links
24 high-quality Covid illustrations. Free for commercial and personal use. Link
How and when will life go back to normal? Answer from epidemiologists. Link
Faraday Cages for Wi-Fi Routers are the latest 5G conspiracy racket. Why do people believe pseudo-scientific nonsense? Link
This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!