Client-Side Scanning is not a silver bullet

This morning’s Observer column:

In August, Apple opened a chink in the industry’s armour, announcing that it would be adding new features to its iOS operating system that were designed to combat child sexual exploitation and the distribution of abuse imagery. The most controversial measure scans photos on an iPhone, compares them with a database of known child sexual abuse material (CSAM) and notifies Apple if a match is found. The technology is known as client-side scanning or CSS.

Powerful forces in government and the tech industry are now lobbying hard for CSS to become mandatory on all smartphones. Their argument is that instead of weakening encryption or providing law enforcement with backdoor keys, CSS would enable on-device analysis of data in the clear (ie before it becomes encrypted by an app such as WhatsApp or iMessage). If targeted information were detected, its existence and, potentially, its source would be revealed to the agencies; otherwise, little or no information would leave the client device.

CSS evangelists claim that it’s a win-win proposition: providing a solution to the encryption v public safety debate by offering privacy (unimpeded end-to-end encryption) and the ability to successfully investigate serious crime. What’s not to like?

Plenty, says an academic paper by some of the world’s leading computer security experts published last week…

Read on

The truth about artificial intelligence? It isn’t that truthful

This morning’s Observer column:

Recently, a group of researchers at the AI Alignment Forum, an online hub for researchers seeking to ensure that powerful AIs are aligned with human values, decided to ask how truthful GPT-3 and similar models are. They came up with a benchmark to measure whether a particular language model was truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. They composed questions that some humans would answer falsely due to a false belief or misconception. To perform well, models had to avoid generating false answers learned from imitating human texts.

They tested four well-known models, including GPT-3. The best was truthful on 58% of questions, while human performance was 94%. The models “generated many false answers that mimic popular misconceptions and have the potential to deceive humans”. Interestingly, they also found that “the largest models were generally the least truthful”. This contrasts with other NLP tasks, where performance improves with model size. The implication is that the tech industry’s conviction that bigger is invariably better for improving truthfulness may be wrong. And this matters because training these huge models is very energy-intensive, which is possibly why Google fired Timnit Gebru after she revealed the environmental footprint of one of the company’s big models.

Having typed that last sentence, I had the idea of asking GPT-3 to compose an answer to the question: “Why did Google fire Timnit Gebru?” But then I checked out the process for getting access to the machine and concluded that life was too short and human conjecture is quicker – and possibly more accurate.

After this was published I had a nice email from an AI researcher who had put the question to GPT-3:

Q: Why did Google fire Timnit Gebru?

A: Google fired Timnit Gebru because of her political beliefs.

Zuckerberg’s total control of Facebook is half of the problem.

This morning’s Observer column:

Facebook is one of the most toxic corporations on the planet. Its toxicity has two roots. The first is its business model: intrusive and comprehensive surveillance of its users in order to compile profiles that enable advertisers to target messages at them. This business model is powered by the machine-learning algorithms that construct those profiles and determine what appears in the news feeds of the company’s 2.85 billion users. In large measure, it is the output of these algorithms that constitutes the focus of congressional anger and inquiry.

The other source of the company’s toxicity is its governance. Essentially, Facebook is a dictatorship entirely controlled by its founder, Mark Zuckerberg.

Do read the whole piece.

Want to save the Earth? Then don’t buy that shiny new iPhone

This morning’s Observer column:

On Tuesday, Apple released its latest phone – the iPhone 13. Naturally, it was presented with the customary breathless excitement. It has a smaller notch (eh?), a redesigned camera, Apple’s latest A15 “bionic” chipset and a brighter, sharper screen. And, since we’re surfing the superlative wave, the A15 has nearly 15bn transistors and a “six-core CPU design with two high-performance and four high-efficiency cores”.

Wow! But just one question: why would I buy this Wundermaschine? After all, two years ago I got an iPhone 11, which has been more than adequate for my purposes. That replaced the iPhone 6 I bought in 2014 and that replaced the iPhone 4 I got in 2010. And all of those phones are still working fine. The oldest one serves as a family backup in case someone loses or breaks a phone, the iPhone 6 has become a hardworking video camera and my present phone may well see me out.

That’s three phones in 11.5 years, so my “upgrade cycle” is roughly one iPhone every four years. From the viewpoint of the smartphone industry, which until now has worked on a cycle of two-yearly upgrades, I’m a dead loss…

Read on

What’s the next big technological epoch?

This morning’s Observer column:

One of the challenges of writing about technology is how to escape from what the sociologist Michael Mann memorably called “the sociology of the last five minutes”. This is especially difficult when covering the digital tech industry because one is continually deluged with ‘new’ stuff – viral memes, shiny new products or services, Facebook scandals (a weekly staple), security breaches etc. Recent weeks, for example, have brought the industry’s enthusiasm for the idea of a “metaverse” (neatly dissected here by Alex Hern), El Salvador’s flirtation with bitcoin, endless stories about central banks and governments beginning to worry about regulating cryptocurrencies, Apple’s possible rethink of its plans to scan phones and iCloud accounts for child abuse images, umpteen ransomware attacks, antitrust suits against app stores, the Theranos trial and so on, apparently ad infinitum.

So how to break out of the fruitless syndrome identified by Prof Mann? One way is to borrow an idea from Ben Thompson, a veteran tech commentator who doesn’t suffer from it, and whose (paid) newsletter should be a mandatory daily email for any serious observer of the tech industry. Way back in 2014, he suggested that we think of the industry in terms of “epochs” – important periods or eras in the history of a field. At that point he saw three epochs in the evolution of our networked world, each defined in terms of its core technology and its “killer app”.

Epoch one in this framework was the PC era, opened in August 1981 when IBM launched its personal computer…

Read on

Beware state surveillance of your lives – governments can change for the worse

This morning’s Observer column:

In the summer of 2013, shortly after Edward Snowden’s revelations about the surveillance capabilities of the American National Security Agency (NSA) began to appear, I had a private conversation with a former cabinet minister about the implications of the leaks. At one stage, I mentioned to him a remark attributed to a prime architect of some of the NSA systems – that they had taken the US to “a keystroke away from totalitarianism”. The MP scoffed at the idea. What I needed to remember, he told me, in that superior tone that toffs adopt when speaking to their gardeners, was that the US and the UK were “mature democracies”. In such polities, the chances of anyone coming to power who might have the inclination to use such power for sinister purposes was, he said, zero.

Three years later, the US elected Donald Trump. Five years after Trump, look around: an increasing number of democracies are now run by autocrats of various stripes. Think of Orbán in Hungary, the Law and Justice party in Poland, Duterte in the Philippines, Erdoğan in Turkey, Modi in India, Bolsonaro in Brazil and others in Latin America. None of these autocrats has any scruples about using intelligence collected by state agencies against critics, dissidents and potential opponents. In fact, they positively relish being just a keystroke away from totalitarian control.

And now, in a new twist, a gang of seventh-century religious fanatics has taken control of Afghanistan…

Read on

Will Apple’s image-scan plan protect children or just threaten privacy?

This morning’s Observer column:

Once upon a time, updates of computer operating systems were of interest only to geeks. No longer – at least in relation to Apple’s operating systems, iOS and Mac OS. You may recall how Version 14.5 of iOS, which required users to opt in to tracking, had the online advertising racketeers in a tizzy while their stout ally, Facebook, stood up for them. Now, the forthcoming version of iOS has libertarians, privacy campaigners and “thin-end-of-the-wedge” worriers in a spin.

It also has busy mainstream journalists struggling to find headline-friendly summaries of what Apple has in store for us. “Apple is prying into iPhones to find sexual predators, but privacy activists worry governments could weaponise the feature” was how the venerable Washington Post initially reported it. This was, to put it politely, a trifle misleading and the first three paragraphs below the headline were, as John Gruber brusquely pointed out, plain wrong.

To be fair to the Post though, we should acknowledge that there is no single-sentence formulation that accurately captures the scope of what Apple has in mind. The truth is that it’s complicated; worse still, it involves cryptography, a topic guaranteed to lead anyone to check for the nearest exit. And it concerns child sexual abuse images, which are (rightly) one of the most controversial topics in the online world…

Read on

Time to clip the wings of NSO and its Pegasus spyware

This morning’s Observer column:

What’s the most problematic tech company in the world? Facebook? Google? Palantir? Nope. It’s a small, privately held Israeli company called NSO that most people have never heard of. On its website, it describes itself as “a world leader in precision cyberintelligence solutions”. Its software, sold only to “licensed government intelligence and law-enforcement agencies”, naturally, helps them to “lawfully address the most dangerous issues in today’s world. NSO’s technology has helped prevent terrorism, break up criminal operations, find missing people and assist search and rescue teams.”

So what is this magical stuff? It’s called Pegasus and it is ultra-sophisticated spyware that covertly penetrates and compromises smartphones. It’s particularly good with Apple phones, which is significant because these devices are generally more secure than Android ones. This is positively infuriating to Apple, which views protecting its users’ privacy as one of its USPs.

How does Pegasus work? Pay attention, iPhone users, journalists and heads of government…

Read on

This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, Monday to Friday, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!

An Ugly Truth: review

My Observer review of An Ugly Truth: Inside Facebook’s Battle for Domination:

I approached An Ugly Truth with a degree of scepticism on account of its subtitle: “Inside Facebook’s Battle for Domination”. But this book is different. For one thing, its co-authors are not “insiders”, but a pair of experienced New York Times journalists who were members of a team nominated in 2019 for a Pulitzer prize. Much more importantly, though, they claim to have conducted over 1,000 hours of interviews with 400-odd people, including Facebook executives, former and current employees and their families, friends and classmates, plus investors and advisers to Facebook, and lawyers and activists who have been fighting the company for a long time. So if this is an “insider” account, it’s better sourced than all of its predecessors in the genre.

We’ll get to what this account reveals in a moment, but first let’s clear up the title. It comes from the header on an internal memo sent by Andrew Bosworth (AKA “Boz”), a senior Facebook executive and one of Mark Zuckerberg’s closest confidants. “So we connect more people,” it says. “That can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools. And still we connect people. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.”

In a way, this tells you everything you need to know about Facebook…

Do read the whole thing

How mainstream media can’t hold tech companies to account

This morning’s Observer column:

The interview was a classic mainstream media production. Rajan had done the kind of homework that big-time reporters do, right down to reading Henry Kissinger’s musings on the subject of artificial intelligence. “I want to find out,” he declared at the beginning, “who he [Pichai] actually is, apply some proper scrutiny to Google’s power, and understand where technology is taking all of us.” It turns out that he and Pichai both have family in Tamil Nadu and are obsessed with cricket. In the end they even managed to have a cod cricket game in which Rajan tried to bowl a googly at the boss of Google. So they’re both nice guys, got on like a house on fire and told us absolutely nothing.

Like I said: a classic mainstream media treatment of tech. The BBC’s media editor wanted to find out “where technology is taking all of us”. He is thus a native speaker of the narrative of tech determinism – the view that technology drives history and the role of society is simply to mop up afterwards and adjust to the new reality. It is also, incidentally, the narrative that the tech companies have assiduously cultivated from the very beginning, because it usefully diverts attention from awkward questions about human agency and whether democracies might have ideas about which kinds of technology are tolerable or beneficial and which not.

Do read the whole thing