Shoshana Zuboff’s new book

Today’s Observer carries a five-page feature about Shoshana Zuboff’s The Age of Surveillance Capitalism, consisting of an intro by me followed by a Q&A with the author.

LATER: Nick Carr has a perceptive review of the book in the LA Review of Books. John Thornhill also had a good long review in last Saturday’s Financial Times, sadly behind a paywall.

Peak Apple? No: just peak smartphone

This morning’s Observer column:

On 2 January, in a letter to investors, Tim Cook revealed that he expected revenues for the final quarter of 2018 to be lower than originally forecast.

Given that most of Apple’s revenues come from its iPhone, this sent the tech commentariat into overdrive – to the point where one level-headed observer had to point out that the sky hadn’t fallen: all that had happened was that Apple shares were down a bit. And all this despite the fact that the other bits of the company’s businesses (especially the watch, AirPods, services and its retail arm) were continuing to do nicely. Calmer analyses showed that the expected fall in revenues could be accounted for by two factors: the slowdown in the Chinese economy (together with some significant innovations by WeChat, the messaging platform of the Chinese internet giant Tencent); and the fact that consumers seem to be hanging on to their iPhones for longer, thereby slowing the steep upgrade path that had propelled Apple to its trillion-dollar valuation.

What was most striking, though, was that the slowdown in iPhone sales should have taken journalists and analysts by surprise…

Read on

Media credulity and AI hype

This morning’s Observer column:

Artificial intelligence (AI) is a term that is now widely used (and abused), loosely defined and mostly misunderstood. Much the same might be said of, say, quantum physics. But there is one important difference, for whereas quantum phenomena are not likely to have much of a direct impact on the lives of most people, one particular manifestation of AI – machine-learning – is already having a measurable impact on most of us.

The tech giants that own and control the technology have plans to exponentially increase that impact and to that end have crafted a distinctive narrative. Crudely summarised, it goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.”

Critical analysis of this narrative suggests that the formula for creating it involves mixing one part fact with three parts self-serving corporate cant and one part tech-fantasy emitted by geeks who regularly inhale their own exhaust…

Read on

Why Facebook isn’t viable in its current form

This morning’s Observer column:

Way back in the 1950s, a pioneering British cybernetician, W Ross Ashby, proposed a fundamental law of dynamic systems. In his book An Introduction to Cybernetics, he formulated his law of requisite variety, which defines “the minimum number of states necessary for a controller to control a system of a given number of states”. In plain English, it boils down to this: for a system to be viable, it has to be able to absorb or cope with the complexity of its environment. And there are basically only two ways of achieving viability in those terms: either the system manages to control (or reduce) the variety of its environment, or it has to increase its internal capacity (its “variety”) to match what is being thrown at it from the environment.

Sounds abstruse, I know, but it has a contemporary resonance. Specifically, it provides a way of understanding some of the current internal turmoil in Facebook as it grapples with the problem of keeping unacceptable, hateful or psychotic content off its platform…

Read on
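
A footnote for the cybernetically curious: the usual textbook formalisation of Ashby’s law (my gloss, in my own notation rather than anything quoted in the column) is stated in terms of “variety”, the number of distinguishable states a system can take, measured on a log scale. In that notation the law says

\[
V_O \;\ge\; V_D - V_R
\]

where $V_D$ is the variety of disturbances the environment can throw at the system, $V_R$ the variety of responses available to the regulator, and $V_O$ the variety of outcomes that survives regulation. Holding $V_O$ within acceptable bounds therefore means either shrinking $V_D$ (taming the environment) or enlarging $V_R$ (building internal capacity), which are exactly the two routes to viability described in the excerpt above.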

See also Tyler Cowen’s ‘trilemma’ piece

What the Internet tells us about human nature

This morning’s Observer column:

When the internet first entered public consciousness in the early 1990s, a prominent media entrepreneur described it as a “sit up” rather than a “lean back” medium. What she meant was that it was quite different from TV, which encouraged passive consumption by a species of human known universally as the couch potato. The internet, some of us fondly imagined, would be different; it would encourage/enable people to become creative generators of their own content.

Spool forward a couple of decades and we are sadder and wiser. On any given weekday evening in many parts of the world, more than half of the data traffic on the internet is accounted for by video streaming to couch potatoes worldwide. (Except that many of them may not be sitting on couches, but watching on their smartphones in a variety of locations and postures.) The internet has turned into billion-channel TV.

That explains, for example, why Netflix came from nowhere to be such a dominant company. But although it’s a huge player in the video world, Netflix may not be the biggest. That role falls to something that is rarely mentioned in polite company, namely pornography…

Read on

An existential threat to liberal democracy?

This morning’s Observer column:

At last, we’re getting somewhere. Two years after Brexit and the election of Donald Trump, we’re finally beginning to understand the nature and extent of Russian interference in the democratic processes of two western democracies. The headlines are: the interference was much greater than what was belatedly discovered and/or admitted by the social media companies; it was more imaginative, ingenious and effective than we had previously supposed; and it’s still going on.

We know this because the US Senate select committee on intelligence commissioned major investigations by two independent teams. One involved New Knowledge, a US cybersecurity firm, plus researchers from Columbia University in New York and a mysterious outfit called Canfield Research. The other was a team comprising the Oxford Internet Institute’s “Computational Propaganda” project and Graphika, a company specialising in analysing social media.

Last week the committee released both reports. They make for sobering reading…

Read on

The dream of augmentation

This morning’s Observer column:

Douglas Engelbart was a visionary who believed that the most effective way to solve problems was to augment human abilities and develop ways of building collective intelligence. Computers, in his view, were “power steering for the mind” – tools for augmenting human capabilities – and this idea of augmentation has been the backbone of the optimistic narrative of the tech industry ever since.

The dream has become a bit tarnished in the last few years, as we’ve learned how data vampires use the technology to exploit us at the same time as they provide free tools for our supposed “augmentation”…

Read on

Tech companies’ greatest asset

This morning’s Observer column:

Arthur C Clarke’s adage that “any sufficiently advanced technology is indistinguishable from magic” may or may not be true, but what is definitely true is that computer software has magical properties. It’s pure “thought stuff”: a programmer has an idea; they encapsulate it as a string of symbols that are then fed into an inanimate machine. And then the machine executes the instructions encoded in the symbols. And it obeys those instructions faithfully, unquestioningly and without tiring. Which is why being a programmer is a bit like being Napoleon – except that, unlike Bonaparte, the programmer doesn’t have to worry about victualling the troops.

As with any other line of work, there’s a spectrum of ability in programming that runs from barely competent to genius. At the top end are people who are not just 10 or 20 times better than the average, but a million times smarter. So to call them programmers is like calling Christian Dior a dressmaker…

Read on

Apple, the App Store and monopoly

This morning’s Observer column:

Because Apple has always specialised in control freakery and doesn’t allow anybody else to use its iOS platform without prior approval, the App Store was from the beginning owned and controlled by Apple. If you wanted to create an app for the iPhone (and later the iPad), it had to be approved by Apple and sold on the App Store. And if a developer wanted to charge for the app, then Apple took a 30% cut on the price.

So, in relation to the App Store, Apple is definitely a monopolist. The question underlying the supreme court hearing was: is it an abusive monopolist? And if so, are customers of the App Store entitled to damages? Does the operation of the store give rise to consumer harm and thereby trigger redress under US antitrust law?

The case goes back to 2011…

Read on

We already know what it’s like to live under Artificial Intelligences

This morning’s Observer column:

In 1965, the mathematician I J “Jack” Good, one of Alan Turing’s code-breaking colleagues during the second world war, started to think about the implications of what he called an “ultra-intelligent” machine – ie “a machine that can surpass all the intellectual activities of any man, however clever”. If we were able to create such a machine, he mused, it would be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control”.

Note the proviso. Good’s speculation has lingered long in our collective subconscious, occasionally giving rise to outbreaks of fevered speculation. These generally focus on two questions. How long will it take us to create superintelligent machines? And what will it be like for humans to live with – or under – such machines? Will they rapidly conclude that people are a waste of space? Does the superintelligent machine pose an existential risk for humanity?

The answer to the first question can be summarised as “longer than you think”. And as for the second question, well, nobody really knows. How could they? Surely we’d need to build the machines first and then we’d find out. Actually, that’s not quite right. It just so happens that history has provided us with some useful insights into what it’s like to live with – and under – superintelligent machines.

They’re called corporations, and they’ve been around for a very long time – since about 1600, in fact…

Read on