Why, sooner or later, societies are going to have to rein in the tech giants

My OpEd piece in yesterday’s Observer:

Spool forward to the tragic case of Molly Russell, the 14-year-old who killed herself after exploring her depression on Instagram. When her family looked into her account, they found sombre material about depression and suicide. Her father said that he believed the Facebook-owned platform had “helped kill my daughter”. This prompted Matt Hancock, the health secretary, to warn social media platforms to “purge” material relating to self-harm and suicide or face legislation that would compel them to do so. In response, Instagram and Pinterest (another social media outfit) issued the standard bromides about how they were embarking on a “full review” of their policies etc.

So is Molly’s case a crisis or a scandal? You know the answer. Nothing much will change because the business models of the platforms preclude it. Their commercial imperatives are remorselessly to increase both the number of their users and the intensity of those users’ “engagement” with the platforms. That’s what keeps the monetisable data flowing. Tragedies such as Molly Russell’s suicide are regrettable (and of course have PR downsides) but are really just the cost of running such a profitable business.

Asking these companies to change their business model, therefore, is akin to “asking a giraffe to shorten its neck”, as Shoshana Zuboff puts it in her fiery new book, The Age of Surveillance Capitalism…

Read on

How the technical is political

This morning’s Observer column:

The only computer game I’ve ever played involved no killing, zombies, heavily armed monsters or quests for hidden keys. It was called SimCity and involved developing a virtual city from a patch of undeveloped land. The game enabled you to determine where to place development zones, infrastructure (like roads and power plants), landmarks and public services such as schools, parks, hospitals and fire stations. You could decide the tax rate, budget and social policy for your city – populated by Sims (for “simulated persons”, I guess) who had to live and work in the three zones you created for them: residential had houses and apartment buildings; commercial had shops and offices; and industrial had factories, warehouses, laboratories and (oddly) farms.

SimCity was the brainchild of Will Wright, a software developer who had first made a splash with a shoot-’em-up (well, bomb-’em-flat) video game in which the player controls a helicopter dropping bombs on islands. But he became more fascinated with the islands than with the weaponry and started to wonder what a virtual city would be like – and how it would work. What he came up with was magical for its time. It gave the player a feeling of omnipotence: you decided where Sims should live, whether their electricity should come from nukes, where schools and offices should be located, how much tax they paid…

What you discovered early on, though, was that your decisions had consequences…
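That feedback loop – policy lever in, consequence out – is the heart of what made SimCity interesting. A minimal sketch of the idea (my own toy model, not Wright's actual simulation; every threshold and formula below is invented for illustration):

```python
# Toy sketch of a SimCity-style feedback loop: one policy lever, the
# tax rate, feeding back into population growth. Too little tax and
# services are starved; too much and residents are squeezed out.

def simulate(tax_rate, years=20, population=1000.0):
    """Run a crude city loop: taxes fund services, and services
    minus the tax burden drives migration in or out."""
    for _ in range(years):
        services = min(1.0, 2 * tax_rate)   # service quality saturates at 1.0
        happiness = services - tax_rate      # benefit minus burden
        growth = 0.05 * (happiness - 0.25)   # net migration rate
        population = max(0.0, population * (1 + growth))
    return round(population)
```

Running it shows the consequence structure the column describes: a middling tax rate grows the city, while starving services (a very low rate) or overtaxing residents (a very high one) both shrink it.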

Read on

The Unicorn bubble and its aftermath

This morning’s Observer column:

Some unicorns have astonishing valuations, which are based on the price that new investors are willing to pay for a share. Uber, for example, currently has a valuation in the region of $80bn (£61bn) and there is feverish speculation that when it eventually goes for an initial public offering (IPO) it could be valued at $120bn (£91bn). This for a company that has never made anywhere near a profit and currently loses money at an eye-watering rate. If this reminds you of the dotcom boom of the late 1990s, then join the club.

There is, however, one significant difference. The dotcom boom was based on clueless and irrational exuberance about the commercial potential of the internet, so when it became clear that startups such as Boo.com and Pets.com were never likely to make a profit, the bubble burst as investors tried to get out. But investors in Uber probably don’t care if it never makes a profit, so long as it gets to an IPO that enables them to cash out with a big payoff. If Uber did go public at a valuation of $120bn, for example, the Saudi royal family alone would have a $16bn (£12bn) payday from their investment.
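The arithmetic behind those headline numbers is worth spelling out. A quick sketch (the 13.3% stake fraction below is simply back-derived from the column's $16bn-out-of-$120bn figures, not a reported number):

```python
# Illustrative arithmetic only: how a "paper" valuation and an IPO
# payday are computed.

def implied_valuation(price_per_share, shares_outstanding):
    # A unicorn's headline valuation: the latest share price applied
    # to every share, including those bought far more cheaply.
    return price_per_share * shares_outstanding

def ipo_payoff(valuation, stake_fraction):
    # What an early investor's stake is worth if the IPO prices the
    # whole company at `valuation` -- whether or not it ever profits.
    return valuation * stake_fraction

# $120bn IPO valuation, ~13.3% stake -> roughly $16bn
print(round(ipo_payoff(120e9, 16 / 120) / 1e9))  # 16
```

The point the column makes drops out of the second function: the payoff depends only on the IPO valuation and the stake, not on whether the company has ever earned a cent.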

So what’s going on?

Read on

WhatsApp tries damage limitation

This morning’s Observer column:

In the last two years, around two dozen people in India have been killed by lynch mobs inflamed by rumours on WhatsApp, the encrypted messaging service owned by Facebook. WhatsApp has also been fingered for its role in other hateful or unsavoury episodes in Brazil and Pakistan. In each case, the accusation is essentially the same: disinformation and lies, often of an inflammatory kind, are effortlessly disseminated by WhatsApp and obviously believed by some of the recipients, who are thereby encouraged to do terrible things.

In terms of software architecture and interface design, WhatsApp is a lovely system, which is why it is a favourite of families, not to mention Westminster plotters, who are allegedly addicted to it. Its USP is that messages on the platform are encrypted end to end, which means that not even Facebook, the app’s owner, can read them. This is either a feature or a bug, depending on your point of view. If you’re a user, then it’s a feature because it guarantees that your deathless prose is impenetrable to snoopers; if you’re a spook or a cop, then it’s definitely a bug, because you can’t read the damned messages.
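Why can't Facebook read the messages its own app carries? The essential trick is that the two endpoints agree a key without ever transmitting it. A toy illustration using a Diffie-Hellman exchange with tiny numbers (WhatsApp actually uses the Signal protocol with elliptic-curve keys and much more besides; the small integers here are hopelessly insecure and purely for show):

```python
# Toy Diffie-Hellman key exchange: the relaying server sees only the
# public halves, yet both endpoints compute the same secret key.

P = 23  # a small public prime (toy-sized)
G = 5   # a public generator

def public_key(secret):
    # Each side publishes G**secret mod P; the secret never leaves.
    return pow(G, secret, P)

def shared_key(their_public, my_secret):
    # Both sides arrive at the same value without ever sending it.
    return pow(their_public, my_secret, P)

alice_secret, bob_secret = 6, 15
A, B = public_key(alice_secret), public_key(bob_secret)

# A server relaying A and B cannot recover the shared key, which is
# what then encrypts the messages end to end.
assert shared_key(B, alice_secret) == shared_key(A, bob_secret)
```

Which is exactly why the "feature or bug" question is so sharp: the mathematics locks out the middleman by design, and the middleman here includes Facebook itself.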

A few years ago, WhatsApp added a key new feature – an easy way to forward a message to multiple chat groups at once…

Read on

Shoshana Zuboff’s new book

Today’s Observer carries a five-page feature about Shoshana Zuboff’s The Age of Surveillance Capitalism consisting of an intro by me followed by Q&A between me and the author.

LATER Nick Carr has a perceptive review of the book in the LA Review of Books. John Thornhill also had a good long review in last Saturday’s Financial Times, sadly behind a paywall.

Peak Apple? No: just peak smartphone

This morning’s Observer column:

On 2 January, in a letter to investors, Tim Cook revealed that he expected revenues for the final quarter of 2018 to be lower than originally forecast.

Given that most of Apple’s revenues come from its iPhone, this sent the tech commentariat into overdrive – to the point where one level-headed observer had to point out that the sky hadn’t fallen: all that had happened was that Apple shares were down a bit. And all this despite the fact that the other bits of the company’s businesses (especially the watch, AirPods, services and its retail arm) were continuing to do nicely. Calmer analyses showed that the expected fall in revenues could be accounted for by two factors: the slowdown in the Chinese economy (together with some significant innovations by the Chinese internet giant WeChat); and the fact that consumers seem to be hanging on to their iPhones for longer, thereby slowing the steep upgrade path that had propelled Apple to its trillion-dollar valuation.

What was most striking, though, was that the slowdown in iPhone sales should have taken journalists and analysts by surprise…

Read on

Media credulity and AI hype

This morning’s Observer column:

Artificial intelligence (AI) is a term that is now widely used (and abused), loosely defined and mostly misunderstood. Much the same might be said of, say, quantum physics. But there is one important difference, for whereas quantum phenomena are not likely to have much of a direct impact on the lives of most people, one particular manifestation of AI – machine-learning – is already having a measurable impact on most of us.

The tech giants that own and control the technology have plans to exponentially increase that impact and to that end have crafted a distinctive narrative. Crudely summarised, it goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.”

Critical analysis of this narrative suggests that the formula for creating it involves mixing one part fact with three parts self-serving corporate cant and one part tech-fantasy emitted by geeks who regularly inhale their own exhaust…

Read on

Why Facebook isn’t viable in its current form

This morning’s Observer column:

Way back in the 1950s, a pioneering British cybernetician, W Ross Ashby, proposed a fundamental law of dynamic systems. In his book An Introduction to Cybernetics, he formulated his law of requisite variety, which defines “the minimum number of states necessary for a controller to control a system of a given number of states”. In plain English, it boils down to this: for a system to be viable, it has to be able to absorb or cope with the complexity of its environment. And there are basically only two ways of achieving viability in those terms: either the system manages to control (or reduce) the variety of its environment, or it has to increase its internal capacity (its “variety”) to match what is being thrown at it from the environment.

Sounds abstruse, I know, but it has a contemporary resonance. Specifically, it provides a way of understanding some of the current internal turmoil in Facebook as it grapples with the problem of keeping unacceptable, hateful or psychotic content off its platform…
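One toy way to make Ashby's inequality concrete (my own illustration, not an example from the book): with D distinct kinds of disturbance and only R possible regulatory responses, the regulator can compress the outcomes by at most a factor of R, so at least D/R distinct outcomes get through.

```python
# Toy reading of the law of requisite variety: outcome variety can't
# be squeezed below disturbance variety divided by regulator variety.

import math

def min_outcome_variety(disturbances, responses):
    # Each response can at best fold one disturbance class into an
    # already-handled outcome, so outcomes >= ceil(D / R).
    return math.ceil(disturbances / responses)

# A moderation regime with 10 fixed playbooks facing 10,000 kinds of
# unacceptable content still leaves at least 1,000 distinct outcomes.
print(min_outcome_variety(10_000, 10))  # 1000
```

Seen this way, Facebook's two options are exactly Ashby's: shrink the environment's variety (restrict what can be posted) or grow its internal variety (vastly more moderators and machinery) until R approaches D.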

Read on

See also Tyler Cowen’s ‘trilemma’ piece

What the Internet tells us about human nature

This morning’s Observer column:

When the internet first entered public consciousness in the early 1990s, a prominent media entrepreneur described it as a “sit up” rather than a “lean back” medium. What she meant was that it was quite different from TV, which encouraged passive consumption by a species of human known universally as the couch potato. The internet, some of us fondly imagined, would be different; it would encourage/enable people to become creative generators of their own content.

Spool forward a couple of decades and we are sadder and wiser. On any given weekday evening in many parts of the world, more than half of the data traffic on the internet is accounted for by video streaming to couch potatoes worldwide. (Except that many of them may not be sitting on couches, but watching on their smartphones in a variety of locations and postures.) The internet has turned into billion-channel TV.

That explains, for example, why Netflix came from nowhere to be such a dominant company. But although it’s a huge player in the video world, Netflix may not be the biggest. That role falls to something that is rarely mentioned in polite company, namely pornography…

Read on

An existential threat to liberal democracy?

This morning’s Observer column:

At last, we’re getting somewhere. Two years after Brexit and the election of Donald Trump, we’re finally beginning to understand the nature and extent of Russian interference in the democratic processes of two western democracies. The headlines are: the interference was much greater than what was belatedly discovered and/or admitted by the social media companies; it was more imaginative, ingenious and effective than we had previously supposed; and it’s still going on.

We know this because the US Senate select committee on intelligence commissioned major investigations by two independent teams. One involved New Knowledge, a US cybersecurity firm, plus researchers from Columbia University in New York and a mysterious outfit called Canfield Research. The other was a team comprising the Oxford Internet Institute’s “Computational Propaganda” project and Graphika, a company specialising in analysing social media.

Last week the committee released both reports. They make for sobering reading…

Read on