The Unicorn bubble and its aftermath

This morning’s Observer column:

Some unicorns have astonishing valuations, which are based on the price that new investors are willing to pay for a share. Uber, for example, currently has a valuation in the region of $80bn (£61bn) and there is feverish speculation that when it eventually goes for an initial public offering (IPO) it could be valued at $120bn (£91bn). This for a company that has never made anywhere near a profit and currently loses money at an eye-watering rate. If this reminds you of the dotcom boom of the late 1990s, then join the club.

There is, however, one significant difference. The dotcom boom was based on clueless and irrational exuberance about the commercial potential of the internet, so when it became clear that startups such as Boo.com and Pets.com were never likely to make a profit, the bubble burst as investors tried to get out. But investors in Uber probably don’t care if it never makes a profit, so long as it gets to an IPO that enables them to cash out with a big payoff. If Uber did go public at a valuation of $120bn, for example, the Saudi royal family alone would have a $16bn (£12bn) payday from their investment.

So what’s going on?

Read on

WhatsApp tries damage limitation

This morning’s Observer column:

In the last two years, around two dozen people in India have been killed by lynch mobs inflamed by rumours on WhatsApp, the encrypted messaging service owned by Facebook. WhatsApp has also been fingered for its role in other hateful or unsavoury episodes in Brazil and Pakistan. In each case, the accusation is essentially the same: disinformation and lies, often of an inflammatory kind, are effortlessly disseminated by WhatsApp and obviously believed by some of the recipients, who are thereby encouraged to do terrible things.

In terms of software architecture and interface design, WhatsApp is a lovely system, which is why it is a favourite of families, not to mention Westminster plotters, who are allegedly addicted to it. Its USP is that messages on the platform are encrypted end to end, which means that not even Facebook, the app’s owner, can read them. This is either a feature or a bug, depending on your point of view. If you’re a user, then it’s a feature because it guarantees that your deathless prose is impenetrable to snoopers; if you’re a spook or a cop, then it’s definitely a bug, because you can’t read the damned messages.

A few years ago, WhatsApp added a key new feature – an easy way to forward a message to multiple chat groups at once…

Read on

Shoshana Zuboff’s new book

Today’s Observer carries a five-page feature about Shoshana Zuboff’s The Age of Surveillance Capitalism consisting of an intro by me followed by a Q&A between me and the author.

LATER Nick Carr has a perceptive review of the book in the LA Review of Books. John Thornhill also had a good long review in last Saturday’s Financial Times, sadly behind a paywall.

Peak Apple? No: just peak smartphone

This morning’s Observer column:

On 2 January, in a letter to investors, Tim Cook revealed that he expected revenues for the final quarter of 2018 to be lower than originally forecast.

Given that most of Apple’s revenues come from its iPhone, this sent the tech commentariat into overdrive – to the point where one level-headed observer had to point out that the sky hadn’t fallen: all that had happened was that Apple shares were down a bit. And all this despite the fact that the other bits of the company’s businesses (especially the watch, AirPods, services and its retail arm) were continuing to do nicely. Calmer analyses showed that the expected fall in revenues could be accounted for by two factors: the slowdown in the Chinese economy (together with some significant innovations by the Chinese internet giant WeChat); and the fact that consumers seem to be hanging on to their iPhones for longer, thereby slowing the steep upgrade path that had propelled Apple to its trillion-dollar valuation.

What was most striking, though, was that the slowdown in iPhone sales should have taken journalists and analysts by surprise…

Read on

Media credulity and AI hype

This morning’s Observer column:

Artificial intelligence (AI) is a term that is now widely used (and abused), loosely defined and mostly misunderstood. Much the same might be said of, say, quantum physics. But there is one important difference, for whereas quantum phenomena are not likely to have much of a direct impact on the lives of most people, one particular manifestation of AI – machine-learning – is already having a measurable impact on most of us.

The tech giants that own and control the technology have plans to exponentially increase that impact and to that end have crafted a distinctive narrative. Crudely summarised, it goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.”

Critical analysis of this narrative suggests that the formula for creating it involves mixing one part fact with three parts self-serving corporate cant and one part tech-fantasy emitted by geeks who regularly inhale their own exhaust…

Read on

Why Facebook isn’t viable in its current form

This morning’s Observer column:

Way back in the 1950s, a pioneering British cybernetician, W Ross Ashby, proposed a fundamental law of dynamic systems. In his book An Introduction to Cybernetics, he formulated his law of requisite variety, which defines “the minimum number of states necessary for a controller to control a system of a given number of states”. In plain English, it boils down to this: for a system to be viable, it has to be able to absorb or cope with the complexity of its environment. And there are basically only two ways of achieving viability in those terms: either the system manages to control (or reduce) the variety of its environment, or it has to increase its internal capacity (its “variety”) to match what is being thrown at it from the environment.

Sounds abstruse, I know, but it has a contemporary resonance. Specifically, it provides a way of understanding some of the current internal turmoil in Facebook as it grapples with the problem of keeping unacceptable, hateful or psychotic content off its platform…
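Ashby’s law can also be put in rough quantitative terms: if the environment can throw V_D distinct disturbances at a system, and only V_O outcome states count as acceptable, then the controller needs at least V_D / V_O distinct responses. A minimal sketch of that arithmetic, with the moderation numbers purely illustrative and not taken from the column:

```python
from math import ceil

def requisite_variety(disturbance_states: int, acceptable_outcomes: int) -> int:
    """Toy reading of Ashby's law: the minimum number of distinct
    controller responses needed so every disturbance can be steered
    into an acceptable outcome (only variety absorbs variety)."""
    return ceil(disturbance_states / acceptable_outcomes)

# Hypothetical numbers: a moderation system faces a million distinct
# kinds of problematic post but has only two acceptable outcomes
# ("remove" or "keep") -- so it needs half a million distinct judgements.
print(requisite_variety(1_000_000, 2))  # 500000
```

On this reading, Facebook’s bind is that the environment’s variety (what users post) dwarfs any internal rule set it can build, so it must either shrink the environment’s variety or vastly expand its own.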

Read on

See also Tyler Cowen’s ‘trilemma’ piece

What the Internet tells us about human nature

This morning’s Observer column:

When the internet first entered public consciousness in the early 1990s, a prominent media entrepreneur described it as a “sit up” rather than a “lean back” medium. What she meant was that it was quite different from TV, which encouraged passive consumption by a species of human known universally as the couch potato. The internet, some of us fondly imagined, would be different; it would encourage/enable people to become creative generators of their own content.

Spool forward a couple of decades and we are sadder and wiser. On any given weekday evening in many parts of the world, more than half of the data traffic on the internet is accounted for by video streaming to couch potatoes worldwide. (Except that many of them may not be sitting on couches, but watching on their smartphones in a variety of locations and postures.) The internet has turned into billion-channel TV.

That explains, for example, why Netflix came from nowhere to be such a dominant company. But although it’s a huge player in the video world, Netflix may not be the biggest. That role falls to something that is rarely mentioned in polite company, namely pornography…

Read on

An existential threat to liberal democracy?

This morning’s Observer column:

At last, we’re getting somewhere. Two years after Brexit and the election of Donald Trump, we’re finally beginning to understand the nature and extent of Russian interference in the democratic processes of two western democracies. The headlines are: the interference was much greater than what was belatedly discovered and/or admitted by the social media companies; it was more imaginative, ingenious and effective than we had previously supposed; and it’s still going on.

We know this because the US Senate select committee on intelligence commissioned major investigations by two independent teams. One involved New Knowledge, a US cybersecurity firm, plus researchers from Columbia University in New York and a mysterious outfit called Canfield Research. The other was a team comprising the Oxford Internet Institute’s “Computational Propaganda” project and Graphika, a company specialising in analysing social media.

Last week the committee released both reports. They make for sobering reading…

Read on

The dream of augmentation

This morning’s Observer column:

Douglas Engelbart was a visionary who believed that the most effective way to solve problems was to augment human abilities and develop ways of building collective intelligence. Computers, in his view, were “power steering for the mind” – tools for augmenting human capabilities – and this idea of augmentation has been the backbone of the optimistic narrative of the tech industry ever since.

The dream has become a bit tarnished in the last few years, as we’ve learned how data vampires use the technology to exploit us at the same time as they provide free tools for our supposed “augmentation”…

Read on

Tech companies’ greatest asset

This morning’s Observer column:

Arthur C Clarke’s adage that “any sufficiently advanced technology is indistinguishable from magic” may or may not be true, but what is definitely true is that computer software has magical properties. It’s pure “thought stuff”: a programmer has an idea; they encapsulate it as a string of symbols that are then fed into an inanimate machine. And then the machine executes the instructions encoded in the symbols. And it obeys those instructions faithfully, unquestioningly and without tiring. Which is why being a programmer is a bit like being Napoleon – except that, unlike Bonaparte, the programmer doesn’t have to worry about victualling the troops.

As with any other line of work, there’s a spectrum of ability in programming that runs from barely competent to genius. At the top end are people who are not just 10 or 20 times better than the average, but a million times smarter. So to call them programmers is like calling Christian Dior a dressmaker…

Read on