Peak Apple? No: just peak smartphone

This morning’s Observer column:

On 2 January, in a letter to investors, Tim Cook revealed that he expected revenues for the final quarter of 2018 to be lower than originally forecast.

Given that most of Apple’s revenues come from its iPhone, this sent the tech commentariat into overdrive – to the point where one level-headed observer had to point out that the sky hadn’t fallen: all that had happened was that Apple shares were down a bit. And all this despite the fact that the other bits of the company’s businesses (especially the watch, AirPods, services and its retail arm) were continuing to do nicely. Calmer analyses showed that the expected fall in revenues could be accounted for by two factors: the slowdown in the Chinese economy (together with some significant innovations by WeChat, the platform owned by the Chinese internet giant Tencent); and the fact that consumers seem to be hanging on to their iPhones for longer, thereby slowing the steep upgrade path that had propelled Apple to its trillion-dollar valuation.

What was most striking, though, was that the slowdown in iPhone sales should have taken journalists and analysts by surprise…

Read on

Media credulity and AI hype

This morning’s Observer column:

Artificial intelligence (AI) is a term that is now widely used (and abused), loosely defined and mostly misunderstood. Much the same might be said of, say, quantum physics. But there is one important difference, for whereas quantum phenomena are not likely to have much of a direct impact on the lives of most people, one particular manifestation of AI – machine-learning – is already having a measurable impact on most of us.

The tech giants that own and control the technology have plans to exponentially increase that impact and to that end have crafted a distinctive narrative. Crudely summarised, it goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.”

Critical analysis of this narrative suggests that the formula for creating it involves mixing one part fact with three parts self-serving corporate cant and one part tech-fantasy emitted by geeks who regularly inhale their own exhaust…

Read on

The real significance of the Apple slide

Apart from the fact that the Chinese economy seems to be faltering and that there’s collateral damage from Trump’s ‘trade war’, what the slide signals is that the smartphone boom triggered by Apple’s iPhone is ending: we’re reaching a plateau, and apparently there’s no New New Thing in sight. At any rate, that’s Kara Swisher’s take on it:

The last big innovation explosion — the proliferation of the smartphone — is clearly ending. There is no question that Apple was the center of that, with its app-centric, photo-forward and feature-laden phone that gave everyone the first platform for what was to create so many products and so much wealth. It was the debut of the iPhone in 2007 that spurred what some in tech call a “Cambrian explosion,” a reference to the era when the first complex animals appeared. There would be no Uber and Lyft without the iPhone (and later the Android version), no Tinder, no Spotify.

Now all of tech is seeking the next major platform and area of growth. Will it be virtual and augmented reality, or perhaps self-driving cars? Artificial intelligence, robotics, cryptocurrency or digital health? We are stumbling in the dark.

Yep. Situation normal, in other words.

How companies are addressing machine learning

From an O’Reilly newsletter:

In a recent O’Reilly survey, we found that the skills gap remains one of the key challenges holding back the adoption of machine learning. The demand for data skills (“the sexiest job of the 21st century”) hasn’t dissipated—LinkedIn recently found that demand for data scientists in the US is “off the charts,” and our survey indicated that the demand for data scientists and data engineers is strong not just in the US but globally.

With the average shelf life of a skill today at less than five years and the cost to replace an employee estimated at between six and nine months of the position’s salary, there’s increasing pressure on tech leaders to retain and upskill rather than replace their employees in order to keep data projects (such as machine learning implementations) on track. We’re also seeing more training programs aimed at executives and decision makers, who need to understand how these new ML technologies can impact their current operations and products.

Beyond investments in narrowing the skills gap, companies are beginning to put processes in place for their data science projects, for example creating analytics centers of excellence that centralize capabilities and share best practices. Some companies are also actively maintaining a portfolio of use cases and opportunities for ML.

Note the average shelf life of a skill and then ponder why the UK government is not boosting the Open University.

What the Internet tells us about human nature

This morning’s Observer column:

When the internet first entered public consciousness in the early 1990s, a prominent media entrepreneur described it as a “sit up” rather than a “lean back” medium. What she meant was that it was quite different from TV, which encouraged passive consumption by a species of human known universally as the couch potato. The internet, some of us fondly imagined, would be different; it would encourage/enable people to become creative generators of their own content.

Spool forward a couple of decades and we are sadder and wiser. On any given weekday evening in many parts of the world, more than half of the data traffic on the internet is accounted for by video streaming to couch potatoes worldwide. (Except that many of them may not be sitting on couches, but watching on their smartphones in a variety of locations and postures.) The internet has turned into billion-channel TV.

That explains, for example, why Netflix came from nowhere to be such a dominant company. But although it’s a huge player in the video world, Netflix may not be the biggest. That role falls to something that is rarely mentioned in polite company, namely pornography…

Read on

Reflections on AlphaZero

Steven Strogatz in the New York Times:

All of that has changed with the rise of machine learning. By playing against itself and updating its neural network as it learned from experience, AlphaZero discovered the principles of chess on its own and quickly became the best player ever. Not only could it have easily defeated all the strongest human masters — it didn’t even bother to try — it crushed Stockfish, the reigning computer world champion of chess. In a hundred-game match against a truly formidable engine, AlphaZero scored twenty-eight wins and seventy-two draws. It didn’t lose a single game.

Most unnerving was that AlphaZero seemed to express insight. It played like no computer ever has, intuitively and beautifully, with a romantic, attacking style. It played gambits and took risks. In some games it paralyzed Stockfish and toyed with it. While conducting its attack in Game 10, AlphaZero retreated its queen back into the corner of the board on its own side, far from Stockfish’s king, not normally where an attacking queen should be placed.

Yet this peculiar retreat was venomous: No matter how Stockfish replied, it was doomed. It was almost as if AlphaZero was waiting for Stockfish to realize, after billions of brutish calculations, how hopeless its position truly was, so that the beast could relax and expire peacefully, like a vanquished bull before a matador. Grandmasters had never seen anything like it. AlphaZero had the finesse of a virtuoso and the power of a machine. It was humankind’s first glimpse of an awesome new kind of intelligence.

Hmmm… It’s important to remember that board games are a very narrow domain, and in a way it’s not surprising that machines are good at playing them. But it’s undeniable that AlphaZero is remarkable.
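
The core loop Strogatz describes (play against yourself, record who won, update your evaluator from the outcomes, repeat) is simple enough to sketch. The toy below is emphatically not DeepMind’s method: there is no neural network and no tree search, just a table of position values learned from self-play on a trivial Nim game in which players take one, two or three counters and whoever takes the last one wins. Every name and number in it is illustrative.

import random
from collections import defaultdict

N_COUNTERS = 21          # starting pile size (toy setting)
EPSILON = 0.1            # exploration rate during self-play
LEARNING_RATE = 0.05
GAMES = 20_000

# value[s] ~ estimated probability that the player to move in state s wins.
# This table stands in for AlphaZero's learned evaluator.
value = defaultdict(lambda: 0.5)

def legal_moves(counters):
    return [m for m in (1, 2, 3) if m <= counters]

def choose_move(counters):
    """Epsilon-greedy: mostly pick the move leaving the opponent the
    position with the lowest estimated winning chance."""
    moves = legal_moves(counters)
    if random.random() < EPSILON:
        return random.choice(moves)
    return min(moves, key=lambda m: value[counters - m])

def self_play_game():
    """Play one game against itself; return the visited (state, player)
    pairs and the index (0 or 1) of the winner."""
    counters, player, history = N_COUNTERS, 0, []
    while counters > 0:
        history.append((counters, player))
        counters -= choose_move(counters)
        player = 1 - player
    return history, 1 - player   # the player who just moved took the last counter

for _ in range(GAMES):
    history, winner = self_play_game()
    for state, player in history:
        target = 1.0 if player == winner else 0.0
        value[state] += LEARNING_RATE * (target - value[state])

# After training, piles that are multiples of 4 should look losing for the
# player to move, which is the known theory of this Nim variant.
for s in range(1, N_COUNTERS + 1):
    print(s, round(value[s], 2))

Swap the value table for a deep network, the one-move lookahead for Monte Carlo tree search and the toy game for chess or Go, and you have the general shape of the AlphaZero family; the scale is vastly different, but the self-play principle is the same.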

The dream of augmentation

This morning’s Observer column:

Engelbart was a visionary who believed that the most effective way to solve problems was to augment human abilities and develop ways of building collective intelligence. Computers, in his view, were “power steering for the mind” – tools for augmenting human capabilities – and this idea of augmentation has been the backbone of the optimistic narrative of the tech industry ever since.

The dream has become a bit tarnished in the last few years, as we’ve learned how data vampires use the technology to exploit us at the same time as they provide free tools for our supposed “augmentation”…

Read on

Conspiracy theories, the Internet and democracy

My OpEd piece from yesterday’s Observer:

Conspiracy theories have generally had a bad press. They conjure up images of eccentrics in tinfoil hats who believe that aliens have landed and the government is hushing up the news. And maybe it’s statistically true that most conspiracy theories belong on the harmless fringe of the credibility spectrum.

On the other hand, the historical record contains some conspiracy theories that have had profound effects. Take the “stab in the back” myth, widely believed in Germany after 1918, which held that the German army did not lose the First World War on the battlefield but was betrayed by civilians on the home front. When the Nazis came to power in 1933 the theory was incorporated in their revisionist narrative of the 1920s: the Weimar Republic was the creation of the “November criminals” who stabbed the nation in the back to seize power while betraying it. So a conspiracy theory became the inspiration for the political changes that led to a second global conflict.

More recent examples relate to the alleged dangers of the MMR jab and other vaccinations and the various conspiracy theories fuelling denial of climate change.

For the last five years, my academic colleagues – historian Richard Evans and politics professor David Runciman – and I have been leading a team of researchers studying the history, nature and significance of conspiracy theories with a particular emphasis on their implications for democracy…

Read on

We already know what it’s like to live under Artificial Intelligences

This morning’s Observer column:

In 1965, the mathematician I J “Jack” Good, one of Alan Turing’s code-breaking colleagues during the second world war, started to think about the implications of what he called an “ultra-intelligent” machine – ie “a machine that can surpass all the intellectual activities of any man, however clever”. If we were able to create such a machine, he mused, it would be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control”.

Note the proviso. Good’s speculation has lingered long in our collective subconscious, occasionally giving rise to outbreaks of fevered speculation. These generally focus on two questions. How long will it take us to create superintelligent machines? And what will it be like for humans to live with – or under – such machines? Will they rapidly conclude that people are a waste of space? Does the superintelligent machine pose an existential risk for humanity?

The answer to the first question can be summarised as “longer than you think”. And as for the second question, well, nobody really knows. How could they? Surely we’d need to build the machines first and then we’d find out. Actually, that’s not quite right. It just so happens that history has provided us with some useful insights into what it’s like to live with – and under – superintelligent machines.

They’re called corporations, and they’ve been around for a very long time – since about 1600, in fact…

Read on