One funeral at a time

This morning’s Observer column:

Science advances, said the great German physicist Max Planck, “one funeral at a time”. Actually, this is a paraphrase of what he really said, which was: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” But you get the drift.

I always think of Planck’s aphorism whenever moral panic breaks out over the supposedly dizzying pace of technological change…

Read on

Implications of AlphaGo’s victory

Many and varied, I guess, and there will be lots of fevered speculation. But I liked this summary by Quartz’s Gideon Lichfield:

“It’s not a human move.”

What shocked the grandmasters watching Lee Sedol, one of the world’s top Go players, lose to a computer on Thursday was not that the computer won, but how it won. A pivotal move by AlphaGo, a project of Google AI subsidiary DeepMind, was so unexpected, so at odds with 2,500 years of Go history and wisdom, that some thought it must be a glitch.

Lee’s third game against AlphaGo is this Saturday. Even if man recovers to beat the machine, what we will remember is that moment of bewilderment. Go is much more complex than chess; to play it, as DeepMind’s CEO explained, AlphaGo needs the computer equivalent of intuition. And as Sedol discovered, that intuition is not of the human kind.

A classic fear about AI is that the machines we build to serve us will destroy us instead, not because they become sentient and malicious, but because they devise unforeseen and catastrophic ways to reach the goals we set them. Worse, if they do become sentient and malicious, then—like Ava, the android in the movie Ex Machina—we may not even realize until it’s too late, because the way they think will be unrecognizable to us. What we call common sense and logic will be revealed as small-minded prejudices, baked in by aeons of biological and social evolution, which trap us in a tiny corner of the possible intellectual universe.

But there is a rosier view: that the machines, sentient or not, could help us break our intellectual bonds and see solutions—whether to Go, or to bigger problems—that we couldn’t imagine otherwise. “So beautiful,” as one grandmaster said of AlphaGo’s game. “So beautiful.”

How AlphaGo works

Very good explainer from the Economist:

AlphaGo uses some of the same technologies as those older programs. But its big idea is to combine them with new approaches that try to get the computer to develop its own intuition about how to play—to discover for itself the rules that human players understand but cannot explain. It does that using a technique called deep learning, which lets computers work out, by repeatedly applying complicated statistics, how to extract general rules from masses of noisy data.

Deep learning requires two things: plenty of processing grunt and plenty of data to learn from. DeepMind trained its machine on a sample of 30m Go positions culled from online servers where amateurs and professionals gather to play. And by having AlphaGo play against another, slightly tweaked version of itself, more training data can be generated quickly.

Those data are fed into two deep-learning algorithms. One, called the policy network, is trained to imitate human play. After watching millions of games, it has learned to extract features, principles and rules of thumb. Its job during a game is to look at the board’s state and generate a handful of promising-looking moves for the second algorithm to consider.

This algorithm, called the value network, evaluates how strong a move is. The machine plays out the suggestions of the policy network, making moves and countermoves for the thousands of possible daughter games those suggestions could give rise to. Because Go is so complex, playing all conceivable games through to the end is impossible. Instead, the value network looks at the likely state of the board several moves ahead and compares those states with examples it has seen before. The idea is to find the board state that looks, statistically speaking, most like the sorts of board states that have led to wins in the past. Together, the policy and value networks embody the Go-playing wisdom that human players accumulate over years of practice.
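For the technically curious, a toy sketch makes that division of labour concrete. Everything below is hypothetical — my stand-ins, not DeepMind's code: a pretend policy network proposes a handful of candidate moves, a pretend value network scores positions a few moves ahead, and the best-scoring suggestion wins. The real system uses deep convolutional networks and Monte Carlo tree search; this merely mirrors the Economist's description.

```python
import random

def policy_network(board):
    """Stand-in for the policy network: propose a handful of
    promising-looking moves, each with a prior probability."""
    moves = random.sample(range(361), 8)        # a 19x19 board has 361 points
    priors = [random.random() for _ in moves]
    total = sum(priors)
    return [(m, p / total) for m, p in zip(moves, priors)]

def value_network(board):
    """Stand-in for the value network: estimate how likely the
    current position is to lead to a win."""
    return random.random()

def play(board, move):
    """Toy state update; a real implementation would apply Go's rules."""
    return board + [move]

def choose_move(board, depth=3):
    """Score each policy suggestion by looking a few moves ahead and
    evaluating the resulting position with the value network, rather
    than playing every conceivable game through to the end."""
    best_move, best_score = None, -1.0
    for move, prior in policy_network(board):
        state = play(board, move)
        for _ in range(depth):                  # short look-ahead, not a full game
            replies = policy_network(state)
            reply = max(replies, key=lambda mp: mp[1])[0]
            state = play(state, reply)
        score = prior * value_network(state)
        if score > best_score:
            best_move, best_score = move, score
    return best_move

print(choose_move([]))                          # prints a point on the board
```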

As I write this, the score in the best-of-five games between AlphaGo and Lee Sedol, who is generally reckoned to be the world’s best player, stands at 2-nil in favour of AlphaGo.

LATER: AlphaGo won the third game. Game over.

Talking to machines

This morning’s Observer column:

Like many people nowadays, I do not talk on my iPhone as much as talk to it. That’s because it runs a program called Siri (Speech Interpretation and Recognition Interface) that works as an intelligent personal assistant and knowledge navigator. It’s useful, in a way. If I ask it for “weather in London today”, it’ll present an hour-by-hour weather forecast. Tell it to “phone home” and it’ll make a decent effort to find the relevant number. Ask it to “text James” and it will come back with: “What do you want to say to James?” Not exactly Socratic dialogue, but it has its uses.

Ask Siri: “What’s the meaning of life?”, however, and it loses its nerve…

Read on

The Wile E Coyote effect

Benedict Evans is at the huge annual mobile phone gabfest in Barcelona. On his way he wrote a very thoughtful blog post about the world before smartphones, and why Nokia and Blackberry didn’t see their demise coming.

Michael Mace wrote a great piece just at the point of collapse for Blackberry, looking into the problem of lagging indicators. The headline metrics tend to be the last ones to start slowing down, and that tends to happen only when it’s too late. So it can look as though you’re doing fine and that the people who said three years ago that there was a major strategic problem were wrong. You might call this the ‘Wile E Coyote effect’ – you’ve run off the cliff, but you’re not falling, and everything seems fine. But by the time you start falling, it’s too late.

That is, using metrics that point up and to the right to refute a suggestion there is a major strategic problem can be very satisfying, but unless you’re very careful, you could be winning the wrong argument. Switching metaphors, Nokia and Blackberry were skating to where the puck was going to be, and felt nice and fast and in control, while Apple and Google were melting the ice rink and switching the game to water-skiing.

I love that last metaphor.

In a way, it was another example of Clayton Christensen’s ‘innovator’s dilemma’. It’s the companies that are doing just fine that may be most endangered.

It’s a great blog post, worth reading in full. It also reminds us that mobile telephony was much more primitive in the US than it was in Europe (because of the GSM standard over here), and that Steve Jobs and co hated their ‘feature’ phones, regarding them as primitive devices. Evans sees something similar happening now with cars. It’s no accident, he thinks, that tech companies (Apple, Google) are working on cars. Techies hate cars in their current crude manifestations, whereas the folks who work in the automobile industry love them. Just as Nokia engineers once loved their hardware.

Whither Twitter?

My comment piece in today’s Observer.

If there’s one thing Wall Street and the tech industry fears, it is the idea that something potentially profitable might peak or reach some kind of equilibrium point. Endless exponential growth is what investors seek. Whereas you or I might think that a company with more than 300 million regular users that pulls in $710m in revenues is doing OK, Wall Street sees it as a potential zombie.

At the root of the dissonance is the fact that Twitter is a public company. At its flotation in November 2013 it was valued at $32bn, a figure largely based on hopes (or fantasies) that it would keep modifying its service to attract mainstream users, that its advertising business would continue to grow at a phenomenal rate and that it would eventually be bigger than Facebook.

It didn’t do all these things, for various reasons, the most important of which is that it wasn’t (and isn’t) a “social networking” service in the Facebook sense. At the heart of the distinction is the fact that, whereas it is easy to give an answer to the question “What is Facebook?”, the answers for Twitter depend on who you ask…

Read on

What happens after Moore’s Law runs out of steam?

This morning’s Observer column:

Fifty years ago, Gordon Moore, the co-founder of the chip manufacturer Intel, described a regularity he had observed that would one day make him a household name. What he had noticed was that the number of transistors that could be fitted on a given area of silicon doubled roughly every two years. And since transistor density is correlated with computing power, that meant that computing power doubled every two years. Thus was born Moore’s law.

At the beginning, few outside of the computer industry appreciated the significance of this. Humanity, it turns out, is not good at understanding the power of doubling – until it’s too late. Remember the fable about the emperor and the man who invented chess…

Read on
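The chessboard fable repays a little arithmetic. One grain on the first square, doubled on each of the 64 squares, comes to 2^64 − 1 grains — more than 18 quintillion — and fifty years at one doubling every two years is 25 doublings, a factor of about 33 million. A couple of lines of Python (mine, purely illustrative, not from the column) make the point:

```python
# One grain on the first square, doubled on each of the 64 squares.
grains = sum(2 ** square for square in range(64))
print(f"Grains on the chessboard: {grains:,}")   # 18,446,744,073,709,551,615

# Fifty years of Moore's law at one doubling every two years.
doublings = 50 // 2
print(f"Growth in transistor density: {2 ** doublings:,}x")  # 33,554,432x
```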

No matter what you get up to in bed, there’s an app for it. Apparently.

This morning’s Observer column about the obsession with ‘datafying’ our bodies.

There are two kinds of people in the world: those who are obsessed with the datafication of their bodies and those who are not. I belong to the latter category: the only thing that interests me about my heart is that it is still beating. And when it isn’t I shall be past caring. But if the current craze for wearable devices such as fitness trackers is anything to go by, I may soon find myself a member of a despised minority, rather like cigarette smokers, whisky drinkers and followers of David Icke…

Read on

We’re not just in a tech bubble. Silicon Valley is a Reality Distortion Field

For my money, danah boyd is one of the smartest and most perceptive people around. This year she went to Davos, and wrote a stunning essay about what she saw there, and the implications thereof. Well worth reading in full, but here’s a sample:

Walking down the promenade through the center of Davos, it was hard not to notice the role of Silicon Valley in shaping the conversation of the powerful and elite. Not only was everyone attached to their iPhones and Androids, but companies like Salesforce and Palantir and Facebook took over storefronts and invited attendees in for coffee and discussions about Syrian migrants, while camouflaged snipers protected the scene from the roofs of nearby hotels. As new tech held fabulous parties in the newest venues, financial institutions, long the stalwarts of Davos, took over the same staid venues that they always have.

Yet, what I struggled with the most wasn’t the sheer excess of Silicon Valley in showcasing its value but the narrative that underpinned it all. I’m quite used to entrepreneurs talking hype in tech venues, but what happened at Davos was beyond the typical hype, in part because most of the non-tech people couldn’t do a reality check. They could only respond with fear. As a result, unrealistic conversations about artificial intelligence led many non-technical attendees to believe that the biggest threat to national security is humanoid killer robots, or that AI that can do everything humans can is just around the corner, threatening all but the most elite technical jobs. In other words, as I talked to attendees, I kept bumping into a 1970s science fiction narrative.

Yep. The problem is not just that we’re in a tech bubble. It’s that we’re in a Reality Distortion Field which leads those who dominate the tech industry to think that they are the centre of the universe, that Silicon Valley is the Florence of Renaissance 2.0. And — worse still — it’s a RDF that leads powerful and influential non-tech people to believe that maybe they’re right.

Like I said, danah’s piece is unmissable — and wise. Make space for it in your day.