Resources, resources, resources

Here’s an instructive story:

Just three months after launching Facebook Live to all users, Mark Zuckerberg decided to go big: at a February meeting he concluded that the company should make Live a top priority. A BuzzFeed story on Live tells us what happened next, quoting Facebook Media’s product lead Fidji Simo:

“The original Live team was composed of only a dozen or so people. But the vision laid out for the product at that February meeting would require more than 100 engineers to build. ‘The meeting was on a Thursday, and on Monday, [Facebook Media engineering lead Maher] Saba and I were standing in front of 150 engineers,’ said Simo.”

From 12 engineers to 150 in less than a week. That’s the new pace of the media business.

Now here’s another instructive question: how many engineers are working on the iPhone camera?

50?

150?

Nope. At the moment, there are 800 engineers working just on the camera.

Which leads me to wonder: how many people work in the R&D divisions of Nikon and Canon?

See what conventional companies are up against?

[Chart: the most popular cameras among Flickr users]

At present, the most popular cameras among Flickr users are all iPhone models. By a mile.

Improbable? I don’t think so

There’s an interesting new company in London with the improbable name of, er, Improbable. It’s funded by (inter alia) Andreessen Horowitz and it has built a distributed operating system (called SpatialOS) that makes it possible to simulate the behaviour of huge systems realistically.

This has obvious applications in areas like gaming and VR, but the video shows a really intriguing example which has little to do with either. According to the company’s blog,

a team of two came in from the British government1 to explore our technology. Their goal was to build a realistic simulation of the internet so that they could take a look at its “structure”, or in other words, the vast number of connections between computers and networks that make up the World Wide Web. With the internet under attack from a variety of sources, it’s critical they can see its weak spots, to figure out how to protect it.

According to the blog post, Improbable engineers and the visiting spooks were able to use SpatialOS to build a simulation model of the entire Internet backbone in just three days.

“Not only did we demonstrate a dynamic model of BGP routing at scale, we also produced an interactive visualisation where both ASs and the connections between them can be created or destroyed, observing dynamic routing, cascade failures and new route propagation across the network.”

This could be really useful, because computer simulation is one of the few tools we have for trying to understand the behaviour of very large complex systems. But most of the simulation tools we currently have run out of capacity when the systems are as large and complex as the Net. We need something more powerful. Maybe SpatialOS is it.
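I have no idea what the actual model looked like inside, but a toy sketch gives a feel for the kind of thing being simulated: autonomous systems as nodes in a graph, links between them, and routes that have to be recomputed (or that simply disappear) when something is destroyed. Everything below is invented for illustration, in plain Python; it has nothing to do with SpatialOS itself.

```python
# Toy illustration only: a handful of made-up "autonomous systems" and links,
# nothing to do with SpatialOS internals or the real Internet topology.
from collections import deque

# Undirected links between fictional ASs (real BGP involves policies,
# preferences, announcements and withdrawals; this is just connectivity).
links = {
    ("AS1", "AS2"), ("AS2", "AS3"), ("AS3", "AS4"),
    ("AS1", "AS5"), ("AS5", "AS4"), ("AS2", "AS5"),
}

def adjacency(link_set):
    adj = {}
    for a, b in link_set:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

def route(adj, src, dst):
    """Shortest AS path from src to dst (breadth-first search), or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print("Before failure:", route(adjacency(links), "AS1", "AS4"))
# -> ['AS1', 'AS5', 'AS4']

# Now "destroy" AS5 and see how (or whether) traffic re-routes.
surviving = {(a, b) for a, b in links if "AS5" not in (a, b)}
print("After AS5 fails:", route(adjacency(surviving), "AS1", "AS4"))
# -> ['AS1', 'AS2', 'AS3', 'AS4']
```

Multiply that up to the tens of thousands of real ASs, add BGP’s actual routing rules, and you can see why something like SpatialOS is needed to run it at realistic scale.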

They’re looking for ‘developer partners’ btw.


  1. Presumably from GCHQ or the Cabinet Office. 

The free riding that underpins some Internet fortunes

There’s an interesting post on Quartz about the fragility of a complex system — in this case the Web. The gist of it is that a small piece of JavaScript code written by a 28-year-old open source programmer named Azer Koçulu was hosted on npm, a well-known package manager for open source JavaScript code.

[Image: the kik package code]

As Quartz tells it,

One of the open-source JavaScript packages Koçulu had written was kik, which helped programmers set up templates for their projects. It wasn’t widely known, but it shared a name with Kik, the messaging app based in Ontario, Canada. On March 11, Koçulu received an email from Bob Stratton, a patent and trademark agent who does contract work for Kik.

Stratton said Kik was preparing to release its own package and asked Koçulu if he could rename his. “Can we get you to rename your kik package?” Stratton wrote.

There then followed some fairly acrimonious back-and-forth between Stratton and Koçulu, who was irritated by a private company wanting him to rename his package.1 In the end, Stratton went to npm, which sided with Kik and transferred the name. Koçulu responded by unpublishing all of his packages from npm, including a tiny, widely used utility called left-pad.

A few days later, JavaScript programmers around the world began receiving a strange error message when they tried to run their code. For some, the issue was severe enough to keep them from updating apps and services that were already running on the web.

It turned out that lots of applications actually needed Mr Koçulu’s tiny snippet of code if they were to function properly.
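To see why pulling one small module can do so much damage, think of the npm registry as a dependency graph: a package can rely on a tiny utility without its author ever knowing it is there, because the dependency is inherited several levels down. Here’s a toy sketch (an invented registry, in Python; nothing to do with npm’s real resolver):

```python
# Toy illustration only: an invented registry in which one tiny utility
# ("left-pad") sits, often several levels down, beneath everything else.
# Nothing to do with npm's real resolver or its real dependency data.
registry = {
    "left-pad":       [],                # the 11-line utility
    "line-numbers":   ["left-pad"],
    "babel-ish":      ["line-numbers"],
    "build-tool":     ["babel-ish"],
    "my-shiny-app":   ["build-tool"],
    "some-other-app": ["left-pad"],
}

def depends_on(pkg, target, registry, seen=None):
    """True if installing pkg pulls in target somewhere down the tree."""
    if seen is None:
        seen = set()
    for dep in registry.get(pkg, []):
        if dep == target:
            return True
        if dep not in seen:
            seen.add(dep)
            if depends_on(dep, target, registry, seen):
                return True
    return False

broken = [p for p in registry
          if p != "left-pad" and depends_on(p, "left-pad", registry)]
print(broken)
# -> every package except left-pad itself, even though only two of them
#    declare it as a direct dependency
```

Scale that up to the hundreds of thousands of packages on npm and you get the outage described above: one unpublished module, and builds start failing all over the web.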

This is just the latest illustration of one of the most conveniently overlooked aspects of the Web (and indeed of the whole Internet), namely that many commercially profitable enterprises are built on the back of open source code — stuff written by programmers who are willing to make their work freely available.

This is one of the dirty secrets of digital technology: some Internet fortunes are the result of free riding on the backs of other people’s (unpaid) work.


  1. To be fair to Mr Stratton, he offered to buy the name ‘kik’, but Mr Koçulu priced it at $30k, which I guess is a bit steep for the name of a little-known package. 

One funeral at a time

This morning’s Observer column:

Science advances, said the great German physicist Max Planck, “one funeral at a time”. Actually, this is a paraphrase of what he really said, which was: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” But you get the drift.

I always think of Planck’s aphorism whenever moral panic breaks out over the supposedly dizzying pace of technological change…

Read on

Implications of AlphaGo’s victory

Many and varied, I guess, and there will be lots of fevered speculation. But I liked this summary by Quartz’s Gideon Lichfield:

“It’s not a human move.”

What shocked the grandmasters watching Lee Sedol, one of the world’s top Go players, lose to a computer on Thursday was not that the computer won, but how it won. A pivotal move by AlphaGo, a project of Google AI subsidiary DeepMind, was so unexpected, so at odds with 2,500 years of Go history and wisdom, that some thought it must be a glitch.

Lee’s third game against AlphaGo is this Saturday. Even if man recovers to beat the machine, what we will remember is that moment of bewilderment. Go is much more complex than chess; to play it, as DeepMind’s CEO explained, AlphaGo needs the computer equivalent of intuition. And as Sedol discovered, that intuition is not of the human kind.

A classic fear about AI is that the machines we build to serve us will destroy us instead, not because they become sentient and malicious, but because they devise unforeseen and catastrophic ways to reach the goals we set them. Worse, if they do become sentient and malicious, then—like Ava, the android in the movie Ex Machina—we may not even realize until it’s too late, because the way they think will be unrecognizable to us. What we call common sense and logic will be revealed as small-minded prejudices, baked in by aeons of biological and social evolution, which trap us in a tiny corner of the possible intellectual universe.

But there is a rosier view: that the machines, sentient or not, could help us break our intellectual bonds and see solutions—whether to Go, or to bigger problems—that we couldn’t imagine otherwise. “So beautiful,” as one grandmaster said of AlphaGo’s game. “So beautiful.”

How AlphaGo works

Very good explainer from the Economist:

AlphaGo uses some of the same technologies as those older programs. But its big idea is to combine them with new approaches that try to get the computer to develop its own intuition about how to play—to discover for itself the rules that human players understand but cannot explain. It does that using a technique called deep learning, which lets computers work out, by repeatedly applying complicated statistics, how to extract general rules from masses of noisy data.

Deep learning requires two things: plenty of processing grunt and plenty of data to learn from. DeepMind trained its machine on a sample of 30m Go positions culled from online servers where amateurs and professionals gather to play. And by having AlphaGo play against another, slightly tweaked version of itself, more training data can be generated quickly.

Those data are fed into two deep-learning algorithms. One, called the policy network, is trained to imitate human play. After watching millions of games, it has learned to extract features, principles and rules of thumb. Its job during a game is to look at the board’s state and generate a handful of promising-looking moves for the second algorithm to consider.

This algorithm, called the value network, evaluates how strong a move is. The machine plays out the suggestions of the policy network, making moves and countermoves for the thousands of possible daughter games those suggestions could give rise to. Because Go is so complex, playing all conceivable games through to the end is impossible. Instead, the value network looks at the likely state of the board several moves ahead and compares those states with examples it has seen before. The idea is to find the board state that looks, statistically speaking, most like the sorts of board states that have led to wins in the past. Together, the policy and value networks embody the Go-playing wisdom that human players accumulate over years of practice.
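That division of labour is easier to see written down as (heavily simplified) code. The sketch below is mine, not DeepMind’s: the two ‘networks’ are random stand-ins and the search is a single step of look-ahead, but the shape of the thing (one component proposing a few candidate moves, another scoring the positions they lead to) is the one described above.

```python
# Sketch of the policy-network / value-network division of labour described
# above. The two "networks" here are stand-in functions (the real things are
# deep neural networks), and the "search" is one move deep rather than the
# tree search AlphaGo actually performs over thousands of daughter games.
import random

def legal_moves(board):
    """Empty points on a 19x19 board (board is a dict of point -> colour)."""
    return [(r, c) for r in range(19) for c in range(19) if (r, c) not in board]

def policy_network(board):
    """Suggest a handful of promising moves, each with a probability."""
    moves = legal_moves(board)
    scores = [random.random() for _ in moves]   # a real net scores learned patterns
    top = sorted(zip(moves, scores), key=lambda ms: -ms[1])[:5]
    total = sum(s for _, s in top)
    return [(m, s / total) for m, s in top]

def value_network(board):
    """Estimate how likely this position is to end in a win."""
    return random.random()                      # a real net compares it with positions it has seen

def play(board, move, colour):
    new_board = dict(board)
    new_board[move] = colour
    return new_board

def choose_move(board, colour):
    """Policy net proposes; value net evaluates the resulting positions."""
    best_move, best_score = None, -1.0
    for move, prior in policy_network(board):
        value = value_network(play(board, move, colour))
        score = 0.5 * prior + 0.5 * value       # crude blend of 'looks plausible' and 'leads somewhere good'
        if score > best_score:
            best_move, best_score = move, score
    return best_move

print(choose_move({}, "black"))
```

Replace the two stand-ins with trained deep networks and the one-step look-ahead with Monte Carlo tree search, and you have, in outline, the system the Economist is describing.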

As I write this, the score in the best-of-five games between AlphaGo and Lee Sedol, who is generally reckoned to be the world’s best player, stands at 2-nil in favour of AlphaGo.

LATER: AlphaGo won the third game. Game over.

Talking to machines

This morning’s Observer column:

Like many people nowadays, I do not talk on my iPhone as much as talk to it. That’s because it runs a program called Siri (Speech Interpretation and Recognition Interface) that works as an intelligent personal assistant and knowledge navigator. It’s useful, in a way. If I ask it for “weather in London today”, it’ll present an hour-by-hour weather forecast. Tell it to “phone home” and it’ll make a decent effort to find the relevant number. Ask it to “text James” and it will come back with: “What do you want to say to James?” Not exactly Socratic dialogue, but it has its uses.

Ask Siri: “What’s the meaning of life?”, however, and it loses its nerve…

Read on

The Wile E Coyote effect

Benedict Evans is at the huge annual mobile phone gabfest in Barcelona. On his way he wrote a very thoughtful blog post about the world before smartphones, and why Nokia and Blackberry didn’t see their demises coming.

Michael Mace wrote a great piece just at the point of collapse for Blackberry, looking into the problem of lagging indicators. The headline metrics tend to be the last ones to start slowing down, and that tends to happen only when it’s too late. So it can look as though you’re doing fine and that the people who said three years ago that there was a major strategic problem were wrong. You might call this the ‘Wile E Coyote effect’ – you’ve run off the cliff, but you’re not falling, and everything seems fine. But by the time you start falling, it’s too late.

That is, using metrics that point up and to the right to refute a suggestion there is a major strategic problem can be very satisfying, but unless you’re very careful, you could be winning the wrong argument. Switching metaphors, Nokia and Blackberry were skating to where the puck was going to be, and felt nice and fast and in control, while Apple and Google were melting the ice rink and switching the game to water-skiing.

I love that last metaphor.

In a way, it was another example of Clayton Christensen’s ‘innovator’s dilemma’. It’s the companies that are doing just fine that may be most endangered.

It’s a great blog post, worth reading in full. Also reminds us that mobile telephony was much more primitive in the US than it was in Europe (because of the GSM standard over here), and that Steve Jobs and co really hated their ‘feature’ phones, which they regarded as primitive devices. Evans sees something similar happening now with cars. It’s no accident, he thinks, that tech companies (Apple, Google) are working on cars. Techies hate cars in their current crude manifestations, whereas the folks who work in the automobile industry love them. Just as Nokia engineers once loved their hardware.

Whither Twitter?

My comment piece in today’s Observer.

If there’s one thing Wall Street and the tech industry fear, it is the idea that something potentially profitable might peak or reach some kind of equilibrium point. Endless exponential growth is what investors seek. Whereas you or I might think that a company with more than 300 million regular users that pulls in $710m in revenues is doing OK, Wall Street sees it as a potential zombie.

At the root of the dissonance is the fact that Twitter is a public company. At its flotation in November 2013 it was valued at $32bn, a figure largely based on hopes (or fantasies) that it would keep modifying its service to attract mainstream users, that its advertising business would continue to grow at a phenomenal rate and that it would eventually be bigger than Facebook.

It didn’t do all these things, for various reasons, the most important of which is that it wasn’t (and isn’t) a “social networking” service in the Facebook sense. At the heart of the distinction is the fact that, whereas it is easy to give an answer to the question “What is Facebook?”, the answers for Twitter depend on who you ask…

Read on