What happens when algorithms decide what should be passed on?

One of the things we’re interested in on our research project is how rumours, news, information (and mis-information) can spread with astonishing speed across the world as a result of the Internet. Up to now I had mostly been working on the assumption that the fundamental mechanism involved is always something like the ‘retweet’ in Twitter — i.e. people coming across something that they want to pass on to others for whatever reason. So human agency was the key factor in the viral retransmission of memes.

But I’ve just seen an interesting article in the Boston Globe which suggests that we need to think of the ‘retweeting’ effect in wider terms.

A surprise awaited Facebook users who recently clicked on a link to read a story about Michelle Obama’s encounter with a 10-year-old girl whose father was jobless.

Facebook responded to the click by offering what it called “related articles.” These included one that alleged a Secret Service officer had found the president and his wife having “S*X in Oval Office,” and another that said “Barack has lost all control of Michelle” and was considering divorce.

A Facebook spokeswoman did not try to defend the content, much of which was clearly false, but instead said there was a simple explanation for why such stories are pushed on readers. In a word: algorithms.

The stories, in other words, apparently are selected by Facebook based on mathematical calculations that rely on word association and the popularity of an article. No effort is made to vet or verify the content.
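To see why that matters, here is a toy sketch of how a recommender built purely on word association and popularity might behave. Nothing here is based on Facebook’s actual system — the data, headlines, and scoring formula are all invented for illustration:

```python
# Toy "related articles" ranker: score each candidate by headline word
# overlap with the clicked story, multiplied by raw popularity (clicks).
# Note there is no step that checks whether a story is true.

STOPWORDS = {"the", "a", "of", "and", "in", "with", "has", "is", "for"}

def words(headline):
    """Split a headline into a set of lowercase content words."""
    return {w for w in headline.lower().split() if w not in STOPWORDS}

def related(clicked, candidates, top_n=2):
    """Rank candidates by (word overlap x popularity); no vetting at all."""
    base = words(clicked)
    scored = []
    for headline, clicks in candidates:
        overlap = len(base & words(headline))
        if overlap:
            scored.append((overlap * clicks, headline))
    scored.sort(reverse=True)
    return [headline for _, headline in scored[:top_n]]

clicked = "Michelle Obama meets girl whose father is jobless"
candidates = [
    ("Michelle Obama plants White House garden", 120),   # accurate, unpopular
    ("Barack has lost all control of Michelle", 5000),   # false, very popular
    ("Local weather forecast for Boston", 9000),         # popular, unrelated
]
print(related(clicked, candidates))
```

On this invented data the false-but-popular gossip item comes out on top: a single shared word (“Michelle”) times 5,000 clicks beats the accurate story’s two shared words times 120 clicks, while the unrelated weather item is filtered out. That is the structural problem in miniature — popularity amplifies whatever vaguely matches, true or not.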

This prompted a comment from my former Observer colleague, Emily Bell, who now runs the Tow Center at Columbia. “They have really screwed up,” she told the Globe. “If you are spreading false information, you have a serious problem on your hands. They shouldn’t be recommending stories until they have got it figured out.”

She’s right, of course. A world in which algorithms decided what was ‘newsworthy’ would be a very strange place. But we might find ourselves living in such a world, because Facebook won’t take responsibility for its algorithms, any more than Google will take responsibility for YouTube videos. These companies want the status of common carriers, because otherwise they would have to assume legal responsibility for the messages they circulate. And having to check everything that goes through their servers is simply not feasible.