The best metaphor for the net is to think of it as a mirror held up to human nature. All human life really is there. There’s no ideology, fetish, behaviour, obsession, perversion, eccentricity or fad that doesn’t find expression somewhere online. And while much of what we see reflected back to us is uplifting, banal, intriguing, harmless or fascinating, some of it is truly awful, for the simple reason that human nature is not only infinitely diverse but also sometimes unspeakably cruel.
In the early days of the internet and, later, the web, this didn’t matter so much. But once cyberspace was captured by a few giant platforms, particularly Google, YouTube, Twitter and Facebook, then it became problematic. The business models of these platforms depended on encouraging people to upload content to them in digital torrents. “Broadcast yourself”, remember, was once the motto of YouTube.
And people did – as they slit the throats of hostages in the deserts of Arabia, raped three-year-old girls, shot an old man in the street, firebombed the villages of ethnic minorities or hanged themselves on camera…
All of which posed a problem for the social media brands, which liked to present themselves as facilitators of creativity, connectivity and good clean fun, an image threatened by the tide of crud that was coming at them. So they started employing people to filter and manage it. They were called “moderators” and for a long time their existence was kept firmly under wraps.
That cloak of invisibility began to fray as journalists and scholars started to probe this dark underbelly of social media…
Sarah Roberts has just published Behind the Screen: Content Moderation in the Shadows of Social Media, a major study of the impact of content ‘moderation’ on those who clean up social media so that the rest of us are not traumatised or scandalised by what appears in our feeds. Isaac Chotiner has an interesting interview with her in the New Yorker which includes this brief exchange:
You also go to the Philippines in this book and you talk to people from other countries, in Mexico, for example. What are the consequences of outsourcing these jobs in terms of the quality of the work being done? And I don’t ask that to imply that people abroad can’t do a job as well.
I think there is a precedent for outsourcing this type of service work, and we see that in the call-center industry. The same kinds of problems that are present in that work are present in this particular context. So that would be things like the dissonance and distance culturally and linguistically, contextually, and politically, for a group of people that are being asked to adjudicate and make decisions about material that emanates from one place in the world and is destined for another, that may have absolutely nothing to do with their day-to-day life.
I think a second thing is that the marketplace has chased a globalization solution for the same reasons it has in other industries, which are the issues of: Where can we get the cheapest labor? What countries are lax in terms of labor protections? Where is organizing low? Where is there a huge pool of people for whom this job might be appealing because it’s better than the other jobs on offer? It’s not a simple case of everyone in the Philippines who does this work is exploited, and I was really trying hard not to make that claim in the book. But, at the same time, the United States sends the work to the Philippines for a reason. It sends the work there because Filipino people have a long-standing relationship, so to speak, with the United States, that means that they have a better facility to understand the American context. That’s actually been in the favor of most people in the Philippines.
It’s worrisome to see those kinds of colonial traditions and practices picked up again, especially in this digital marketplace, this marketplace of the mind that was supposed to be deliverance from so many of the difficult working conditions of the twentieth century. So I think that’s the big thing about the way that this plays out on the global stage. The companies have a problem that they don’t have enough people to do the work. And so they are pulling out all the stops in a way to find people to do the work, but it’s still not nearly enough.
What could be done to make the lives of these workers better, given that this is a job that needs to be done? And it needs to be done by smart people doing it well, who need to be very well-trained.
This is a question that I’ve often posed to the workers themselves because I certainly am not possessed of the answers on my own. They want better pay. And I think we can read that in a lot of ways: they want better pay, they want to be respected. The nature of the way the work has been designed has been for the work to be secret. In many cases, their N.D.A. precludes them from even talking about the work. And the industry itself formulated the job as a source of shame in that sense. They were not eager to tout the efforts of these people, and so instead they hid them in the shadows. And, if nothing else, that was a business decision and a value judgment that could have gone another way. I think there’s still a chance that we could understand the work of these people in a different way and value it differently, collectively. And we could ask that the companies do that as well.
Good interview. Splendid book.
This morning’s Observer column:
The most worrying thought that comes from immersion in accounts of the tech companies’ struggle against the deluge of uploads is not so much that murderous fanatics seek publicity and notoriety from livestreaming their atrocities on the internet, but that astonishing numbers of other people are not just receptive to their messages, but seem determined to boost and amplify their impact by “sharing” them.
And not just sharing them in the sense of pressing the “share” button. What YouTube engineers found was that the deluge contained lots of copies and clips of the Christchurch video that had been deliberately tweaked so that they would not be detected by the company’s AI systems. A simple way of doing this, it turned out, was to upload a video recording of a computer screen taken from an angle. The content comes over loud and clear, but the automated filter doesn’t recognise it.
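The mechanism behind that evasion is easy to see with a toy example. The sketch below assumes a simple average-hash style fingerprint; it is purely illustrative and says nothing about YouTube’s actual matching pipeline, which is far more sophisticated. The point is only this: an exact copy hashes identically to the original and is blocked, while a re-recording of a screen — which shifts pixel positions and adds glare — scrambles the bit pattern, so a lookup against known fingerprints finds nothing.

```python
# Toy illustration of why fingerprint-style filters miss re-recorded
# copies. An 8x8 "frame" stands in for a video frame; the hash is one
# bit per pixel (above/below the mean brightness).

def average_hash(frame):
    """Hash an 8x8 grayscale frame: 1 bit per pixel, set if pixel > mean."""
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes (0 = identical)."""
    return sum(a != b for a, b in zip(h1, h2))

def rerecord_at_angle(frame, shift=2, brightness=30):
    """Crude stand-in for filming a screen off-axis: pixel positions
    shift and everything picks up glare, so the bit pattern scrambles."""
    size = len(frame)
    return [[min(255, frame[(r + shift) % size][c] + brightness)
             for c in range(size)] for r in range(size)]

# A fake frame with simple structure: a bright diagonal on a dark field.
original = [[200 if r == c else 40 for c in range(8)] for r in range(8)]
exact_copy = [row[:] for row in original]
angled = rerecord_at_angle(original)

h_orig = average_hash(original)
print(hamming(h_orig, average_hash(exact_copy)))  # → 0: the filter catches it
print(hamming(h_orig, average_hash(angled)))      # → 16: far past any match threshold
```

The content of the angled copy is still perfectly legible to a human viewer — the bright diagonal is merely displaced — but to a hash-matching filter it is a different video altogether.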
That there are perhaps tens – perhaps hundreds – of thousands of people across the world who will do this kind of thing is a really scary discovery…
Thoughtful and sombre commentary by Kevin Roose:
Now, online extremism is just regular extremism on steroids. There is no offline equivalent of the experience of being algorithmically nudged toward a more strident version of your existing beliefs, or having an invisible hand steer you from gaming videos to neo-Nazism. The internet is now the place where the seeds of extremism are planted and watered, where platform incentives guide creators toward the ideological poles, and where people with hateful and violent beliefs can find and feed off one another.
So the pattern continues. People become fluent in the culture of online extremism, they make and consume edgy memes, they cluster and harden. And once in a while, one of them erupts.
In the coming days, we should attempt to find meaning in the lives of the victims of the Christchurch attack, and not glorify the attention-grabbing tactics of the gunman. We should also address the specific horror of anti-Muslim violence.
At the same time, we need to understand and address the poisonous pipeline of extremism that has emerged over the past several years, whose ultimate effects are impossible to quantify but clearly far too big to ignore. It’s not going away, and it’s not particularly getting better. We will feel it for years to come.
This morning’s Observer column:
My eye was caught by a headline in Wired magazine: “When algorithms think you want to die”. Below it was an article by two academic researchers, Ysabel Gerrard and Tarleton Gillespie, about the “recommendation engines” that are a central feature of social media and e-commerce sites.
Everyone who uses the web is familiar with these engines. A recommendation algorithm is what prompts Amazon to tell me that since I’ve bought Custodians of the Internet, Gillespie’s excellent book on the moderation of online content, I might also be interested in Safiya Umoja Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism and a host of other books about algorithmic power and bias. In that particular case, the algorithm’s guess is accurate and helpful: it informs me about stuff that I should have known about but hadn’t.
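The “bought X, also bought Y” behaviour can be sketched with a minimal item-to-item collaborative filter. Everything below is invented for illustration — the purchase data, the tiny catalogue, and the bare cosine measure — and real engines blend far richer signals, but the core idea is just this: two items are similar if they tend to be bought by the same people.

```python
# Minimal sketch of "customers who bought X also bought Y":
# item-item collaborative filtering over invented purchase histories.
from math import sqrt

purchases = {
    "ann":   {"Custodians of the Internet", "Algorithms of Oppression"},
    "bob":   {"Custodians of the Internet", "Algorithms of Oppression",
              "Weapons of Math Destruction"},
    "carol": {"Weapons of Math Destruction", "A Cookbook"},
    "dave":  {"A Cookbook"},
}

def cosine(a, b):
    """Similarity between two items, based on overlap of their buyers."""
    buyers_a = {u for u, items in purchases.items() if a in items}
    buyers_b = {u for u, items in purchases.items() if b in items}
    if not buyers_a or not buyers_b:
        return 0.0
    return len(buyers_a & buyers_b) / sqrt(len(buyers_a) * len(buyers_b))

def recommend(item, k=2):
    """Top-k other items ranked by similarity to the given item."""
    others = {i for items in purchases.values() for i in items} - {item}
    return sorted(others, key=lambda o: cosine(item, o), reverse=True)[:k]

print(recommend("Custodians of the Internet"))
# → ['Algorithms of Oppression', 'Weapons of Math Destruction']
```

With this data the cookbook never surfaces, because its buyers share no history with buyers of the Gillespie book — which is exactly why the recommendation feels accurate when it works, and exactly how it goes wrong when the shared history is something darker.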
Recommendation engines are central to the “personalisation” of online content and were once seen as largely benign…
This is a big day. The DCMS Select Committee has published its scarifying report into Facebook’s sociopathic exploitation of its users’ data and its cavalier attitude towards both legislators and the law. As I write, it is reportedly negotiating with the Federal Trade Commission (FTC) — the US regulator — on the multi-billion-dollar fine the agency is likely to levy on the company for breaking its 2011 Consent Decree.
Couldn’t happen to nastier people.
In the meantime, for those who don’t have the time to read the 110-page DCMS report, Techcrunch has a rather impressive and helpful summary — provided you don’t mind the rather oppressive GDPR spiel that accompanies it.
SAN FRANCISCO (CN) – A federal judge on Friday rejected Facebook’s argument that it cannot be sued for letting third parties, such as Cambridge Analytica, access users’ private data because no “real world” harm has resulted from the conduct.
“The injury is the disclosure of private information,” U.S. District Judge Vince Chhabria declared during a marathon four-and-a-half-hour motion-to-dismiss hearing Friday.
Facebook urged Chhabria to toss out a 267-page consolidated complaint filed in a multidistrict case seeking billions of dollars in damages for Facebook’s alleged violations of 50 state and federal laws.
There’s a class-action suit coming, triggered by the Cambridge Analytica scandal.
From Farhad Manjoo:
I’ve significantly cut back how much time I spend on Twitter, and — other than to self-servingly promote my articles and engage with my readers — I almost never tweet about the news anymore.
I began pulling back last year — not because I’m morally superior to other journalists but because I worried I was weaker.
I’ve been a Twitter addict since Twitter was founded. For years, I tweeted every ingenious and idiotic thought that came into my head, whenever, wherever; I tweeted from my wedding and during my kids’ births, and there was little more pleasing in life than hanging out on Twitter poring over hot news as it broke.
But Twitter is not that carefree clubhouse for journalism anymore. Instead it is the epicenter of a nonstop information war, an almost comically undermanaged gladiatorial arena where activists and disinformation artists and politicians and marketers gather to target and influence the wider media world.
And journalists should stop paying so much attention to what goes on in this toxic information sewer.
This morning’s Observer column:
At last, we’re getting somewhere. Two years after Brexit and the election of Donald Trump, we’re finally beginning to understand the nature and extent of Russian interference in the democratic processes of two western democracies. The headlines are: the interference was much greater than what was belatedly discovered and/or admitted by the social media companies; it was more imaginative, ingenious and effective than we had previously supposed; and it’s still going on.
We know this because the US Senate select committee on intelligence commissioned major investigations by two independent teams. One involved New Knowledge, a US cybersecurity firm, plus researchers from Columbia University in New York and a mysterious outfit called Canfield Research. The other was a team comprising the Oxford Internet Institute’s “Computational Propaganda” project and Graphika, a company specialising in analysing social media.
Last week the committee released both reports. They make for sobering reading…
My OpEd piece from yesterday’s Observer:
Conspiracy theories have generally had a bad press. They conjure up images of eccentrics in tinfoil hats who believe that aliens have landed and the government is hushing up the news. And maybe it’s statistically true that most conspiracy theories belong on the harmless fringe of the credibility spectrum.
On the other hand, the historical record contains some conspiracy theories that have had profound effects. Take the “stab in the back” myth, widely believed in Germany after 1918, which held that the German army did not lose the First World War on the battlefield but was betrayed by civilians on the home front. When the Nazis came to power in 1933, the theory was incorporated into their revisionist narrative of the 1920s: the Weimar Republic was the creation of the “November criminals” who stabbed the nation in the back to seize power while betraying it. So a conspiracy theory became the inspiration for the political changes that led to a second global conflict.
More recent examples relate to the alleged dangers of the MMR jab and other vaccinations and the various conspiracy theories fuelling denial of climate change.
For the last five years, my academic colleagues – historian Richard Evans and politics professor David Runciman – and I have been leading a team of researchers studying the history, nature and significance of conspiracy theories with a particular emphasis on their implications for democracy…