This morning’s Observer column:
The most worrying thought that comes from immersion in accounts of the tech companies’ struggle against the deluge of uploads is not so much that murderous fanatics seek publicity and notoriety from livestreaming their atrocities on the internet as that astonishing numbers of other people are not just receptive to their messages, but seem determined to boost and amplify their impact by “sharing” them.
And not just sharing them in the sense of pressing the “share” button. What YouTube engineers found was that the deluge contained lots of copies and clips of the Christchurch video that had been deliberately tweaked so that they would not be detected by the company’s AI systems. A simple way of doing this, it turned out, was to upload a video recording of a computer screen taken from an angle. The content comes over loud and clear, but the automated filter doesn’t recognise it.
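The evasion trick is easy to illustrate with a toy example. The sketch below is purely hypothetical — YouTube’s actual matching systems are proprietary and far more sophisticated — but it uses a crude “average hash” fingerprint to show why even a simple geometric distortion, like re-filming a screen at an angle, pushes a copy far away from the original’s fingerprint:

```python
# Toy illustration (NOT YouTube's real system): fingerprint an 8x8
# grayscale "frame" with an average hash, then compare it with a
# geometrically shifted copy of the same content.

def average_hash(pixels):
    """64-bit perceptual hash: each bit is 1 if that pixel is
    brighter than the mean brightness of the whole grid."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of bit positions where two hashes disagree."""
    return sum(x != y for x, y in zip(a, b))

# Original "frame": a bright square in the top-left corner.
original = [[255 if (r < 4 and c < 4) else 0 for c in range(8)]
            for r in range(8)]

# The same content re-recorded at an angle -- crudely modelled here
# as the bright region shifted diagonally toward the centre.
skewed = [[255 if (2 <= r < 6 and 2 <= c < 6) else 0 for c in range(8)]
          for r in range(8)]

h1 = average_hash(original)
h2 = average_hash(skewed)
print(hamming(h1, h2))  # 24 of 64 bits differ: a naive filter sees a "different" video
```

A human glancing at the two frames sees the same bright square; the fingerprint comparison sees two videos that differ in more than a third of their bits. Real content-matching systems are designed to tolerate some distortion, but the arms race is exactly this: uploaders keep adding transformations until the distance crosses whatever threshold the filter uses.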
That there are tens – perhaps hundreds – of thousands of people across the world who will do this kind of thing is a really scary discovery…
Thoughtful and sombre commentary by Kevin Roose:
Now, online extremism is just regular extremism on steroids. There is no offline equivalent of the experience of being algorithmically nudged toward a more strident version of your existing beliefs, or having an invisible hand steer you from gaming videos to neo-Nazism. The internet is now the place where the seeds of extremism are planted and watered, where platform incentives guide creators toward the ideological poles, and where people with hateful and violent beliefs can find and feed off one another.
So the pattern continues. People become fluent in the culture of online extremism, they make and consume edgy memes, they cluster and harden. And once in a while, one of them erupts.
In the coming days, we should attempt to find meaning in the lives of the victims of the Christchurch attack, and not glorify the attention-grabbing tactics of the gunman. We should also address the specific horror of anti-Muslim violence.
At the same time, we need to understand and address the poisonous pipeline of extremism that has emerged over the past several years, whose ultimate effects are impossible to quantify but clearly far too big to ignore. It’s not going away, and it isn’t getting better. We will feel it for years to come.
This morning’s Observer column:
My eye was caught by a headline in Wired magazine: “When algorithms think you want to die”. Below it was an article by two academic researchers, Ysabel Gerrard and Tarleton Gillespie, about the “recommendation engines” that are a central feature of social media and e-commerce sites.
Everyone who uses the web is familiar with these engines. A recommendation algorithm is what prompts Amazon to tell me that since I’ve bought Custodians of the Internet, Gillespie’s excellent book on the moderation of online content, I might also be interested in Safiya Umoja Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism and a host of other books about algorithmic power and bias. In that particular case, the algorithm’s guess is accurate and helpful: it informs me about stuff that I should have known about but hadn’t.
Recommendation engines are central to the “personalisation” of online content and were once seen as largely benign…
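At their simplest, such engines just count co-purchases: “people who bought X also bought Y”. The sketch below is a deliberately bare-bones illustration (the purchase data is invented for the example, and the third title is not from the column; real engines like Amazon’s are vastly more elaborate):

```python
# Toy "people who bought X also bought Y" recommender -- illustrative
# only; the purchase baskets below are invented for this example.
from collections import Counter

purchases = [
    {"Custodians of the Internet", "Algorithms of Oppression"},
    {"Custodians of the Internet", "Algorithms of Oppression",
     "Weapons of Math Destruction"},
    {"Custodians of the Internet", "Weapons of Math Destruction"},
    {"Algorithms of Oppression"},
]

def recommend(item, baskets):
    """Rank other items by how often they were bought alongside `item`."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})
    return [title for title, _ in counts.most_common()]

print(recommend("Custodians of the Internet", purchases))
```

Even this trivial version captures the core dynamic the column goes on to discuss: the engine has no notion of whether a recommendation is helpful or harmful, only of what co-occurs with what.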
This is a big day. The DCMS Select Committee has published its scarifying report into Facebook’s sociopathic exploitation of its users’ data and its cavalier attitude towards both legislators and the law. As I write, it is reportedly negotiating with the Federal Trade Commission (FTC) — the US regulator — on the multi-billion-dollar fine the agency is likely to levy on the company for breaking its 2011 Consent Decree.
Couldn’t happen to nastier people.
In the meantime, for those who don’t have the time to read the 110-page DCMS report, TechCrunch has a rather impressive and helpful summary — provided you don’t mind the rather oppressive GDPR spiel that accompanies it.
SAN FRANCISCO (CN) – A federal judge on Friday rejected Facebook’s argument that it cannot be sued for letting third parties, such as Cambridge Analytica, access users’ private data because no “real world” harm has resulted from the conduct.
“The injury is the disclosure of private information,” U.S. District Judge Vince Chhabria declared during a marathon four-and-a-half-hour motion-to-dismiss hearing Friday.
Facebook urged Chhabria to toss out a 267-page consolidated complaint filed in a multidistrict case seeking billions of dollars in damages for Facebook’s alleged violations of 50 state and federal laws.
There’s a class-action suit coming, triggered by the Cambridge Analytica scandal.
From Farhad Manjoo:
I’ve significantly cut back how much time I spend on Twitter, and — other than to self-servingly promote my articles and engage with my readers — I almost never tweet about the news anymore.
I began pulling back last year — not because I’m morally superior to other journalists but because I worried I was weaker.
I’ve been a Twitter addict since Twitter was founded. For years, I tweeted every ingenious and idiotic thought that came into my head, whenever, wherever; I tweeted from my wedding and during my kids’ births, and there was little more pleasing in life than hanging out on Twitter poring over hot news as it broke.
But Twitter is not that carefree clubhouse for journalism anymore. Instead it is the epicenter of a nonstop information war, an almost comically undermanaged gladiatorial arena where activists and disinformation artists and politicians and marketers gather to target and influence the wider media world.
And journalists should stop paying so much attention to what goes on in this toxic information sewer.
This morning’s Observer column:
At last, we’re getting somewhere. Two years after Brexit and the election of Donald Trump, we’re finally beginning to understand the nature and extent of Russian interference in the democratic processes of two western democracies. The headlines are: the interference was much greater than what was belatedly discovered and/or admitted by the social media companies; it was more imaginative, ingenious and effective than we had previously supposed; and it’s still going on.
We know this because the US Senate select committee on intelligence commissioned major investigations by two independent teams. One involved New Knowledge, a US cybersecurity firm, plus researchers from Columbia University in New York and a mysterious outfit called Canfield Research. The other was a team comprising the Oxford Internet Institute’s “Computational Propaganda” project and Graphika, a company specialising in analysing social media.
Last week the committee released both reports. They make for sobering reading…
My OpEd piece from yesterday’s Observer:
Conspiracy theories have generally had a bad press. They conjure up images of eccentrics in tinfoil hats who believe that aliens have landed and the government is hushing up the news. And maybe it’s statistically true that most conspiracy theories belong on the harmless fringe of the credibility spectrum.
On the other hand, the historical record contains some conspiracy theories that have had profound effects. Take the “stab in the back” myth, widely believed in Germany after 1918, which held that the German army did not lose the First World War on the battlefield but was betrayed by civilians on the home front. When the Nazis came to power in 1933 the theory was incorporated in their revisionist narrative of the 1920s: the Weimar Republic was the creation of the “November criminals” who stabbed the nation in the back to seize power while betraying it. So a conspiracy theory became the inspiration for the political changes that led to a second global conflict.
More recent examples relate to the alleged dangers of the MMR jab and other vaccinations and the various conspiracy theories fuelling denial of climate change.
For the last five years, my academic colleagues – historian Richard Evans and politics professor David Runciman – and I have been leading a team of researchers studying the history, nature and significance of conspiracy theories with a particular emphasis on their implications for democracy…
Terrific FT column by Rana Foroohar. Sample:
If the Facebook revelations prove anything, they show that its top leadership is not liberal, but selfishly libertarian. Political ideals will not get in the way of the company’s efforts to protect its share price. This was made clear by Facebook’s hiring of a rightwing consulting group, Definers Public Affairs, to try to spread misinformation about industry rivals to reporters and to demonise George Soros, who had a pipe bomb delivered to his home. At Davos in January, the billionaire investor made a speech questioning the power of platform technology companies.
Think about that for a minute. This is a company that was so desperate to protect its top leadership and its business model that it hired a shadowy PR firm that used anti-Semitism as a political weapon. Patrick Gaspard, president of the Open Society Foundations, founded by Mr Soros, wrote in a letter last week to Ms Sandberg: “The notion that your company, at your direction”, tried to “discredit people exercising their First Amendment rights to protest Facebook’s role in disseminating vile propaganda is frankly astonishing to me”.
I couldn’t agree more. Ms Sandberg says she didn’t know about the tactics being used by Definers Public Affairs. Mr Zuckerberg says that while he understands “DC type firms” might use such tactics, he doesn’t want them associated with Facebook and has cancelled its contract with Definers.
The irony of that statement could be cut with a knife. Silicon Valley companies are among the nation’s biggest corporate lobbyists. They’ve funded many academics doing research on topics of interest to them, and have made large donations to many powerful politicians…
There is a strange consistency in the cant coming from Zuckerberg and Sandberg as they try to respond to the NYT‘s exhumation of their attempts to avoid responsibility for Facebook’s malignancy. It’s what PR flacks call “plausible deniability”. Time and again, the despicable or ethically dubious actions taken by Facebook apparently come as a complete surprise to the two at the very top of the company — Zuckerberg and Sandberg. I’m afraid that particular cover story is beginning to look threadbare.
Well, well. Maybe we’re — finally — making progress. This from Recode:
Mark Zuckerberg, Sheryl Sandberg and other top Facebook leaders should get ready for increased scrutiny after a damning new investigation shed light on how they stalled, stumbled and plotted through a series of crises over the last two years, including Russian meddling, data sharing and hate speech. The question now: Who does Facebook fire in the aftermath of these revelations? Meanwhile, the difficult past year has taken a toll on employee morale: An internal survey shows that only 52 percent of Facebook staff are optimistic about its future, down from 84 percent of employees last year. It might already be time for a new survey.