The bad news about false news

The most comprehensive study to date of misinformation on Twitter is out. The abstract reads:

We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise 126,000 stories tweeted by 3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.

‘Complexity’: ontology or just an epistemological tactic?

I’m reading Philip Mirowski’s Never Let A Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. In Chapter 1 he reflects on the curious fact that nothing much changed as a result. “The strangest thing”, he writes,

was that instead of leading to a collapse of the right-wing neoliberalism that had enabled the catastrophe to happen, the crisis actually seemed to strengthen the Right. It took a rare degree of self-confidence or fortitude not to gasp dumbfounded at the roaring resurgence of the right so soon after the most catastrophic global economic collapse since the Great Depression of the 1930s. “Incongruity” seems too polite a term to describe the unfolding of events; “contradiction” seems too outmoded. Austerity became the watchword in almost every country; governments everywhere became the scapegoats for dissatisfaction of every stripe, including that provoked by austerity. In the name of probity, the working class was attacked from all sides, even by nominal “socialist” parties… The pervasive dominance of neoliberal doctrines and right-wing parties worldwide from Europe to North America to Asia has flummoxed left parties that, just a few short years ago, had been confident they were finally making headway after decades of neoliberal encroachment. Brazenly, in many cases parties on the left were unceremoniously voted out because they had struggled to contain the worst fallout from the crisis. By contrast, the financial institutions that had precipitated the crisis and had been rescued by governmental action were doing just fine — nay, prospering at pre-crisis rates — and in a bald display of uninflected ingratitude, were intently bankrolling the resurgent right. Indeed, the astounding recovery of corporate profits practically guaranteed the luxuriant post-crisis exfoliation of Think Tank Pontification. Nationalist proto-fascist movements sprouted in the most unlikely places, and propounded arguments bereft of a scintilla of sense. “Nightmare” did not register as hyperbolic; it was the banjax of the vanities.

That’s just about the most succinct expression of the bewilderment that most of us felt — or certainly that I felt as I watched the UK post-crisis, Tory-led coalition government blaming the populace (or its Labour predecessor) for the debacle, and imposing ‘austerity’ as the punishment for popular irresponsibility rather than as the price of forcing the public to shoulder the costs of bankers’ greed and recklessness. And it’s why I always thought that, eventually, the penny would drop with electorates, and why the current wave of populist anger towards ‘elites’ comes as no surprise. In fact the only surprising thing about it is that it took so long to materialise.

Mirowski also picks up the strange inability of the left to pin the blame where it belonged: the financial services industry and the feeble regulatory regimes under which the madness and greed of the sector burgeoned. Here, for example, is Ezra Klein reviewing Inside Job, a documentary that made an admirable stab at naming names and fingering culprits. What made the financial crisis so scary, Klein wrote, was that

The complexity of the system far exceeded the capacity of the participants, experts and watchdogs. Even after the crisis happened, it was devilishly hard to understand what was going on. Some people managed to connect the right dots, in the right ways and at the right times, but not so many, and not through such reproducible methods, that it’s clear how we can make their success the norm. But it is clear that our key systems are going to continue growing more complex, and we’re not getting any smarter.

The fact that (as Mirowski points out) some commentators normally seen as left-of-centre felt obliged to attack the documentary is itself significant. It’s a symptom of how far the ice of neoliberalism has penetrated the radical soul. Less abstractly, it confirms my own working definition of ‘ideology’ as the force that determines how you think even when you don’t know you’re thinking. Klein’s hapless defeatism also echoes the feeble answer eventually provided by the British Academy to the question posed by the Queen to the luminaries of the LSE at the height of the crisis: why had none of those besuited, learned gents in the receiving line seen the catastrophe coming?

But against those who warned, most were convinced that banks knew what they were doing. They believed that the financial wizards had found new and clever ways of managing risks. Indeed, some claimed to have so dispersed them through an array of novel financial instruments that they had virtually removed them. It is difficult to recall a greater example of wishful thinking combined with hubris. There was a firm belief, too, that financial markets had changed. And politicians of all types were charmed by the market. These views were abetted by financial and economic models that were good at predicting the short-term and small risks, but few were equipped to say what would happen when things went wrong as they have. People trusted the banks whose boards and senior executives were packed with globally recruited talent and their non-executive directors included those with proven track records in public life. Nobody wanted to believe that their judgement could be faulty or that they were unable competently to scrutinise the risks in the organisations that they managed. A generation of bankers and financiers deceived themselves and those who thought that they were the pace-making engineers of advanced economies.

All this exposed the difficulties of slowing the progression of such developments in the presence of a general ‘feel-good’ factor. Households benefited from low unemployment, cheap consumer goods and ready credit. Businesses benefited from lower borrowing costs. Bankers were earning bumper bonuses and expanding their business around the world. The government benefited from high tax revenues enabling them to increase public spending on schools and hospitals. This was bound to create a psychology of denial. It was a cycle fuelled, in significant measure, not by virtue but by delusion.

Among the authorities charged with managing these risks, there were difficulties too. Some say that their job should have been ‘to take away the punch bowl when the party was in full swing’. But that assumes that they had the instruments needed to do this. General pressure was for more lax regulation – a light touch. The City of London (and the Financial Services Authority) was praised as a paragon of global financial regulation for this reason.

Translation: It was all very complex, Ma’am. QED.

This is the resort to ‘complexity’ as an epistemological or ideological device. It’s a way of saying that some things are beyond analysis or explanation. Sometimes this is true: complex systems exist and they are inherently unpredictable and sometimes intrinsically incomprehensible. But a banking system run as a racket does not fall into that category.

Zuckerberg’s Frankenstein problem

Nice Buzzfeed piece by xxx about whether Zuck is really in charge, despite his controlling shares. Here’s the nub:

Facebook’s response to accusations about its role in the 2016 election since Nov. 9 bears this out, most notably Zuckerberg’s public comments immediately following the election that the claim that fake news influenced the US presidential election was “a pretty crazy idea.” In April, when Facebook released a white paper detailing the results of its investigation into fake news on its platform during the election, the company insisted it did not know the identity of the malicious actors using its network. And after recent revelations that Facebook had discovered Russian ads on its platform, the company maintained that as of April 2017, it was unaware of any Russian involvement. “When asked we said there was no evidence of Russian ads. That was true at the time,” Facebook told Mashable earlier this month.

Some critics of Facebook speak about the company’s leadership almost like an authoritarian government — a sovereign entity with virtually unchecked power and domineering ambition. So much so, in fact, that Zuckerberg is now frequently mentioned as a possible presidential candidate despite his public denials. But perhaps a better comparison might be the United Nations — a group of individuals endowed with the almost impossible responsibility of policing a network of interconnected autonomous powers. Just take Zuckerberg’s statement this week, in which he sounded strikingly like an embattled secretary-general: “It is a new challenge for internet communities to deal with nation-states attempting to subvert elections. But if that’s what we must do, we are committed to rising to the occasion,” he said.

Nice metaphor, this.

The Technical is Political

This morning’s Observer column:

In his wonderful book The Swerve: How the Renaissance Began, the literary historian Stephen Greenblatt traces the origins of the Renaissance back to the rediscovery of a 2,000-year-old poem by Lucretius, De Rerum Natura (On the Nature of Things). The book is a riveting explanation of how a huge cultural shift can ultimately spring from faint stirrings in the undergrowth.

Professor Greenblatt is probably not interested in the giant corporations that now dominate our world, but I am, and in the spirit of The Swerve I’ve been looking for signs that big changes might be on the way. You don’t have to dig very deep to find them…

Read on

Facebook meets irresistible force

Terrific blog post by Josh Marshall:

I believe what we’re seeing here is a convergence of two separate but highly charged news streams and political moments. On the one hand, you have the Russia probe, with all that is tied to that investigation. On another, you have the rising public backlash against Big Tech, the various threats it arguably poses and its outsized power in the American economy and American public life. A couple weeks ago, I wrote that after working with Google in various capacities for more than a decade I’d observed that Google is, institutionally, so accustomed to its customers actually being its products that when it gets into lines of business where its customers are really customers it really doesn’t know how to deal with them. There’s something comparable with Facebook.

Facebook is so accustomed to treating its ‘internal policies’ as though they were something like laws that they appear to have a sort of blind spot that prevents them from seeing how ridiculous their resistance sounds. To use the cliche, it feels like a real shark jumping moment. As someone recently observed, Facebook’s ‘internal policies’ are crafted to create the appearance of civic concerns for privacy, free speech, and other similar concerns. But they’re actually just a business model. Facebook’s ‘internal policies’ amount to a kind of Stepford Wives version of civic liberalism and speech and privacy rights, the outward form of the things preserved while the innards have been gutted and replaced by something entirely different, an aggressive and totalizing business model which in many ways turns these norms and values on their heads. More to the point, most people have the experience of Facebook’s ‘internal policies’ being meaningless in terms of protecting their speech or privacy or whatever as soon as they bump up against Facebook’s business model.

Spot on. Especially the Stepford Wives metaphor.

Why fake news will be hard to fix — it’s the users, stoopid

Here’s a telling excerpt from a fine piece about Facebook by Farhad Manjoo:

The people who work on News Feed aren’t making decisions that turn on fuzzy human ideas like ethics, judgment, intuition or seniority. They are concerned only with quantifiable outcomes about people’s actions on the site. That data, at Facebook, is the only real truth. And it is a particular kind of truth: The News Feed team’s ultimate mission is to figure out what users want — what they find “meaningful,” to use Cox and Zuckerberg’s preferred term — and to give them more of that.

This ideal runs so deep that the people who make News Feed often have to put aside their own notions of what’s best. “One of the things we’ve all learned over the years is that our intuition can be wrong a fair amount of the time,” John Hegeman, the vice president of product management and a News Feed team member, told me. “There are things you don’t expect will happen. And we learn a lot from that process: Why didn’t that happen, and what might that mean?” But it is precisely this ideal that conflicts with attempts to wrangle the feed in the way press critics have called for. The whole purpose of editorial guidelines and ethics is often to suppress individual instincts in favor of some larger social goal. Facebook finds it very hard to suppress anything that its users’ actions say they want. In some cases, it has been easier for the company to seek out evidence that, in fact, users don’t want these things at all.

Facebook’s two-year-long battle against “clickbait” is a telling example. Early this decade, the internet’s headline writers discovered the power of stories that trick you into clicking on them, like those that teasingly withhold information from their headlines: “Dustin Hoffman Breaks Down Crying Explaining Something That Every Woman Sadly Already Experienced.” By the fall of 2013, clickbait had overrun News Feed. Upworthy, a progressive activism site co-founded by Eli Pariser, the author of “The Filter Bubble,” that relied heavily on teasing headlines, was attracting 90 million readers a month to its feel-good viral posts.

If a human editor ran News Feed, she would look at the clickbait scourge and make simple, intuitive fixes: Turn down the Upworthy knob. But Facebook approaches the feed as an engineering project rather than an editorial one. When it makes alterations in the code that powers News Feed, it’s often only because it has found some clear signal in its data that users are demanding the change. In this sense, clickbait was a riddle. In surveys, people kept telling Facebook that they hated teasing headlines. But if that was true, why were they clicking on them? Was there something Facebook’s algorithm was missing, some signal that would show that despite the clicks, clickbait was really sickening users?

If you want to understand why fake news will be a hard problem to crack, this is a good place to start.

Political views warp your judgement — and how

Fascinating — and scary — piece of research reported in the Washington Post. On Sunday and Monday, YouGov surveyed 1,388 American adults. Researchers showed half of them a crowd photo from each inauguration and asked which was from Trump’s inauguration and which was from Obama’s. The other half were simply asked which picture showed more people.

Simple, eh? Well, guess what?

Zuckerberg, truth and ‘meaningfulness’

Wow! The controversy about fake news on Facebook during the election has finally got to the Boss. Mark Zuckerberg wrote a long status update (aka blog post) on the subject. Here’s a sample:

Of all the content on Facebook, more than 99% of what people see is authentic. Only a very small amount is fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics. Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other.

That said, we don’t want any hoaxes on Facebook. Our goal is to show people the content they will find most meaningful, and people want accurate news. We have already launched work enabling our community to flag hoaxes and fake news, and there is more we can do here. We have made progress, and we will continue to work on this to improve further.

This is an area where I believe we must proceed very carefully though. Identifying the “truth” is complicated. While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted. An even greater volume of stories express an opinion that many will disagree with and flag as incorrect even when factual. I am confident we can find ways for our community to tell us what content is most meaningful, but I believe we must be extremely cautious about becoming arbiters of truth ourselves.

Well, he’s right about the elusiveness of ‘the truth’. But ‘meaningful’ ain’t it. Zuckerberg is squirming on the hook of editorial responsibility — which he desperately doesn’t want to have.

Collateral damage and the NSA’s stash of cyberweapons

This morning’s Observer column:

All software has bugs and all networked systems have security holes in them. If you wanted to build a model of our online world out of cheese, you’d need emmental to make it realistic. These holes (vulnerabilities) are constantly being discovered and patched, but the process by which this happens is, inevitably, reactive. Someone discovers a vulnerability, reports it either to the software company that wrote the code or to US-CERT, the United States Computer Emergency Readiness Team. A fix for the vulnerability is then devised and a “patch” is issued by computer security companies such as Kaspersky and/or by software and computer companies. At the receiving end, it is hoped that computer users and network administrators will then install the patch. Some do, but many don’t, alas.

It’s a lousy system, but it’s the only one we’ve got. It has two obvious flaws. The first is that the response always lags behind the threat by days, weeks or months, during which the malicious software that exploits the vulnerability is doing its ghastly work. The second is that it is completely dependent on people reporting the vulnerabilities that they have discovered.

Zero-day vulnerabilities are the unreported ones…

Read on