Facebook meets irresistible force

Terrific blog post by Josh Marshall:

I believe what we’re seeing here is a convergence of two separate but highly charged news streams and political moments. On the one hand, you have the Russia probe, with all that is tied to that investigation. On another, you have the rising public backlash against Big Tech, the various threats it arguably poses and its outsized power in the American economy and American public life. A couple weeks ago, I wrote that after working with Google in various capacities for more than a decade I’d observed that Google is, institutionally, so accustomed to its customers actually being its products that when it gets into lines of business where its customers are really customers it really doesn’t know how to deal with them. There’s something comparable with Facebook.

Facebook is so accustomed to treating its ‘internal policies’ as though they were something like laws that they appear to have a sort of blind spot that prevents them from seeing how ridiculous their resistance sounds. To use the cliche, it feels like a real shark jumping moment. As someone recently observed, Facebook’s ‘internal policies’ are crafted to create the appearance of civic concerns for privacy, free speech, and other similar concerns. But they’re actually just a business model. Facebook’s ‘internal policies’ amount to a kind of Stepford Wives version of civic liberalism and speech and privacy rights, the outward form of the things preserved while the innards have been gutted and replaced by something entirely different, an aggressive and totalizing business model which in many ways turns these norms and values on their heads. More to the point, most people have the experience of Facebook’s ‘internal policies’ being meaningless in terms of protecting their speech or privacy or whatever as soon as they bump up against Facebook’s business model.

Spot on. Especially the Stepford Wives metaphor.

Why fake news will be hard to fix — it’s the users, stoopid

Here’s a telling excerpt from a fine piece about Facebook by Farhad Manjoo:

The people who work on News Feed aren’t making decisions that turn on fuzzy human ideas like ethics, judgment, intuition or seniority. They are concerned only with quantifiable outcomes about people’s actions on the site. That data, at Facebook, is the only real truth. And it is a particular kind of truth: The News Feed team’s ultimate mission is to figure out what users want — what they find “meaningful,” to use Cox and Zuckerberg’s preferred term — and to give them more of that.

This ideal runs so deep that the people who make News Feed often have to put aside their own notions of what’s best. “One of the things we’ve all learned over the years is that our intuition can be wrong a fair amount of the time,” John Hegeman, the vice president of product management and a News Feed team member, told me. “There are things you don’t expect will happen. And we learn a lot from that process: Why didn’t that happen, and what might that mean?” But it is precisely this ideal that conflicts with attempts to wrangle the feed in the way press critics have called for. The whole purpose of editorial guidelines and ethics is often to suppress individual instincts in favor of some larger social goal. Facebook finds it very hard to suppress anything that its users’ actions say they want. In some cases, it has been easier for the company to seek out evidence that, in fact, users don’t want these things at all.

Facebook’s two-year-long battle against “clickbait” is a telling example. Early this decade, the internet’s headline writers discovered the power of stories that trick you into clicking on them, like those that teasingly withhold information from their headlines: “Dustin Hoffman Breaks Down Crying Explaining Something That Every Woman Sadly Already Experienced.” By the fall of 2013, clickbait had overrun News Feed. Upworthy, a progressive activism site co-founded by Eli Pariser, the author of “The Filter Bubble,” that relied heavily on teasing headlines, was attracting 90 million readers a month to its feel-good viral posts.

If a human editor ran News Feed, she would look at the clickbait scourge and make simple, intuitive fixes: Turn down the Upworthy knob. But Facebook approaches the feed as an engineering project rather than an editorial one. When it makes alterations in the code that powers News Feed, it’s often only because it has found some clear signal in its data that users are demanding the change. In this sense, clickbait was a riddle. In surveys, people kept telling Facebook that they hated teasing headlines. But if that was true, why were they clicking on them? Was there something Facebook’s algorithm was missing, some signal that would show that despite the clicks, clickbait was really sickening users?

If you want to understand why fake news will be a hard problem to crack, this is a good place to start.
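
Facebook has never published its actual signals, but a toy sketch may make the clickbait riddle concrete. Everything below is my own illustration (invented names and thresholds, nothing from Manjoo’s reporting): the idea is that raw clicks cannot distinguish curiosity from satisfaction, whereas clicks followed by almost no dwell time or follow-up engagement (a “bounce”) plausibly can.

```python
# Hypothetical illustration only: the field names and the 10-second
# threshold are invented, not Facebook's real ranking signals.
from dataclasses import dataclass

@dataclass
class ClickRecord:
    seconds_on_page: float   # dwell time after clicking through
    interacted_after: bool   # liked, commented or shared afterwards

def bounce_rate(clicks: list[ClickRecord],
                min_dwell_seconds: float = 10.0) -> float:
    """Fraction of click-throughs that 'bounce': the reader comes
    straight back without engaging. A story with many clicks but a
    high bounce rate behaves like clickbait even though the raw
    click count says users 'want' it."""
    if not clicks:
        return 0.0
    bounces = sum(
        1 for c in clicks
        if c.seconds_on_page < min_dwell_seconds and not c.interacted_after
    )
    return bounces / len(clicks)

# Two stories with identical click counts can now be told apart:
teaser = [ClickRecord(3.0, False)] * 90 + [ClickRecord(40.0, True)] * 10
report = [ClickRecord(45.0, True)] * 60 + [ClickRecord(5.0, False)] * 40
print(bounce_rate(teaser))  # 0.9 -> reads like clickbait
print(bounce_rate(report))  # 0.4 -> probably fine
```

The point of the toy is the one Manjoo makes: any fix has to be expressed as a measurable signal of user behaviour, not as an editor’s judgment that the Upworthy knob should be turned down.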

Political views warp your judgement — and how

Fascinating — and scary — piece of research reported in the Washington Post. On Sunday and Monday, YouGov surveyed 1,388 American adults. Researchers showed half of them side-by-side crowd photos, one from each inauguration, and asked which was from Trump’s inauguration and which was from Obama’s. The other half were simply asked which picture showed more people.

Simple, eh? Well, guess what?

Zuckerberg, truth and ‘meaningfulness’

Wow! The controversy about fake news on Facebook during the election has finally got to the Boss. Mark Zuckerberg wrote a long status update (aka blog post) on the subject. Here’s a sample:

Of all the content on Facebook, more than 99% of what people see is authentic. Only a very small amount is fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics. Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other.

That said, we don’t want any hoaxes on Facebook. Our goal is to show people the content they will find most meaningful, and people want accurate news. We have already launched work enabling our community to flag hoaxes and fake news, and there is more we can do here. We have made progress, and we will continue to work on this to improve further.

This is an area where I believe we must proceed very carefully though. Identifying the “truth” is complicated. While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted. An even greater volume of stories express an opinion that many will disagree with and flag as incorrect even when factual. I am confident we can find ways for our community to tell us what content is most meaningful, but I believe we must be extremely cautious about becoming arbiters of truth ourselves.

Well, he’s right about the elusiveness of ‘the truth’. But ‘meaningful’ ain’t it. Zuckerberg is squirming on the hook of editorial responsibility — which he desperately doesn’t want to have.
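
Purely to make his difficulty concrete: suppose hoax-flagging worked the naive way, demoting any story that enough viewers flag. The sketch below is my own invention (Facebook has published no such mechanism or thresholds), and its failure mode is exactly the one Zuckerberg gestures at.

```python
# Invented illustration, not Facebook's system: why raw flag counts
# make a poor proxy for truth.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    views: int
    hoax_flags: int   # times viewers flagged the story as false

def demote_as_hoax(story: Story, threshold: float = 0.02) -> bool:
    """Demote any story flagged by more than `threshold` of its viewers.
    Failure mode: partisans also flag accurate stories they dislike,
    so the flag rate measures disagreement as much as falsehood."""
    if story.views == 0:
        return False
    return story.hoax_flags / story.views > threshold

contested = Story("Accurate but partisan-baiting report",
                  views=100_000, hoax_flags=4_000)
print(demote_as_hoax(contested))  # True: demoted at a 4% flag rate,
                                  # even if every word of it is factual
```

Which is presumably why he would rather have ‘the community’ define ‘meaningful’ than have Facebook define ‘true’.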

Collateral damage and the NSA’s stash of cyberweapons

This morning’s Observer column:

All software has bugs and all networked systems have security holes in them. If you wanted to build a model of our online world out of cheese, you’d need emmental to make it realistic. These holes (vulnerabilities) are constantly being discovered and patched, but the process by which this happens is, inevitably, reactive. Someone discovers a vulnerability, reports it either to the software company that wrote the code or to US-CERT, the United States Computer Emergency Readiness Team. A fix for the vulnerability is then devised and a “patch” is issued by computer security companies such as Kaspersky and/or by software and computer companies. At the receiving end, it is hoped that computer users and network administrators will then install the patch. Some do, but many don’t, alas.

It’s a lousy system, but it’s the only one we’ve got. It has two obvious flaws. The first is that the response always lags behind the threat by days, weeks or months, during which the malicious software that exploits the vulnerability is doing its ghastly work. The second is that it is completely dependent on people reporting the vulnerabilities that they have discovered.

Zero-day vulnerabilities are the unreported ones…

Read on
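
A concrete footnote to the column’s point about reporting: the reactive cycle it describes leaves a public trail in NIST’s National Vulnerability Database, which can be queried through its CVE API. That endpoint is real, but the column mentions no code; the sketch below is mine, and keeps error handling minimal.

```python
# Illustrative sketch: list published CVEs matching a keyword via
# NIST's public NVD API (https://nvd.nist.gov/developers). Endpoint
# and JSON fields are the NVD's; nothing here comes from the column.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves(keyword: str, limit: int = 5) -> None:
    """Print recently published vulnerabilities matching a keyword."""
    resp = requests.get(
        NVD_API,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        summary = next(
            (d["value"] for d in cve["descriptions"] if d["lang"] == "en"),
            "(no description)",
        )
        # Every entry here exists because somebody reported the hole.
        # Zero-days, by definition, never appear in this feed.
        print(f"{cve['id']}  published {cve['published'][:10]}")
        print(f"  {summary[:120]}")

if __name__ == "__main__":
    recent_cves("openssl")
```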