Why fake news will be hard to fix — it’s the users, stoopid

Here’s a telling excerpt from a fine piece about Facebook by Farhad Manjoo:

The people who work on News Feed aren’t making decisions that turn on fuzzy human ideas like ethics, judgment, intuition or seniority. They are concerned only with quantifiable outcomes about people’s actions on the site. That data, at Facebook, is the only real truth. And it is a particular kind of truth: The News Feed team’s ultimate mission is to figure out what users want — what they find “meaningful,” to use Cox and Zuckerberg’s preferred term — and to give them more of that.

This ideal runs so deep that the people who make News Feed often have to put aside their own notions of what’s best. “One of the things we’ve all learned over the years is that our intuition can be wrong a fair amount of the time,” John Hegeman, the vice president of product management and a News Feed team member, told me. “There are things you don’t expect will happen. And we learn a lot from that process: Why didn’t that happen, and what might that mean?” But it is precisely this ideal that conflicts with attempts to wrangle the feed in the way press critics have called for. The whole purpose of editorial guidelines and ethics is often to suppress individual instincts in favor of some larger social goal. Facebook finds it very hard to suppress anything that its users’ actions say they want. In some cases, it has been easier for the company to seek out evidence that, in fact, users don’t want these things at all.

Facebook’s two-year-long battle against “clickbait” is a telling example. Early this decade, the internet’s headline writers discovered the power of stories that trick you into clicking on them, like those that teasingly withhold information from their headlines: “Dustin Hoffman Breaks Down Crying Explaining Something That Every Woman Sadly Already Experienced.” By the fall of 2013, clickbait had overrun News Feed. Upworthy, a progressive activism site co-founded by [Eli] Pariser, the author of “The Filter Bubble,” that relied heavily on teasing headlines, was attracting 90 million readers a month to its feel-good viral posts.

If a human editor ran News Feed, she would look at the clickbait scourge and make simple, intuitive fixes: Turn down the Upworthy knob. But Facebook approaches the feed as an engineering project rather than an editorial one. When it makes alterations in the code that powers News Feed, it’s often only because it has found some clear signal in its data that users are demanding the change. In this sense, clickbait was a riddle. In surveys, people kept telling Facebook that they hated teasing headlines. But if that was true, why were they clicking on them? Was there something Facebook’s algorithm was missing, some signal that would show that despite the clicks, clickbait was really sickening users?

If you want to understand why fake news will be a hard problem to crack, this is a good place to start.
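To see why the riddle is hard, here’s a minimal sketch of the trap (entirely hypothetical, not Facebook’s code; the signal names and numbers are invented). If the ranking score is driven by clicks alone, clickbait wins by construction; any fix has to come from some other behavioural signal that says the click was regretted, such as how quickly readers bounce back after clicking.

```python
# Hypothetical illustration only -- not Facebook's actual ranking code.
from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    click_rate: float         # fraction of viewers who click (observed)
    avg_dwell_seconds: float  # time spent after clicking (a possible counter-signal)

def naive_score(post: Post) -> float:
    """Rank purely on clicks: clickbait wins, because people do click."""
    return post.click_rate

def dwell_adjusted_score(post: Post, min_dwell: float = 15.0) -> float:
    """Demote posts that people click on but bounce straight back from."""
    penalty = min(post.avg_dwell_seconds / min_dwell, 1.0)
    return post.click_rate * penalty

teaser = Post("You Won't Believe What Happened Next", 0.12, 4.0)
report = Post("City Council Approves New Budget", 0.03, 45.0)

for post in (teaser, report):
    print(post.headline, naive_score(post), round(dwell_adjusted_score(post), 3))
```

On the click-only score the teaser beats the sober headline four to one; with the dwell penalty they land roughly level. Note that the editorial judgment hasn’t gone away. It has just been smuggled back in as a choice of which signal to trust.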

Google’s new power-grab

Google’s Chrome browser is popular worldwide. And it turns out that many of its users don’t like ads — which is very naughty of them in an ad-based universe. But now there are rumours that Google plans to incorporate some kind of blocking of “unacceptable” ads in its browser. Which of course might be welcome to many users. But it would also make Google the arbiter of what is “unacceptable”.
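To make the “arbiter” point concrete: stripped of engineering detail, browser-level ad blocking is just a vendor-curated list plus a membership test. A toy sketch (my illustration, not Chrome’s actual mechanism; the format names are invented):

```python
# Toy sketch -- not Chrome's implementation. The point is who edits the set.
UNACCEPTABLE = {            # hypothetical filter list, curated by the vendor
    "pop-under",
    "autoplay-audio",
    "prestitial-countdown",
}

def should_block(ad_format: str) -> bool:
    return ad_format in UNACCEPTABLE

for fmt in ("pop-under", "static-banner"):
    print(fmt, "blocked" if should_block(fmt) else "allowed")
```

Every line of that set is a policy decision; put it in the world’s most popular browser and the vendor’s policy becomes everyone’s default.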

Source

Hypocrisy on stilts

Now here’s something you couldn’t make up — unless you have plumbed the depths of surveillance capitalism. Unroll.me is a ‘service’ that promises to help you clean up your inbox. You give it permission to access your Gmail, for example, and: “Instantly see a list of all your subscription emails. Unsubscribe easily from whatever you don’t want.”

Unroll.me is owned by an analytics outfit called Slice Intelligence. And last week the New York Times (in a profile of Uber’s controversial boss, Travis Kalanick) revealed that Unroll was collecting its subscribers’ emailed Lyft receipts from their inboxes and selling the anonymized data to Uber — which used the data as a proxy for the health of its competitor’s business.
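It’s worth noticing how little machinery this requires. Here’s a hypothetical sketch (not Slice’s actual pipeline; the subject pattern and inbox shape are invented) of how one might harvest receipts and “anonymize” them:

```python
# Hypothetical sketch -- not Slice's code. Scan receipt emails, drop the
# identity, keep the purchase: the aggregate is still a competitive signal.
import re

RECEIPT_SUBJECT = re.compile(r"your ride with lyft", re.IGNORECASE)  # assumed pattern
FARE = re.compile(r"\$(\d+\.\d{2})")

def extract_fares(inbox):
    """Return fares from matching receipts; sender, name and address are discarded."""
    fares = []
    for msg in inbox:
        if RECEIPT_SUBJECT.search(msg["subject"]):
            match = FARE.search(msg["body"])
            if match:
                fares.append(float(match.group(1)))  # no user attached: "anonymous"
    return fares

inbox = [
    {"subject": "Your ride with Lyft", "body": "Total charged: $14.50"},
    {"subject": "Weekly newsletter", "body": "Hello!"},
]
print(extract_fares(inbox))  # [14.5]
```

Each record is indeed “anonymous” in the sense Hedaya means; summed across millions of inboxes, it becomes a week-by-week readout of a competitor’s business, which is reportedly what Uber was paying for.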

Embarrassing, eh? Not at all. Unroll’s boss, Jojo Hedaya, has published a post on the company blog under the headline “We Can Do Better”. “Our users are the heart of our company and service”, it begins,

So it was heartbreaking to see that some of our users were upset to learn about how we monetize our free service.

And while we try our best to be open about our business model, recent customer feedback tells me we weren’t explicit enough.

Note (i) “heartbreaking” and (ii) “recent customer feedback”. Translation: (i) disastrous; (ii) good investigative journalism by the New York Times.

Crocodile tears having been duly shed, Jojo continues:

So we need to do better for our users, and will from this point forward, with clearer messaging on our website, in our app, and in our FAQs. We will also be more clear about our data usage in our on-boarding process. The rest will remain the same: providing a killer service that gives you hours back in your day while protecting your privacy and security above all else.

I can’t stress enough the importance of your privacy. We never, ever release personal data about you. All data is completely anonymous and related to purchases only. To get a sense of what this data looks like and how it is used, check out the Slice Intelligence blog.

Thank you for being such an important part of our company. If there’s more we can be doing better, please let me know.

George Orwell would have really enjoyed this. Schmucks are “such an important part of our company”, for example. And he “can’t stress enough” the importance of said schmucks’ privacy.

But — as Charles Arthur points out — there’s nothing in Jojo’s FAQs about selling the data.