Another reason not to like Facebook Likes

From The Register:

Organisations that deploy Facebook’s ubiquitous “Like” button on their websites risk falling foul of the General Data Protection Regulation following a landmark ruling by the European Court of Justice.

The EU’s highest court has decided that website owners can be held liable for data collection when using the so-called “social sharing” widgets.

The ruling (PDF) states that employing such widgets would make the organisation a joint data controller, along with Facebook – and judging by its recent record, you don’t want to be anywhere near Zuckerberg’s antisocial network when privacy regulators come a-calling.

Well, well.

The soft underbelly of social media

Sarah Roberts has just published Behind the Screen: Content Moderation in the Shadows of Social Media, a major study of the impact of content ‘moderation’ on those who clean up social media so that the rest of us are not traumatised or scandalised by what appears in our feeds. Isaac Chotiner has an interesting interview with her in the New Yorker which includes this brief exchange:

You also go to the Philippines in this book and you talk to people from other countries, in Mexico, for example. What are the consequences of outsourcing these jobs in terms of the quality of the work being done? And I don’t ask that to imply that people abroad can’t do a job as well.

I think there is a precedent for outsourcing this type of service work, and we see that in the call-center industry. The same kinds of problems that are present in that work are present in this particular context. So that would be things like the dissonance and distance culturally and linguistically, contextually, and politically, for a group of people that are being asked to adjudicate and make decisions about material that emanates from one place in the world and is destined for another, that may have absolutely nothing to do with their day-to-day life.

I think a second thing is that the marketplace has chased a globalization solution for the same reasons it has in other industries, which are the issues of: Where can we get the cheapest labor? What countries are lax in terms of labor protections? Where is organizing low? Where is there a huge pool of people for whom this job might be appealing because it’s better than the other jobs on offer? It’s not a simple case of everyone in the Philippines who does this work is exploited, and I was really trying hard not to make that claim in the book. But, at the same time, the United States sends the work to the Philippines for a reason. It sends the work there because Filipino people have a long-standing relationship, so to speak, with the United States, that means that they have a better facility to understand the American context. That’s actually been in the favor of most people in the Philippines.

It’s worrisome to see those kinds of colonial traditions and practices picked up again, especially in this digital marketplace, this marketplace of the mind that was supposed to be deliverance from so many of the difficult working conditions of the twentieth century. So I think that’s the big thing about the way that this plays out on the global stage. The companies have a problem that they don’t have enough people to do the work. And so they are pulling out all the stops in a way to find people to do the work, but it’s still not nearly enough.

What could be done to make the lives of these workers better, given that this is a job that needs to be done? And it needs to be done by smart people doing it well, who need to be very well-trained.

This is a question that I’ve often posed to the workers themselves because I certainly am not possessed of the answers on my own. They want better pay. And I think we can read that in a lot of ways: they want better pay, they want to be respected. The nature of the way the work has been designed has been for the work to be secret. In many cases, their N.D.A. precludes them from even talking about the work. And the industry itself formulated the job as a source of shame in that sense, an industry source of shame. They were not eager to tout the efforts of these people, and so instead they hid them in the shadows. And, if nothing else, that was a business decision and a value judgment that could have gone another way. I think there’s still a chance that we could understand the work of these people in a different way and value it differently, collectively. And we could ask that the companies do that as well.

Good interview. Splendid book.

Quote of the Day

Q: We’re now more than two years out from that experience, and obviously the controversies have not gone away — they’ve actually multiplied. Do you think Zuckerberg and Sandberg have made any progress on the stuff you warned about?

A: I want to avoid absolutes, but I think it’s safe to say that the business model is the source of the problem, and that it’s the same business model as before. And to the extent that they made progress, it’s in going after different moles in the Whack-a-Mole game. From the point of view of the audience, Facebook is as threatening as ever.

From an interview with Roger McNamee, an early investor in Facebook and, apparently, a recovering former mentor to Mark Zuckerberg. He’s also the author of Zucked: Waking Up to the Facebook Catastrophe.

The dark side of recommendation engines

This morning’s Observer column:

My eye was caught by a headline in Wired magazine: “When algorithms think you want to die”. Below it was an article by two academic researchers, Ysabel Gerrard and Tarleton Gillespie, about the “recommendation engines” that are a central feature of social media and e-commerce sites.

Everyone who uses the web is familiar with these engines. A recommendation algorithm is what prompts Amazon to tell me that since I’ve bought Custodians of the Internet, Gillespie’s excellent book on the moderation of online content, I might also be interested in Safiya Umoja Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism and a host of other books about algorithmic power and bias. In that particular case, the algorithm’s guess is accurate and helpful: it informs me about stuff that I should have known about but hadn’t.

Recommendation engines are central to the “personalisation” of online content and were once seen as largely benign…

Read on
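
For readers who want a concrete feel for the mechanism, here is a deliberately minimal sketch, in Python, of the “bought X, so you might like Y” logic the column describes. The baskets and the weight given to co-occurrence are invented for illustration; real engines at Amazon scale use far more elaborate statistical models, but the core idea of recommending what co-occurs is the same.

```python
# A minimal, purely illustrative co-occurrence recommender in the
# "bought X, also bought Y" mould. All baskets here are invented.
from collections import Counter
from itertools import permutations

purchases = [
    {"Custodians of the Internet", "Algorithms of Oppression"},
    {"Custodians of the Internet", "Algorithms of Oppression", "Zucked"},
    {"Custodians of the Internet", "Zucked"},
    {"Algorithms of Oppression"},
]

# Count how often each pair of titles appears in the same basket.
co_counts: dict[str, Counter] = {}
for basket in purchases:
    for a, b in permutations(basket, 2):
        co_counts.setdefault(a, Counter())[b] += 1

def recommend(title: str, n: int = 3) -> list[str]:
    """Titles most often bought alongside `title`."""
    return [other for other, _ in co_counts.get(title, Counter()).most_common(n)]

print(recommend("Custodians of the Internet"))
# -> ['Algorithms of Oppression', 'Zucked'] (or the reverse; they tie here)
```

The benign case in the column is exactly this: the engine surfaces things genuinely adjacent to your interests. The trouble the Wired piece describes begins when the same machinery is applied to content whose adjacency is harmful.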

Shoshana Zuboff’s new book

Today’s Observer carries a five-page feature about Shoshana Zuboff’s The Age of Surveillance Capitalism, consisting of an intro by me followed by a Q&A between me and the author.

LATER: Nick Carr has a perceptive review of the book in the LA Review of Books. John Thornhill also had a good long review in last Saturday’s Financial Times, sadly behind a paywall.

Understanding platforms

From an interesting piece by Max Fisher:

We think of any danger as coming from misuse — scammers, hackers, state-sponsored misinformation — but we’re starting to understand the risks that come from these platforms working exactly as designed. Facebook, YouTube and others use algorithms to identify and promote content that will keep us engaged, which turns out to amplify some of our worst impulses.

Even after reporting with Amanda Taub on algorithm-driven violence in Germany and Sri Lanka, I didn’t quite appreciate this until I turned on Facebook push alerts this summer. Right away, virtually every gadget I owned started blowing up with multiple daily alerts urging me to check in on my ex, even if she hadn’t posted anything. I’d stayed away from her page for months specifically to avoid training Facebook to show me her posts. Yet somehow the algorithm had correctly identified this as the thing likeliest to make me click, then followed me across continents to ensure that I did.

It made me think of the old “Terminator” movies, except instead of a killer robot sent to find Sarah Connor, it’s a sophisticated set of programs ruthlessly pursuing our attention. And exploiting our most human frailties to do it.
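
Fisher’s point, that the harm comes from the system working as designed, is easier to see in miniature. The toy sketch below is my own illustration, not anything from Facebook’s codebase: every post, prediction and weight is invented. It simply ranks a feed by a predicted-engagement score, and given those hypothetical numbers the painful post wins the top slot precisely because it is the one the model thinks you are likeliest to click.

```python
# Toy illustration of engagement-optimised ranking. Nothing here is
# Facebook's actual code; the posts, predictions and weights are all
# invented to show the shape of the incentive.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_click_prob: float  # model's guess you will click (0..1)
    predicted_dwell_secs: float  # model's guess at seconds of attention

def engagement_score(p: Post) -> float:
    # Hypothetical weighting; real systems tune such weights endlessly.
    return 0.7 * p.predicted_click_prob + 0.3 * (p.predicted_dwell_secs / 60)

candidates = [
    Post("Friend's holiday photos", 0.10, 5.0),
    Post("Your ex updated their profile", 0.80, 40.0),
    Post("Local news headline", 0.25, 10.0),
]

feed = sorted(candidates, key=engagement_score, reverse=True)
for post in feed:
    print(f"{engagement_score(post):.2f}  {post.text}")
# The ranking rewards whatever the model predicts will hold your
# attention, regardless of whether seeing it is good for you.
```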

Anti-semitism continues to thrive online

From today’s New York Times:

SAN FRANCISCO — On Monday, a search on Instagram, the photo-sharing site owned by Facebook, produced a torrent of anti-Semitic images and videos uploaded in the wake of Saturday’s shooting at a Pittsburgh synagogue.

A search for the word “Jews” displayed 11,696 posts with the hashtag “#jewsdid911,” claiming that Jews had orchestrated the Sept. 11 terror attacks. Other hashtags on Instagram referenced Nazi ideology, including the number 88, an abbreviation used for the Nazi salute “Heil Hitler.”

The Instagram posts demonstrated a stark reality. Over the last 10 years, Silicon Valley’s social media companies have expanded their reach and influence to the furthest corners of the world. But it has become glaringly apparent that the companies never quite understood the negative consequences of that influence nor what to do about it — and that they cannot put the genie back in the bottle.

“Social media is emboldening people to cross the line and push the envelope on what they are willing to say to provoke and to incite,” said Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism. “The problem is clearly expanding.”

When will this penny drop, one wonders. These companies can’t fix this problem, because their business models depend on allowing people to do what they like — and then reacting, ineffectually, after the fact.

Facebook: another routine scandal

From today’s New York Times:

SAN FRANCISCO — On the same day Facebook announced that it had carried out its biggest purge yet of American accounts peddling disinformation, the company quietly made another revelation: It had removed 66 accounts, pages and apps linked to Russian firms that build facial recognition software for the Russian government.

Facebook said Thursday that it had removed any accounts associated with SocialDataHub and its sister firm, Fubutech, because the companies violated its policies by scraping data from the social network.

“Facebook has reason to believe your work for the government has included matching photos from individuals’ personal social media accounts in order to identify them,” the company said in a cease-and-desist letter to SocialDataHub that was dated Tuesday and viewed by The New York Times.