How Facebook’s advertising machine enables ‘custom audiences’ that include anti-Semites and white supremacists

This is beginning to get routine. I’ve said for some time that if you really want to understand Facebook, you have to go in as an advertiser (i.e. the real customer) rather than as a mere user. When you do, you come face to face with the company’s amazingly helpful automated system for choosing the ‘custom audiences’ you want (or ought) to be targeting. A while back, ProPublica did a memorable experiment along these lines. Now The Intercept has done the same:

Earlier this week, The Intercept was able to select “white genocide conspiracy theory” as a pre-defined “detailed targeting” criterion on the social network to promote two articles to an interest group that Facebook pegged at 168,000 users large and defined as “people who have expressed an interest or like pages related to White genocide conspiracy theory.” The paid promotion was approved by Facebook’s advertising wing. After we contacted the company for comment, Facebook promptly deleted the targeting category, apologized, and said it should have never existed in the first place.

Our reporting technique was the same as one used by the investigative news outlet ProPublica to report, just over one year ago, that in addition to soccer dads and Ariana Grande fans, “the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of ‘Jew hater,’ ‘How to burn jews,’ or, ‘History of “why jews ruin the world.”’”

Facebook: another routine scandal

From today’s New York Times:

SAN FRANCISCO — On the same day Facebook announced that it had carried out its biggest purge yet of American accounts peddling disinformation, the company quietly made another revelation: It had removed 66 accounts, pages and apps linked to Russian firms that build facial recognition software for the Russian government.

Facebook said Thursday that it had removed any accounts associated with SocialDataHub and its sister firm, Fubutech, because the companies violated its policies by scraping data from the social network.

“Facebook has reason to believe your work for the government has included matching photos from individuals’ personal social media accounts in order to identify them,” the company said in a cease-and-desist letter to SocialDataHub that was dated Tuesday and viewed by The New York Times.

Ireland’s need for a new narrative

Last Friday’s Irish Times carried a piece by Charlie Taylor based on an interview I gave, in which I argued that a country that had built its identity (and prosperity) largely on a policy of being nice to big multinational companies might need a new narrative now that some of its more welcome guests turn out to be toxic.

Feeding the crocodile

This morning’s Observer column:

Last week, Kevin Systrom and Mike Krieger, the co-founders of Instagram, announced that they were leaving Facebook, where they had worked since Mark Zuckerberg bought their company six years ago. “We’re planning on taking some time off to explore our curiosity and creativity again,” Systrom wrote in a statement on the Instagram blog. “Building new things requires that we step back, understand what inspires us and match that with what the world needs; that’s what we plan to do.”

Quite so. It’s always refreshing when young millionaires decide to spend more time with their money. (Facebook paid $715m for their little outfit when it acquired it; Instagram had 13 employees at the time.) But to those of us who have an unhealthy interest in what goes on at Facebook, the real question about Systrom’s and Krieger’s departure was: what took them so long?

Read on

The political economy of trust

Cambridge University has a new ‘strategic research initiative’ on Trust and Technology, on whose Steering Group I sit.

We’re having a big launch event on September 20, and so I’ve been brooding about the issues surrounding it. Much (most?) of the discussion of trustworthy technology is understandably focussed on the technology itself. But this ignores the fact that the kit doesn’t exist in a vacuum. Digital technology is now part of the everyday lives of some 4 billion people, and our dependence on it has raised many questions of trust, reliability, integrity, dependability, equity and control.

Some of these issues undoubtedly stem from technical characteristics of the equipment (think of all the crummy IoT devices coming from China); others stem from the fallibility or ignorance of users (accepting default passwords, for instance); but a significant proportion comes from the fact that network technology is deployed by global corporations with distinctive business models and strategic interests that are not necessarily aligned with either the public interest or the wellbeing of users.

An interesting current example is provided by VPN (Virtual Private Network) technology. A VPN allows users to create a private network that runs over a public one, so that they can send and receive data across the public network as if their devices were directly connected to the private network. The benefits of VPNs include enhanced functionality, security and privacy protection, and they are a boon for internet users who have to rely on ‘free’ public WiFi services in hotels, cafes and public transport. In that sense, a VPN is a technology that enhances the trustworthiness of open WiFi networks. I use an encrypted VPN all the time on all my devices, and never use an open WiFi network unless I have the VPN switched on.
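To make the idea concrete, here’s a toy sketch in Python of what a VPN does in essence: encrypt traffic before it crosses an untrusted network, and decrypt it at a trusted endpoint. This is purely illustrative, and the function names are my invention; real VPNs (WireGuard, OpenVPN, IPsec) work at the packet level with proper key exchange, authentication and replay protection, none of which is modelled here.

```python
# A toy "tunnel": the core idea of a VPN, not a real implementation.
# The shared key stands in for what a real VPN negotiates via key exchange.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared between the client and the VPN server
tunnel = Fernet(key)

def send_over_open_wifi(payload: bytes) -> bytes:
    """Anyone snooping on the open WiFi sees only this ciphertext."""
    return tunnel.encrypt(payload)

def receive_at_vpn_server(wire_bytes: bytes) -> bytes:
    """The trusted endpoint decrypts and forwards the original traffic."""
    return tunnel.decrypt(wire_bytes)

request = b"GET /inbox HTTP/1.1\r\nHost: example.com\r\n\r\n"
ciphertext = send_over_open_wifi(request)
assert receive_at_vpn_server(ciphertext) == request
```

Note the corollary, which matters for what follows: whoever runs the VPN server sees all the decrypted traffic. You are not removing a middleman; you are choosing one.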

Earlier this year, Facebook generously offered some of its users Onavo Protect, a VPN developed by an Israeli company that Facebook bought in 2013. A link to the product appeared in the feeds of some US Facebook iOS users under the banner “Protect”. Clicking through on this led to the download link for “Onavo Protect — VPN Security” on the Apple App Store.

The blurb for the app included a promise to “keep you and your data safe when you browse and share information on the web”, but omitted to point out that its functionality involved tracking user activity across many different applications to gain insights into how Facebook’s customers use third-party services. Whenever an Onavo user opened an app or website, the traffic was redirected to Facebook’s servers, which logged the action in a database, allowing the company to draw conclusions about internet usage from the aggregated data.
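It’s worth being clear about how cheaply this kind of surveillance comes once you sit at the tunnel endpoint. The hypothetical Python sketch below is my invention, not Facebook’s code (the names log_flow and flows are made up); it shows the sort of logging a VPN operator could do without decrypting a single payload. Merely recording which hosts each user contacts, and when, is enough to reveal which apps they use and how often.

```python
# Hypothetical sketch of Onavo-style flow logging at a VPN endpoint.
# Invented for illustration: this is not Facebook's actual code.
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect("usage.db")
db.execute("CREATE TABLE IF NOT EXISTS flows (user_id TEXT, host TEXT, ts TEXT)")

def log_flow(user_id: str, destination_host: str) -> None:
    """Record that a user's tunnelled traffic went to a given host.

    Even for TLS traffic, the destination host is visible to the tunnel
    operator from the connection itself, so no payload decryption is
    needed to build a picture of app usage.
    """
    db.execute(
        "INSERT INTO flows VALUES (?, ?, ?)",
        (user_id, destination_host, datetime.now(timezone.utc).isoformat()),
    )
    db.commit()

# e.g. the endpoint sees a user's phone connect to WhatsApp's servers:
log_flow("user-123", "g.whatsapp.net")
```

A few million rows of that, aggregated, tell you which third-party apps are gaining or losing users: exactly the kind of ‘insight’ the blurb didn’t mention.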

Needless to say, close inspection of the Terms and Conditions associated with the app revealed that “Onavo collects your mobile data traffic. This helps us improve and operate the Onavo service by analyzing your use of websites, apps and data”. Whether non-technical users, who presumably imagined that a VPN would provide security and privacy for their browsing (rather than enabling Facebook to track their online activities outside its ‘walled garden’), understood what this meant is an interesting question. In August 2018, Apple settled the issue, ruling that Onavo Protect violated a part of its developer agreement that prevents apps from using data in ways that go beyond what is directly relevant to the app or to providing advertising; the app was then removed from the App Store by Facebook, after discussions with Apple. (It is still available for Android users on the Google Play store.)

And the moral? In assessing trustworthiness, the technical affordances of the technology are obviously important. But they may be only part of the story. The other part — the political economy of the technology — may actually turn out to be the more important one.

Facebook’s whack-a-mole job will never be completed

Useful NYT report this morning:

Facebook’s fight against disinformation and hate speech will be a topic of discussion on Capitol Hill on Wednesday, when Sheryl Sandberg, the company’s chief operating officer, will join Jack Dorsey, Twitter’s chief executive, to testify in front of the Senate Intelligence Committee.

When it comes to public-facing pages, Ms. Sandberg will have plenty of company actions to cite. Facebook has taken many steps to clean up its platform, including hiring thousands of additional moderators, developing new artificial-intelligence tools and breaking up coordinated influence operations ahead of the midterm elections.

But when it comes to more private forms of communication through the company’s services — like Facebook groups, or the messaging apps WhatsApp and Facebook Messenger — the social network’s progress is less clear. Some experts worry that Facebook’s public cleanup may be pushing more toxic content into these private channels, where it is harder to monitor and moderate.

Misinformation is not against Facebook’s policies unless it leads to violence. But many of the private groups reviewed by The New York Times contained content and behavior that appeared to violate other Facebook rules, such as rules against targeted harassment and hate speech. In one large QAnon group, members planned a coordinated harassment campaign, known as Operation Mayflower, against public figures such as the actor Michael Ian Black, the late-night host Stephen Colbert and the CNN journalist Jim Acosta. In the Infowars group, posts about Muslims and immigrants have drawn threatening comments, including calls to deport, castrate and kill people.

As one social-media exec said to Kara Swisher, “there’s no fixing this”.

Facebook can’t control its users. And it has no incentive to do so

This morning’s Observer column:

Most people I know who use Facebook seem normal. And the uses to which they put the service also seem normal – harmless to the point of banality. So when they see reports of how social media is being used to fuel extremism, violence, racism, intolerance, hatred – even ethnic cleansing – they are puzzled. Who are the people who do things like that? And why doesn’t Facebook stop them?

To answer the first question, let us visit Altena, a town of about 18,000 souls in North Rhine-Westphalia in Germany. After Angela Merkel opened Germany’s doors to refugees, Altena took its quota, like any good German town. When refugees first arrived, so many locals volunteered to help that Anette Wesemann, who runs Altena’s refugee integration centre, couldn’t keep up. She’d find Syrian or Afghan families attended by groups of volunteer German tutors. “It was really moving,” she told a New York Times reporter.

But when Wesemann set up a Facebook page to organise food banks and volunteer activities, things changed…

Read on

Tech companies and ‘fractal irresponsibility’

Nice, insightful essay by Alexis Madrigal, arguing that every new scandal is a fractal representation of the giant service that produced it:

On Tuesday, BuzzFeed published a memo from the outgoing Facebook chief security officer, Alex Stamos, in which he summarizes what the company needs to do to “win back the world’s trust.” And what needs to change is … well, just about everything. Facebook needs to revise “the metrics we measure” and “the goals.” It needs to not ship code more often. It needs to think in new ways “in every process, product, and engineering decision.” It needs to make the user experience more honest and respectful, to collect less data, to keep less data. It needs to “listen to people (including internally) when they tell us a feature is creepy or point out a negative impact we are having in the world.” It needs to deprioritize growth and change its relationship with its investors. And finally, Stamos wrote, “We need to be willing to pick sides when there are clear moral or humanitarian issues.” YouTube (and its parent company, Alphabet), Twitter, Snapchat, Instagram, Uber, and every other tech company could probably build a list that contains many of the same critiques and some others.

People encountering problems online probably don’t think of every single one of these institutional issues when something happens. But they sense that the pattern they are seeing is linked to the fact that these are the most valuable companies in the world, and that they don’t like the world they see through those services or IRL around them. That’s what I mean by fractal irresponsibility: Each problem isn’t just one in a sequence, but part of the same whole.

Interesting also that Facebook’s chief security officer has left the company, and that his position is not going to be filled.

The problem with Facebook is Facebook

Kara Swisher has joined the New York Times. Her first column pulls no punches. Sample:

In a post about the latest disinformation campaign, the company said about security challenges: “We face determined, well-funded adversaries who will never give up and are constantly changing tactics. It’s an arms race and we need to constantly improve too.”

The arms race metaphor is a good one, but not for the reasons Facebook intended. Here’s how I see it: Facebook, as well as Twitter and Google’s YouTube, have become the digital arms dealers of the modern age.

All these companies began with a gauzy credo to change the world. But they have done that in ways they did not imagine — by weaponizing pretty much everything that could be weaponized. They have mutated human communication, so that connecting people has too often become about pitting them against one another, and turbocharged that discord to an unprecedented and damaging volume.

They have weaponized social media. They have weaponized the First Amendment. They have weaponized civic discourse. And they have weaponized, most of all, politics.

Lots more where that came from. Worth reading in full.

Zuckerberg’s monster

This morning’s Observer column:

Who – or what – is Mark Zuckerberg? Obviously he’s the founder and CEO of Facebook, which is, in theory, a public company but is in fact his fiefdom, as a casual inspection of the company’s SEC filings confirms. They show that his ownership of the controlling shares means that he can do anything he likes, including selling the company against the wishes of all the other shareholders combined.

But the fact that Zuck wields autocratic power over a huge corporation doesn’t quite get the measure of him. A better metaphor is that he is the Dr Frankenstein de nos jours. Readers of Mary Shelley’s great 19th-century novel will know the story: of how an ingenious scientist – Dr Victor Frankenstein – creates a grotesque but sentient creature in an unorthodox scientific experiment. Repulsed by the monster he has made, Frankenstein flees, but finds that he cannot escape his creation. In the end, Frankenstein dies of exposure in the Arctic, pursuing the monster who has murdered his bride. We never learn what happened to the creature.

Facebook is Zuckerberg’s monster. Unlike Frankenstein, he is still enamoured of his creation, which has made him richer than Croesus and the undisputed ruler of an empire of 2.2 billion users. It has also given him a great deal of power, together with the responsibilities that go with it. But it’s becoming increasingly clear that his creature is out of control, that he’s uneasy about the power and has few good ideas about how to discharge his responsibilities…

Read on