Hypocrisy on stilts

Terrific FT column by Rana Foroohar. Sample:

If the Facebook revelations prove anything, they show that its top leadership is not liberal, but selfishly libertarian. Political ideals will not get in the way of the company’s efforts to protect its share price. This was made clear by Facebook’s hiring of a rightwing consulting group, Definers Public Affairs, to try and spread misinformation about industry rivals to reporters and to demonise George Soros, who had a pipe bomb delivered to his home. At Davos in January, the billionaire investor made a speech questioning the power of platform technology companies.

Think about that for a minute. This is a company that was so desperate to protect its top leadership and its business model that it hired a shadowy PR firm that used anti-Semitism as a political weapon. Patrick Gaspard, president of the Open Society Foundations, founded by Mr Soros, wrote in a letter last week to Ms Sandberg: “The notion that your company, at your direction”, tried to “discredit people exercising their First Amendment rights to protest Facebook’s role in disseminating vile propaganda is frankly astonishing to me”.

I couldn’t agree more. Ms Sandberg says she didn’t know about the tactics being used by Definers Public Affairs. Mr Zuckerberg says that while he understands “DC type firms” might use such tactics, he doesn’t want them associated with Facebook and has cancelled its contract with Definers.

The irony of that statement could be cut with a knife. Silicon Valley companies are among the nation’s biggest corporate lobbyists. They’ve funded many academics doing research on topics of interest to them, and have made large donations to many powerful politicians…

There is a strange consistency in the cant coming from Zuckerberg and Sandberg as they try to respond to the NYT’s exhumation of their attempts to avoid responsibility for Facebook’s malignancy. It’s what PR flacks call “plausible deniability”. Time and again, the despicable or ethically dubious actions taken by Facebook apparently come as a complete surprise to the two at the very top of the company — Zuckerberg and Sandberg. I’m afraid that particular cover story is beginning to look threadbare.

Sheryl Sandberg: now visible in her true colours

Seems that I’m not the only one who’s been thinking about Sheryl Sandberg’s malign role in Facebook’s cynical campaign to evade responsibility for the damage the company is doing. The NYT investigation of Facebook’s campaign to escape the consequences of its actions (and of its business model) highlighted the aggressive role she played in that. Here’s an interesting take on Sandberg from Jessa Crispin in today’s Guardian:

Whether those problems are caused by Russians who sought to sway the 2016 election in favor of Donald Trump or the Myanmar military seeking to cleanse its state of the Rohingya people, Facebook has stubbornly delayed examining its role in geopolitical shifts all over the world. But people have been writing articles about the misdeeds of social media platforms for years, and little oversight, internal reform, or mass exodus of users ever follows.

The newest piece did reveal one thing, however: the vital role COO Sheryl Sandberg played in all of this. This is not the story, however, of the one woman bravely speaking truth to power. Nor is it the ethical influence a celebrated feminist leader had on a company concerned primarily with protecting its economic well being and that of its shareholders. Rather, Sandberg yelled at her employee, Facebook’s security chief, for daring to investigate these issues, and then tried to cover up all he had found. Sandberg also played a pivotal role in lobbying top lawmakers in Washington DC to limit unwanted regulation and scrutiny.

Sandberg, of course, became an aspirational heroine among mainstream, self-empowerment feminists with her 2013 book Lean In: Women, Work, and the Will to Lead

In January 2015, when the Davos elite-fest was in full swing, I wrote an Observer column ridiculing the fatuous ‘reports’ Facebook used to issue round that time, asserting that the company’s impact on jobs and prosperity was substantial and very positive. It was all hooey, of course. But on the Sunday morning when the piece was published, Sandberg came up to a friend of mine who is a senior figure in the World Economic Forum (the outfit that runs the Davos event) at breakfast and asked him plaintively: “Why does John Naughton hate us?”

Looks like I was ahead of the pack — for once.

What makes this doubly interesting is that Sandberg reportedly was at one time fantasising about running for President (of the US).

Good news?

Well, well. Maybe we’re — finally — making progress. This from Recode:

Mark Zuckerberg, Sheryl Sandberg and other top Facebook leaders should get ready for increased scrutiny after a damning new investigation shed light on how they stalled, stumbled and plotted through a series of crises over the last two years, including Russian meddling, data sharing and hate speech. The question now: Who does Facebook fire in the aftermath of these revelations? Meanwhile, the difficult past year has taken a toll on employee morale: An internal survey shows that only 52 percent of Facebook staff are optimistic about its future, down from 84 percent of employees last year. It might already be time for a new survey.

Understanding platforms

From an interesting piece by Max Fisher:

We think of any danger as coming from misuse — scammers, hackers, state-sponsored misinformation — but we’re starting to understand the risks that come from these platforms working exactly as designed. Facebook, YouTube and others use algorithms to identify and promote content that will keep us engaged, which turns out to amplify some of our worst impulses.

Even after reporting with Amanda Taub on algorithm-driven violence in Germany and Sri Lanka, I didn’t quite appreciate this until I turned on Facebook push alerts this summer. Right away, virtually every gadget I owned started blowing up with multiple daily alerts urging me to check in on my ex, even if she hadn’t posted anything. I’d stayed away from her page for months specifically to avoid training Facebook to show me her posts. Yet somehow the algorithm had correctly identified this as the thing likeliest to make me click, then followed me across continents to ensure that I did.

It made me think of the old “Terminator” movies, except instead of a killer robot sent to find Sarah Connor, it’s a sophisticated set of programs ruthlessly pursuing our attention. And exploiting our most human frailties to do it.
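The dynamic Fisher describes can be caricatured in a few lines of Python. This is a toy sketch, not anyone’s actual ranking system; the items and the predicted click probabilities are invented for illustration:

```python
# Toy illustration of engagement-driven ranking: the feed simply sorts
# candidate items by a predicted probability of a click, with no notion
# of whether the user actually *wants* to see them.
def rank_feed(candidates):
    """candidates: list of (item, predicted_click_probability) pairs."""
    return [item for item, prob in sorted(candidates, key=lambda c: -c[1])]

feed = rank_feed([
    ("news article", 0.10),
    ("friend's holiday photos", 0.25),
    ("ex-partner's profile", 0.80),  # highest predicted engagement wins
])
```

The point of the caricature is that nothing in the objective distinguishes “engaging” from “exploitative”: whatever maximises the predicted click comes first.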

Zuckerberg’s monster

Here’s an edited version of a chapter I’ve written in a newly published book – Anti-Social Media: The Impact on Journalism and Society, edited by John Mair, Tor Clark, Neil Fowler, Raymond Snoddy and Richard Tait, Abramis, 2018.

Ponder this: in 2004 a Harvard sophomore named Zuckerberg sits in his dorm room hammering away at a computer keyboard. He’s taking an idea he ‘borrowed’ from two nice-but-dim Harvard undergraduates and writing the computer code needed to turn it into a social-networking site. He borrows $1,000 from his friend Eduardo Saverin and puts the site onto an internet web-hosting service. He calls it ‘The Facebook’.

Fourteen years later, that kid has metamorphosed into the 21st-century embodiment of John D Rockefeller and William Randolph Hearst rolled into one. In the early 20th century, Rockefeller controlled the flow of oil while Hearst controlled the flow of information. In the 21st century Zuckerberg controls the flow of the new oil (data) and the information (because people get much of their news from the platform that he controls). His empire spans more than 2.2bn people, and he exercises absolute control over it — as a passage in the company’s 10-K SEC filing makes clear. It reads, in part…

Read on

How Facebook’s advertising machine enables ‘custom audiences’ that include anti-Semites and white supremacists

This is beginning to get routine. I’ve said for some time that if you really want to understand Facebook, then you have to go in as an advertiser (i.e. the real customer) rather than as a mere user. When you do that, you come face-to-face with the company’s amazingly helpful, automated system for choosing the ‘custom audiences’ you want to target — or should be targeting. A while back, Politico did a memorable experiment along these lines. Now The Intercept has done the same:

Earlier this week, The Intercept was able to select “white genocide conspiracy theory” as a pre-defined “detailed targeting” criterion on the social network to promote two articles to an interest group that Facebook pegged at 168,000 users large and defined as “people who have expressed an interest or like pages related to White genocide conspiracy theory.” The paid promotion was approved by Facebook’s advertising wing. After we contacted the company for comment, Facebook promptly deleted the targeting category, apologized, and said it should have never existed in the first place.

Our reporting technique was the same as one used by the investigative news outlet ProPublica to report, just over one year ago, that in addition to soccer dads and Ariana Grande fans, “the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of ‘Jew hater,’ ‘How to burn jews,’ or, ‘History of “why jews ruin the world.”’”
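The underlying mechanism is not mysterious. Here is a crude sketch of interest-based audience selection, with all users, interests and categories invented for illustration; the real systems infer categories automatically from user activity, which is why noxious ones appear unless someone filters them out:

```python
from collections import defaultdict

# Hypothetical sketch: users are indexed by inferred interest
# categories, and an advertiser's chosen category simply selects
# the matching set of users.
def build_index(users):
    index = defaultdict(set)
    for user, interests in users.items():
        for interest in interests:
            index[interest].add(user)
    return index

users = {
    "alice": {"gardening", "soccer"},
    "bob": {"soccer", "category X"},  # an automatically generated category
}
index = build_index(users)
audience = index["category X"]  # the advertiser's selected audience
```

Because the categories are derived from behaviour rather than curated by hand, the index will happily contain whatever users have expressed interest in, however vile.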

Facebook: another routine scandal

From today’s New York Times:

SAN FRANCISCO — On the same day Facebook announced that it had carried out its biggest purge yet of American accounts peddling disinformation, the company quietly made another revelation: It had removed 66 accounts, pages and apps linked to Russian firms that build facial recognition software for the Russian government.

Facebook said Thursday that it had removed any accounts associated with SocialDataHub and its sister firm, Fubutech, because the companies violated its policies by scraping data from the social network.

“Facebook has reason to believe your work for the government has included matching photos from individuals’ personal social media accounts in order to identify them,” the company said in a cease-and-desist letter to SocialDataHub that was dated Tuesday and viewed by The New York Times.

Ireland’s need for a new narrative

Last Friday’s Irish Times carried a piece by Charlie Taylor based on an interview I gave in which I argued that a country that had built its identity (and prosperity) largely on a policy of being nice to big multi-national companies might need a new narrative now that some of its most welcome guests turn out to be toxic.

Feeding the crocodile

This morning’s Observer column:

Last week, Kevin Systrom and Mike Krieger, the co-founders of Instagram, announced that they were leaving Facebook, where they had worked since Mark Zuckerberg bought their company six years ago. “We’re planning on taking some time off to explore our curiosity and creativity again,” Systrom wrote in a statement on the Instagram blog. “Building new things requires that we step back, understand what inspires us and match that with what the world needs; that’s what we plan to do.”

Quite so. It’s always refreshing when young millionaires decide to spend more time with their money. (Facebook paid $715m for their little outfit when it acquired it; Instagram had 13 employees at the time.) But to those of us who have an unhealthy interest in what goes on at Facebook, the real question about Systrom’s and Krieger’s departure was: what took them so long?

Read on

The political economy of trust

Cambridge University has a new ‘strategic research initiative’ on Trust and Technology, on whose Steering Group I sit.

We’re having a big launch event on September 20, and so I’ve been brooding about the issues surrounding it. Much (most?) of the discussion of trustworthy technology is understandably focussed on the technology itself. But this ignores the fact that the kit doesn’t exist in a vacuum. Digital technology is now part of the everyday lives of 4 billion people, and our dependence on it has raised many questions of trust, reliability, integrity, dependability, equity and control.

Some of these issues undoubtedly stem from technical characteristics of the equipment (think of all the crummy IoT devices coming from China); others stem from the fallibility or ignorance of users (accepting default passwords); but a significant proportion come from the fact that network technology is deployed by global corporations with distinctive business models and strategic interests which are not necessarily aligned with either the public interest or the wellbeing of users.

An interesting current example is provided by VPN (Virtual Private Network) technology. A VPN enables users to create a private network that runs over a public one, so that they can send and receive data across the public network as if their devices were directly connected to the private one. The benefits of VPNs include enhanced functionality, security and privacy protection, and they are a boon for Internet users who have to rely on ‘free’ public WiFi services in hotels, cafes and on public transport. In that sense VPN is a technology that enhances the trustworthiness of open WiFi networks. I use an encrypted VPN all the time on all my devices, and never use an open WiFi network unless I have the VPN switched on.
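For the non-technical reader, the essence of a VPN tunnel can be sketched in a few lines of Python. This is a conceptual caricature, not a real protocol: real VPNs use proper key exchange and authenticated encryption, not the fixed XOR key used here purely for illustration.

```python
# Conceptual sketch of VPN tunnelling: the inner packet, including its
# true destination, is encrypted, and the open WiFi network sees only
# ciphertext addressed to the VPN server.
def xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'encryption' (XOR keystream) for illustration only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = bytes(range(1, 33))                  # in reality, negotiated via key exchange
inner = b"GET https://example.com/ ..."    # true destination plus payload

tunnel_packet = {
    "dst": "vpn.example.net",              # all the WiFi operator can see
    "payload": xor_stream(inner, key),     # opaque ciphertext
}

# The VPN server reverses the encryption and forwards the real request.
recovered = xor_stream(tunnel_packet["payload"], key)
```

The trust point is visible in the sketch: the coffee-shop WiFi operator learns nothing about where your traffic is going, but the VPN operator learns everything.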

Earlier this year, Facebook generously offered some of its users Onavo Protect, a VPN developed by an Israeli company that Facebook bought in 2013. A link to the product appeared in the feeds of some US Facebook iOS users under the banner “Protect”. Clicking through on this led to the download link for “Onavo Protect — VPN Security” on the Apple App Store.

The blurb for the app included a promise to “keep you and your data safe when you browse and share information on the web” but omitted to point out that its functionality involved tracking user activity across multiple applications to gain insights into how Facebook customers use third-party services. Whenever a user of Onavo opened an app or website, traffic was redirected to Facebook’s servers, which logged the action in a database, allowing the company to draw conclusions about internet usage from the aggregated data.
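Seen from the operator’s side, the commercial value of such a ‘protective’ VPN is easy to sketch. The data below is invented, but the principle — route everything through your own servers, then count — is the one described above:

```python
from collections import Counter

# Illustrative sketch with invented data: once every app launch is
# routed through the VPN operator's servers, aggregate usage statistics
# for *third-party* apps fall out of a simple count.
traffic_log = [
    ("user1", "app_a"), ("user1", "app_b"),
    ("user2", "app_b"), ("user3", "app_b"),
]

app_usage = Counter(app for _user, app in traffic_log)
most_used, count = app_usage.most_common(1)[0]
```

In other words, a tool sold to users as privacy protection doubles, for its operator, as market intelligence about competitors’ products.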

Needless to say, close inspection of the Terms and Conditions associated with the app revealed that “Onavo collects your mobile data traffic. This helps us improve and operate the Onavo service by analyzing your use of websites, apps and data”. Whether non-technical users — who presumably imagined that a VPN would provide security and privacy for their browsing, rather than enabling Facebook to track their online activities outside of its ‘walled garden’ — understood what this meant is an interesting question. In August 2018, Apple settled the issue, ruling that Onavo Protect violated a part of its developer agreement that prevents apps from using data in ways that go beyond what is directly relevant to the app or to providing advertising, and the app was removed (by Facebook, after discussions with Apple) from the App Store. (It is still available for Android users on the Google Play store.)

And the moral? In assessing trustworthiness the technical affordances of the technology are obviously important. But they may be only part of the story. The other part — the political economy of the technology — may actually turn out to be the more important one.