The political economy of trust

Cambridge University has a new ‘strategic research initiative’ on Trust and Technology, on whose Steering Group I sit.

We’re having a big launch event on September 20, and so I’ve been brooding about the issues surrounding it. Much (most?) of the discussion of trustworthy technology is understandably focussed on the technology itself. But this ignores the fact that the kit doesn’t exist in a vacuum. Digital technology is now part of the everyday lives of 4 billion people, and our dependence on it has raised many questions of trust, reliability, integrity, dependability, equity and control.

Some of these issues undoubtedly stem from technical characteristics of the equipment (think of all the crummy IoT devices coming from China); others stem from the fallibility or ignorance of users (accepting default passwords); but a significant proportion come from the fact that network technology is deployed by global corporations with distinctive business models and strategic interests which are not necessarily aligned with either the public interest or the wellbeing of users.

An interesting current example is provided by VPN (Virtual Private Network) technology. A VPN creates a private network that runs over a public one, allowing users to send and receive data across the public network as if their devices were directly connected to the private network. The benefits of VPNs include enhanced functionality, security and privacy protection, and they are a boon for Internet users who need to use ‘free’ public WiFi services in hotels, cafes and public transport. In that sense VPN is a technology that enhances the trustworthiness of open WiFi networks. I use an encrypted VPN all the time on all my devices, and never use an open WiFi network unless I have the VPN switched on.
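For the technically curious, here’s a toy sketch in Python of the tunnelling idea. This is not how any production VPN is built (real ones use protocols such as IPsec, OpenVPN or WireGuard, and negotiate keys properly); the server address and the use of the third-party cryptography library’s Fernet cipher are purely illustrative:

```python
# Toy sketch of VPN tunnelling (illustrative only): every outbound
# packet is encrypted and wrapped in a new packet addressed to the
# VPN server, so an open WiFi network sees nothing but opaque
# traffic to a single endpoint.
import socket

from cryptography.fernet import Fernet  # third-party 'cryptography' package

key = Fernet.generate_key()  # a real client negotiates this with the server
cipher = Fernet(key)
tunnel = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_via_vpn(payload: bytes) -> None:
    """Encrypt the original packet and forward it to the VPN server."""
    wrapped = cipher.encrypt(payload)  # WiFi snoopers see only ciphertext
    tunnel.sendto(wrapped, ("vpn.example.com", 51820))  # hypothetical endpoint
```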

Earlier this year, Facebook generously offered some of its users Onavo Protect, a VPN developed by an Israeli company that Facebook bought in 2013. A link to the product appeared in the feeds of some US Facebook iOS users under the banner “Protect”. Clicking through on this led to the download link for “Onavo Protect — VPN Security” on the Apple App Store.

The blurb for the App included a promise to “keep you and your data safe when you browse and share information on the web” but omitted to point out that its functionality involved tracking user activity across applications to glean insights into how Facebook customers use third-party services. Whenever a user of Onavo opened an app or website, traffic was redirected to Facebook’s servers, which logged the action in a database, allowing the company to draw conclusions about internet usage from the aggregated data.
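It’s worth spelling out why a VPN provider occupies such a privileged position: its server terminates your encrypted tunnel, so it sits between you and every site you visit, and it must see each destination in order to forward your traffic. A conceptual sketch in Python (emphatically not Facebook’s actual code; every name here is invented):

```python
# Why a VPN endpoint can see everything (conceptual sketch, invented
# names): the server has to decrypt and forward each request, so
# logging the destination first is trivial.
from datetime import datetime, timezone

usage_log: list[dict] = []

def handle_request(user_id: str, destination_host: str) -> None:
    """Record where the user is going, then forward as a normal VPN would."""
    usage_log.append({
        "user": user_id,
        "host": destination_host,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    # ...decrypt the tunnelled packet and forward it to destination_host...
```

Aggregate that log across millions of users and you have exactly the kind of market intelligence Onavo was gathering.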

Needless to say, close inspection of the Terms and Conditions associated with the app revealed that “Onavo collects your mobile data traffic. This helps us improve and operate the Onavo service by analyzing your use of websites, apps and data”. Whether non-technical users — who presumably imagined that a VPN would provide security and privacy for their browsing, rather than enabling Facebook to track their online activities outside its ‘walled garden’ — understood what this meant is an interesting question. In August 2018, Apple settled the issue, ruling that Onavo Protect violated a part of its developer agreement that prevents apps from using data in ways that go beyond what is directly relevant to the app or to providing advertising, and the app was removed (by Facebook, after discussions with Apple) from the App Store. (It is still available to Android users on the Google Play store.)

And the moral? In assessing trustworthiness, the technical affordances of the technology are obviously important. But they may be only part of the story. The other part — the political economy of the technology — may actually turn out to be the more important one.

Facebook’s whack-a-mole job will never be completed

Useful NYT report this morning:

Facebook’s fight against disinformation and hate speech will be a topic of discussion on Capitol Hill on Wednesday, when Sheryl Sandberg, the company’s chief operating officer, will join Jack Dorsey, Twitter’s chief executive, to testify in front of the Senate Intelligence Committee.

When it comes to public-facing pages, Ms. Sandberg will have plenty of company actions to cite. Facebook has taken many steps to clean up its platform, including hiring thousands of additional moderators, developing new artificial-intelligence tools and breaking up coordinated influence operations ahead of the midterm elections.

But when it comes to more private forms of communication through the company’s services — like Facebook groups, or the messaging apps WhatsApp and Facebook Messenger — the social network’s progress is less clear. Some experts worry that Facebook’s public cleanup may be pushing more toxic content into these private channels, where it is harder to monitor and moderate.

Misinformation is not against Facebook’s policies unless it leads to violence. But many of the private groups reviewed by The New York Times contained content and behavior that appeared to violate other Facebook rules, such as rules against targeted harassment and hate speech. In one large QAnon group, members planned a coordinated harassment campaign, known as Operation Mayflower, against public figures such as the actor Michael Ian Black, the late-night host Stephen Colbert and the CNN journalist Jim Acosta. In the Infowars group, posts about Muslims and immigrants have drawn threatening comments, including calls to deport, castrate and kill people.

As one social-media exec said to Kara Swisher, “there’s no fixing this”.

Facebook can’t control its users. And it has no incentive to do so

This morning’s Observer column:

Most people I know who use Facebook seem normal. And the uses to which they put the service also seem normal – harmless to the point of banality. So when they see reports of how social media is being used to fuel extremism, violence, racism, intolerance, hatred – even ethnic cleansing – they are puzzled. Who are the people who do things like that? And why doesn’t Facebook stop them?

To answer the first question, let us visit Altena, a town of about 18,000 souls in North Rhine-Westphalia in Germany. After Angela Merkel opened Germany’s doors to refugees, Altena took its quota, like any good German town. When refugees first arrived, so many locals volunteered to help that Anette Wesemann, who runs Altena’s refugee integration centre, couldn’t keep up. She’d find Syrian or Afghan families attended by groups of volunteer German tutors. “It was really moving,” she told a New York Times reporter.

But when Wesemann set up a Facebook page to organise food banks and volunteer activities, things changed…

Read on

Tech companies and ‘fractal irresponsibility’

Nice, insightful essay by Alexis Madrigal. Every new scandal is a fractal representation of the giant services that produce them:

On Tuesday, BuzzFeed published a memo from the outgoing Facebook chief security officer, Alex Stamos, in which he summarizes what the company needs to do to “win back the world’s trust.” And what needs to change is … well, just about everything. Facebook needs to revise “the metrics we measure” and “the goals.” It needs to not ship code more often. It needs to think in new ways “in every process, product, and engineering decision.” It needs to make the user experience more honest and respectful, to collect less data, to keep less data. It needs to “listen to people (including internally) when they tell us a feature is creepy or point out a negative impact we are having in the world.” It needs to deprioritize growth and change its relationship with its investors. And finally, Stamos wrote, “We need to be willing to pick sides when there are clear moral or humanitarian issues.” YouTube (and its parent company, Alphabet), Twitter, Snapchat, Instagram, Uber, and every other tech company could probably build a list that contains many of the same critiques and some others.

People encountering problems online probably don’t think of every single one of these institutional issues when something happens. But they sense that the pattern they are seeing is linked to the fact that these are the most valuable companies in the world, and that they don’t like the world they see through those services or IRL around them. That’s what I mean by fractal irresponsibility: Each problem isn’t just one in a sequence, but part of the same whole.

Interesting also that Facebook’s chief security officer has left the company, and that his position is not going to be filled.

The problem with Facebook is Facebook

Kara Swisher has joined the New York Times. Her first column pulls no punches. Sample:

In a post about the latest disinformation campaign, the company said about security challenges: “We face determined, well-funded adversaries who will never give up and are constantly changing tactics. It’s an arms race and we need to constantly improve too.”

The arms race metaphor is a good one, but not for the reasons Facebook intended. Here’s how I see it: Facebook, as well as Twitter and Google’s YouTube, have become the digital arms dealers of the modern age.

All these companies began with a gauzy credo to change the world. But they have done that in ways they did not imagine — by weaponizing pretty much everything that could be weaponized. They have mutated human communication, so that connecting people has too often become about pitting them against one another, and turbocharged that discord to an unprecedented and damaging volume.

They have weaponized social media. They have weaponized the First Amendment. They have weaponized civic discourse. And they have weaponized, most of all, politics.

Lots more where that came from. Worth reading in full.

Zuckerberg’s monster

This morning’s Observer column:

Who – or what – is Mark Zuckerberg? Obviously he’s the founder and CEO of Facebook, which is, in theory, a public company but is in fact his fiefdom, as a casual inspection of the company’s SEC filings confirms. They show that his ownership of the controlling shares means that he can do anything he likes, including selling the company against the wishes of all the other shareholders combined.

But the fact that Zuck wields autocratic power over a huge corporation doesn’t quite get the measure of him. A better metaphor is that he is the Dr Frankenstein de nos jours. Readers of Mary Shelley’s great 19th-century novel will know the story: of how an ingenious scientist – Dr Victor Frankenstein – creates a grotesque but sentient creature in an unorthodox scientific experiment. Repulsed by the monster he has made, Frankenstein flees, but finds that he cannot escape his creation. In the end, Frankenstein dies of exposure in the Arctic, pursuing the monster who has murdered his bride. We never learn what happened to the creature.

Facebook is Zuckerberg’s monster. Unlike Frankenstein, he is still enamoured of his creation, which has made him richer than Croesus and the undisputed ruler of an empire of 2.2 billion users. It has also given him a great deal of power, together with the responsibilities that go with it. But it’s becoming increasingly clear that his creature is out of control, that he’s uneasy about the power and has few good ideas about how to discharge his responsibilities…

Read on

Could reality be catching up with Facebook?

This — from Bloomberg — is interesting:

Facebook Inc. hasn’t been able to do anything right — except when it comes to making money, where it could do nothing wrong.

That changed on Wednesday, when the company posted disappointing growth in revenue, profits and the number of visitors to its digital hangouts. Results are still stellar by the standards of most companies, but investors in fast-growing technology companies react badly when their high hopes aren’t met, as Netflix recently found out. Facebook hit a record stock price on Wednesday, but after the release of its financial results, its shares dropped a stunning 24 percent in after-hours trading.

And no wonder. The company’s financial results, and especially its glimpse into a more pessimistic financial future, were an utter disaster for investors. If what the company predicts comes to pass, the internet’s best combination of fast revenue growth and plump profit margins is dead. All at once, it seemed, reality finally caught up to Facebook.

Well, among other things (including plans for its very own earth-orbiting satellites), those 20,000+ content ‘moderators’ have to be paid for somehow.

So what’s the problem with Facebook?

Interesting NYT piece by Kevin Roose in which he points out that the key issue with regulating Facebook is not that lawmakers know very little about how it works, but whether they have the political will to regulate it. My hunch is that they don’t, but if they did then the first thing to do would be to fix on some clear ideas about what’s wrong with the company.

Here’s the list of possibilities cited by Roose:

  • Is it that Facebook is too cavalier about sharing user data with outside organizations?
  • Is it that Facebook collects too much data about users in the first place?
  • Is it that Facebook is promoting addictive messaging products to children?
  • Is it that Facebook’s news feed is polarizing society, pushing people to ideological fringes?
  • Is it that Facebook is too easy for political operatives to exploit, or that it does not do enough to keep false news and hate speech off users’ feeds?
  • Is it that Facebook is simply too big, or a monopoly that needs to be broken up?

How about: all of the above?

Google, Facebook and the power to nudge users

This morning’s Observer column:

Thaler and Sunstein describe their philosophy as “libertarian paternalism”. What it involves is a design approach known as “choice architecture” and in particular controlling the default settings at any point where a person has to make a decision.

Funnily enough, this is something that the tech industry has known for decades. In the mid-1990s, for example, Microsoft – which had belatedly realised the significance of the web – set out to destroy Netscape, the first company to create a proper web browser. Microsoft did this by installing its own browser – Internet Explorer – on every copy of the Windows operating system. Users were free to install Netscape, of course, but Microsoft relied on the fact that very few people ever change default settings. For this abuse of its monopoly power, Microsoft was landed with an antitrust suit that nearly resulted in its breakup. But it did succeed in destroying Netscape.

When the EU introduced its General Data Protection Regulation (GDPR) – which seeks to give internet users significant control over uses of their personal data – many of us wondered how data-vampires like Google and Facebook would deal with the implicit threat to their core businesses. Now that the regulation is in force, we’re beginning to find out: they’re using choice architecture to make it as difficult as possible for users to do what is best for them while making it easy to do what is good for the companies.

We know this courtesy of a very useful 43-page report just out from the Norwegian Consumer Council, an organisation funded by the Norwegian government…

Read on
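A footnote on the power of defaults, since it is the crux of the choice-architecture argument. A toy Python illustration (the settings are invented, but the pattern is the familiar one):

```python
# Toy illustration of choice architecture via defaults (invented
# settings): whatever the company sets here decides the outcome for
# the overwhelming majority of users, because very few people ever
# change a default.
DEFAULTS = {
    "ad_personalisation": True,   # good for the company
    "location_history": True,
    "face_recognition": True,
}

def effective_settings(user_overrides: dict) -> dict:
    """Most users override nothing, so the defaults are the outcome."""
    return {**DEFAULTS, **user_overrides}

print(effective_settings({}))  # the typical user: all tracking left on
```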