What can be done about the downsides of the app economy?

Snippet from an interesting interview with Daphne Keller, Director of Intermediary Liability at the Stanford Center for Internet and Society:

So how did Facebook user data get to Cambridge Analytica (CA)?

What happened here was a breach of the developer’s agreement with FB — not some kind of security breach or hacking. GSR did more with the data than the TOS permitted—both in terms of keeping it around and in terms of sharing it with CA. We have no way of knowing whether other developers did the same thing. FB presumably doesn’t know either, but they do (per reporting) have audit rights in their developer agreements, so they, more than anyone, could have identified the problem sooner. And the overall privacy design of FB apps has been an open invitation for developments like this from the beginning. This is a story about an ecosystem full of privacy risk, and the inevitable abuse that resulted. It’s not about a security breach.

Is this a widespread problem among app developers?

Before we rush to easy answers, there is a big picture here that will take a long time to sort through. The whole app economy, including Android and iPhone apps, depends on data sharing. That’s what makes many apps work—from constellation mapping apps that use your location, to chat apps that need your friends’ contact information. Ideally app developers will collect only the data they actually need—they should not get a data firehose. Platforms should have policies to this effect and should give users granular controls over data sharing.

User control is important in part because platform control can have real downsides. Different platforms take more or less aggressive stances in controlling apps. The more controlling a platform is, the more it acts as a chokepoint, preventing users from finding or using particular apps. That has competitive consequences (what if Android’s store didn’t offer non-Google maps apps?). It also has consequences for information access and censorship, as we have seen with Apple removing the NYT app and VPN apps from the app store in China.

For my personal policy preferences, and probably for most people’s, we would have wanted FB to be much more controlling, in terms of denying access to these broad swathes of information. At the same time, the rule can’t be that platforms can’t support apps or share data unless the platform takes full legal responsibility for what the app does. Then we’d have few apps, and incumbent powerful platforms would hold even more power. So, there is a long, complicated policy discussion to be had here. It’s frustrating that we didn’t start it years ago when these apps launched, but hopefully at least we will have it now.

Why Facebook can’t change

My €0.02-worth on the bigger story behind the Cambridge Analytica shenanigans:

Watching Alexander Nix and his Cambridge Analytica henchmen bragging on Channel 4 News about their impressive repertoire of dirty tricks, the character who came irresistibly to mind was G Gordon Liddy. Readers with long memories will recall him as the guy who ran the “White House Plumbers” during the presidency of Richard Nixon. Liddy directed the Watergate burglary in June 1972, detection of which started the long chain of events that eventually led to Nixon’s resignation two years later. For his pains, Liddy spent more than four years in jail, but went on to build a second career as a talk-show host and D-list celebrity. Reflecting on this, one wonders what job opportunities – other than those of pantomime villain and Savile Row mannequin – will now be available to Mr Nix.

The investigations into the company by Carole Cadwalladr, in the Observer, reveal that in every respect save one, CA looks like a standard-issue psychological warfare outfit of the kind retained by political parties – and sometimes national security services – since time immemorial. It did, however, have one unique selling proposition, namely its ability to offer “psychographic” services: voter-targeting strategies allegedly derived by analysing the personal data of more than 50 million US users of Facebook.

The story of how those data made the journey from Facebook’s servers to Cambridge Analytica’s is now widely known. But it is also widely misunderstood…

Read on

Facebook’s sudden attack of modesty

One of the most illuminating things you can do as a researcher is to go into Facebook not as a schmuck (i.e. user) but as an advertiser — just like your average Russian agent. Upon entering, you quickly begin to appreciate the amazing ingenuity and comprehensiveness of the machine that Zuckerberg & Co have constructed. It’s utterly brilliant, with a great user interface and lots of automated advice and help for choosing your targeted audience.

When doing this a while back — a few months after Trump’s election — I noticed that there was a list of case studies of different industries showing how effective a given targeting strategy could be in a particular application. One of those ‘industries’ was “Government and Politics” and among the case studies was a story of how a Facebook campaign had proved instrumental in helping a congressional candidate to win against considerable odds. I meant to grab some screenshots of this uplifting tale, but of course forgot to do so. When I went back later, the case study had, well, disappeared.

Luckily, someone else had the presence of mind to grab a screenshot. The Intercept, bless it, has the before-and-after comparison shown in the image above. They are Facebook screenshots from (left) June 2017 and (right) March 2018.

Interesting, n’est-ce pas?

In surveillance capitalism, extremism is good for business

This morning’s Observer column:

Zeynep Tufekci is one of the shrewdest writers on technology around. A while back, when researching an article on why (and how) Donald Trump appealed to those who supported him, she needed some direct quotes from the man himself and so turned to YouTube, which has a useful archive of videos of his campaign rallies. She then noticed something interesting. “YouTube started to recommend and ‘autoplay’ videos for me,” she wrote, “that featured white supremacist rants, Holocaust denials and other disturbing content.”

Since Tufekci was not in the habit of watching far-right fare on YouTube, she wondered if this was an exclusively rightwing phenomenon. So she created another YouTube account and started watching Hillary Clinton’s and Bernie Sanders’s campaign videos, following the accompanying links suggested by YouTube’s “recommender” algorithm. “Before long,” she reported, “I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of 11 September. As with the Trump videos, YouTube was recommending content that was more and more extreme.”

Read on

Facebook’s new gateway drug for kids

This morning’s Observer column:

In one of those coincidences that give irony a bad name, Facebook launched a new service for children at the same time that a moral panic was sweeping the UK about the dangers of children using live-streaming apps that enable anyone to broadcast video directly from a smartphone or a tablet. The BBC showed a scary example of what can happen. A young woman who works as an internet safety campaigner posed as a 14-year-old girl to find out what occurs when a young female goes online using one of these streaming services…

Read on

On not being evil

This morning’s Observer column:

The motto “don’t be evil” has always seemed to me to be a daft mantra for a public company, but for years that was the flag under which Google sailed. It was a heading in the letter that the two founders wrote to the US Securities and Exchange Commission prior to the company’s flotation on the Nasdaq stock market in 2004. “We believe strongly,” Sergey Brin and Larry Page declared, “that in the long term, we will be better served – as shareholders and in all other ways – by a company that does good things for the world even if we forgo some short-term gains. This is an important aspect of our culture and is broadly shared within the company.” Two years ago, when Google morphed into Alphabet – its new parent company – the motto changed. Instead of “don’t be evil” it became “do the right thing”.

Heartwarming, eh? But still a strange motto for a public corporation. I mean to say, what’s “right” in this context? And who decides? Since Google/Alphabet does not get into specifics, let me help them out. The “right thing” is “whatever maximises shareholder value”, because in our crazy neoliberal world that’s what public corporations do. In fact, I suspect that if Google decided that doing the right thing might have an adverse impact on the aforementioned value, then its directors would be sued by activist shareholders for dereliction of their fiduciary duty.

Which brings me to YouTube Kids…

Read on

Facebook’s biggest ethical dilemma: unwillingness to acknowledge that it has one

There are really only two possible explanations for the crisis now beginning to engulf Facebook. One is that the company’s founder was — and perhaps still is — a smart but profoundly naive individual who knows little about the world or about human behaviour. The other is that he is — how shall I put it? — a sociopath, indifferent to what happens to people so long as his empire continues to grow.

I prefer the former explanation, but sometimes one wonders…

Consider Free Basics — the program to bring Internet access to millions of people in poor countries. It works by pre-installing Facebook on cheap smartphones and striking deals with local mobile networks so that traffic to the Facebook app incurs no data charges.

The cynical interpretation of this is that it’s a way of furthering Zuckerberg’s goal of replacing the Internet with Facebook, creating the ultimate global walled garden. The charitable spin is the one Zuckerberg himself put on it — that Free Basics provides a way to connect people who would otherwise never go online.

Either way, the effects were predictable: new users in these countries think that Facebook is the Internet; and Facebook becomes the major channel for news. The NYT has a sobering report on what happened in Myanmar, where Facebook now has millions of users.

“Facebook has become sort of the de facto internet for Myanmar,” said Jes Kaliebe Petersen, chief executive of Phandeeyar, Myanmar’s leading technology hub that helped Facebook create its Burmese-language community standards page. “When people buy their first smartphone, it just comes preinstalled.”

But because the company took no editorial responsibility for how its service was used, it seemed unable to spot that it was being used to stir up ethnic hatred and worse. “Facebook”, reports the Times,

has become a breeding ground for hate speech and virulent posts about the Rohingya. And because of Facebook’s design, posts that are shared and liked more frequently get more prominent placement in feeds, favoring highly partisan content in timelines.

Ashin Wirathu, the monk, has hundreds of thousands of followers on Facebook accounts in Burmese and English. His posts include graphic photos and videos of decaying bodies that Ashin Wirathu says are Buddhist victims of Rohingya attacks, or posts denouncing the minority ethnic group or updates that identify them falsely as “Bengali” foreigners.

It’s the same story as everywhere else that Facebook has touched. A company that built a money-making advertising machine which gets its revenues from monetising user activity finds that sometimes that activity is very unsavoury and inhumane. And when this is finally realised, it finds itself caught between a rock and a hard place, unwilling to accept responsibility for the unintended consequences of its wealth-generating machine.
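The feed dynamic the Times describes — engagement driving prominence — can be sketched in a few lines. This is a deliberately crude toy model; the field names and weights below are my own illustrative guesses, not anything Facebook has published:

```python
# Toy sketch of engagement-weighted feed ranking.
# Weights and field names are illustrative, not Facebook's.

def engagement_score(post):
    """Crude proxy for engagement: shares weighted more heavily than likes."""
    return post["likes"] + 3 * post["shares"]

def rank_feed(posts):
    """Posts with more engagement get more prominent placement."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "measured-report", "likes": 120, "shares": 5},   # score 135
    {"id": "partisan-rant",   "likes": 90,  "shares": 60},  # score 270
]

for post in rank_feed(posts):
    print(post["id"], engagement_score(post))
```

Even in this toy version, the highly shared partisan post outranks the more-liked but less-shared report — which is the feedback loop the article describes: content that provokes sharing rises, regardless of its quality.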

What’s coming in iOS 11

Ah, at last something interesting:

In September, Apple will release new changes to Safari with iOS 11 called “Intelligent Tracking Prevention.” These changes will have large effects on the ad tech industry and create new winners and losers.

In short, the iOS 11 changes will really help the big guys, are neutral to the small guys and significantly hurt the mid-size guys.

Hmmm… Not sure that this will be a boon to the world (cf. the stuff about helping the big guys). I’ll continue to use my own protective measures.