Fixing Facebook: the only two options by a guy who knows how the sausage is made

James Fallows quotes from a fascinating email exchange he had with his friend Michael Jones, who used to work at Google (he was the company’s Chief Technology Advocate and later a key figure in the evolution of Google Earth):

So, how might FB fix itself? What might government regulators seek? What could make Facebook likable? It is very simple. There are just two choices:

a. FB stays in its send-your-PII¹-to-their-customers business, and then must be regulated and the customers validated precisely as Acxiom and Experian in the credit world or doctors and hospitals in the HIPAA healthcare world; or,

b. FB joins Google and ALL OTHER WEB ADVERTISERS in keeping PII private, never letting it out, and anonymously connecting advertisers with its users for their mutual benefit.

I don’t get a vote, but I like (b) and see that as the right path for civil society. There is no way that choice (a) is not a loathsome and destructive force in all things—in my personal opinion it seems that making people’s pillow-talk into a marketing weapon is indeed a form of evil.

This is why I never use Facebook; I know how the sausage is made.


  1. PII = Personally Identifiable Information 
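
To make Jones's two options concrete, here is a minimal sketch of the architectural difference (in Python, with every name, record and function invented for illustration; it is not any platform's actual API). In option (a), identifiable records leave the platform and land with the customer; in option (b), only targeting criteria go in and only anonymous aggregates come out.

```python
# A minimal sketch of Jones's two options. All names and data are
# hypothetical; this illustrates the architectural difference, not any
# platform's real API.

from dataclasses import dataclass

@dataclass
class User:
    user_id: str          # internal identifier; never exported in option (b)
    email: str            # PII
    interests: set        # profile data inferred by the platform

USERS = [
    User("u1", "alice@example.com", {"golf", "travel"}),
    User("u2", "bob@example.com", {"golf", "cooking"}),
]

def option_a_export(interest):
    """Option (a): PII leaves the platform and lands with the customer.

    Whoever receives this list now holds identifiable records, which is
    why Jones says this model demands Acxiom/Experian-style regulation.
    """
    return [{"email": u.email, "interests": sorted(u.interests)}
            for u in USERS if interest in u.interests]

def deliver_ad(user_id, creative):
    print(f"[platform-internal] showing {creative!r} to {user_id}")

def option_b_match(interest, ad_creative):
    """Option (b): the advertiser submits criteria plus a creative; the
    platform matches internally and returns only an aggregate count.
    No identity ever crosses the boundary.
    """
    matched = [u for u in USERS if interest in u.interests]
    for u in matched:
        deliver_ad(u.user_id, ad_creative)   # happens inside the platform
    return len(matched)

print(option_a_export("golf"))               # PII exported: emails and all
print(option_b_match("golf", "golf-clubs"))  # advertiser learns only: 2
```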

Facebook is just the tip of the iceberg

This morning’s Observer column:

If a picture is worth a thousand words, then a good metaphor must be worth a million. In an insightful blog post published on 23 March, Doc Searls, one of the elder statesmen of the web, managed to get both for the price of one. His post was headed by one of those illustrations of an iceberg showing that only the tip is the visible part, while the great bulk of the object lies underwater. In this case, the tip was adorned with the Facebook logo while the submerged mass represented “Every other website making money from tracking-based advertising”. The moral: “Facebook’s Cambridge Analytica problems are nothing compared to what’s coming for all of online publishing.”

The proximate cause of Searls’s essay was encountering a New York Times op-ed piece entitled “Facebook’s Surveillance Machine” by Zeynep Tufekci. It wasn’t the (unexceptional) content of the article that interested Searls, however, but what his ad-blocking software told him about the Times page in which the essay appeared. The software had detected no fewer than 13 hidden trackers on the page. (I’ve just checked and my Ghostery plug-in has detected 19.)

Read on

“The business model of the Internet is surveillance” contd.

This useful graphic comes from a wonderful post by the redoubtable Doc Searls about the ultimate unsustainability of the business model currently dominating the Web. He starts with a quote from “Facebook’s Surveillance Machine” — an NYT op-ed column by the equally redoubtable Zeynep Tufekci:

“Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.”

Doc then points out the irony of his Privacy Badger software detecting 13 hidden trackers on the NYT page on which Zeynep’s column appears. (I’ve just checked and Ghostery currently detects 19 trackers on it.)

The point, Doc goes on to say, is that the Times is just doing what every other publication that lives off adtech does: tracking-based advertising. “These publications”,

don’t just open the kimonos of their readers. They bring people’s bare digital necks to vampires ravenous for the blood of personal data, all for the purpose of returning “interest-based” advertising to those same people.

With no control by readers (beyond tracking protection which relatively few know how to use, and for which there is no one approach or experience), and damn little care or control by the publishers who bare those readers’ necks, who knows what the hell actually happens to the data? No one entity, that’s for sure.

Doc points out that at reputable outfits like the New York Times, writers like Zeynep have nothing to do with this endemic tracking. In such publications there probably is a functioning “Chinese Wall” between editorial and advertising. Just to drive the point home, he looks at Sue Halpern’s piece in the sainted New Yorker on “Cambridge Analytica, Facebook and the Revelations of Open Secrets” and his RedMorph software finds 16 third-party trackers. (On my browser, Ghostery found 18.) The moral is, in a way, obvious: it’s a confirmation of Bruce Schneier’s original observation that “surveillance is the business model of the Internet”. Being a pedant, I would have said “of the Web”, but since many people can’t distinguish between the two, we’ll let Bruce’s formulation stand.
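
A footnote on where all these tracker counts (13, 16, 18, 19) come from: tools like Ghostery, Privacy Badger and RedMorph watch the requests a page makes and match them against curated lists of known tracking domains. The sketch below is a crude approximation of the idea, not how any of those tools actually works; it inspects only the initial HTML (so it will undercount), its blocklist is a tiny invented stand-in for the real curated lists, and it relies on the third-party requests and beautifulsoup4 packages.

```python
# Crude tracker counting: fetch a page, collect the third-party hosts
# its scripts, images and iframes load from, and flag those on a
# blocklist. Real tools use large curated lists and also watch requests
# made at runtime, so their counts will be higher than this one's.

from urllib.parse import urlparse
import requests
from bs4 import BeautifulSoup

KNOWN_TRACKERS = {   # tiny invented stand-in for a curated blocklist
    "doubleclick.net", "google-analytics.com", "scorecardresearch.com",
    "facebook.net", "chartbeat.com", "moatads.com",
}

def third_party_tracker_hosts(page_url):
    first_party = urlparse(page_url).hostname
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    hosts = set()
    for tag in soup.find_all(["script", "img", "iframe"], src=True):
        host = urlparse(tag["src"]).hostname
        if host and host != first_party:
            hosts.add(host)
    # a host counts if it, or a parent domain, is on the blocklist
    return {h for h in hosts
            if any(h == t or h.endswith("." + t) for t in KNOWN_TRACKERS)}

found = third_party_tracker_hosts("https://www.nytimes.com/")
print(f"{len(found)} known tracking hosts:", sorted(found))
```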

What can be done about the downsides of the app economy?

Snippet from an interesting interview with Daphne Keller, Director of Intermediary Liability at the Stanford Center for Internet and Society:

So how did Facebook user data get to Cambridge Analytica (CA)?

What happened here was a breach of the developer’s agreement with FB — not some kind of security breach or hacking. GSR [Global Science Research, the app developer] did more with the data than the TOS permitted—both in terms of keeping it around and in terms of sharing it with CA. We have no way of knowing whether other developers did the same thing. FB presumably doesn’t know either, but they do (per reporting) have audit rights in their developer agreements, so they, more than anyone, could have identified the problem sooner. And the overall privacy design of FB apps has been an open invitation for developments like this from the beginning. This is a story about an ecosystem full of privacy risk, and the inevitable abuse that resulted. It’s not about a security breach.

Is this a widespread problem among app developers?

Before we rush to easy answers, there is a big picture here that will take a long time to sort through. The whole app economy, including Android and iPhone apps, depends on data sharing. That’s what makes many apps work—from constellation mapping apps that use your location, to chat apps that need your friends’ contact information. Ideally app developers will collect only the data they actually need—they should not get a data firehose. Platforms should have policies to this effect and should give users granular controls over data sharing.

User control is important in part because platform control can have real downsides. Different platforms take more or less aggressive stances in controlling apps. The more controlling a platform is, the more it acts as a chokepoint, preventing users from finding or using particular apps. That has competitive consequences (what if Android’s store didn’t offer non-Google maps apps?). It also has consequences for information access and censorship, as we have seen with Apple removing the NYT app and VPN apps from the app store in China.

For my personal policy preferences, and probably for most people’s, we would have wanted FB to be much more controlling, in terms of denying access to these broad swathes of information. At the same time, the rule can’t be that platforms can’t support apps or share data unless the platform takes full legal responsibility for what the app does. Then we’d have few apps, and incumbent powerful platforms would hold even more power. So, there is a long-complicated policy discussion to be had here. It’s frustrating that we didn’t start it years ago when these apps launched, but hopefully at least we will have it now.
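
Keller’s point that developers should collect “only the data they actually need” is the principle of least privilege applied to platform APIs. Here is a minimal sketch of what scope-based enforcement looks like; the scope names, apps and data are invented for illustration and bear no relation to Facebook’s actual permission model.

```python
# Scope-based data sharing: each app is registered with only the
# permissions it needs, and every request is checked against that grant.
# Everything here is invented; the point is the least-privilege design.

USER_DATA = {
    "location":         (51.5, -0.12),        # what the star-map app needs
    "contacts":         ["alice", "bob"],     # what the chat app needs
    "friends_profiles": {"alice": {"likes": ["golf"]}},  # the firehose
}

APP_SCOPES = {   # grants approved by the platform (and ideally the user)
    "star-map-app": {"location"},
    "chat-app":     {"contacts"},
    # note: nobody gets "friends_profiles", the GSR-style firehose
}

class ScopeError(PermissionError):
    pass

def api_get(app_id, field):
    """Platform-side check: an app sees only fields it was granted."""
    if field not in APP_SCOPES.get(app_id, set()):
        raise ScopeError(f"{app_id} has no grant for {field!r}")
    return USER_DATA[field]

print(api_get("star-map-app", "location"))       # allowed: needed to draw the sky
try:
    api_get("star-map-app", "friends_profiles")  # denied: not needed
except ScopeError as err:
    print("blocked:", err)
```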

Why Facebook can’t change

My €0.02-worth on the bigger story behind the Cambridge Analytica shenanigans:

Watching Alexander Nix and his Cambridge Analytica henchmen bragging on Channel 4 News about their impressive repertoire of dirty tricks, the character who came irresistibly to mind was Gordon Liddy. Readers with long memories will recall him as the guy who ran the “White House Plumbers” during the presidency of Richard Nixon. Liddy directed the Watergate burglary in June 1972, detection of which started the long chain of events that eventually led to Nixon’s resignation two years later. For his pains, Liddy spent more than four years in jail, but went on to build a second career as a talk-show host and D-list celebrity. Reflecting on this, one wonders what job opportunities – other than those of pantomime villain and Savile Row mannequin – will now be available to Mr Nix.

The investigations into the company by Carole Cadwalladr, in the Observer, reveal that in every respect save one, CA looks like a standard-issue psychological warfare outfit of the kind retained by political parties – and sometimes national security services – since time immemorial. It did, however, have one unique selling proposition, namely its ability to offer “psychographic” services: voter-targeting strategies allegedly derived by analysing the personal data of more than 50 million US users of Facebook.

The story of how those data made the journey from Facebook’s servers to Cambridge Analytica’s is now widely known. But it is also widely misunderstood…

Read on

Facebook’s sudden attack of modesty

One of the most illuminating things you can do as a researcher is to go into Facebook not as a schmuck (i.e. user) but as an advertiser — just like your average Russian agent. Upon entering, you quickly begin to appreciate the amazing ingenuity and comprehensiveness of the machine that Zuckerberg & Co have constructed. It’s utterly brilliant, with a great user interface and lots of automated advice and help for choosing your targeted audience.

When doing this a while back — a few months after Trump’s election — I noticed that there was a list of case studies of different industries showing how effective a given targeting strategy could be in a particular application. One of those ‘industries’ was “Government and Politics” and among the case studies was a story of how a Facebook campaign had proved instrumental in helping a congressional candidate to win against considerable odds. I meant to grab some screenshots of this uplifting tale, but of course forgot to do so. When I went back later, the case study had, well, disappeared.

Luckily, someone else had the presence of mind to grab a screenshot. The Intercept, bless it, has the before-and-after comparison shown in the image above. They are Facebook screenshots from (left) June 2017 and (right) March 2018.

Interesting, n’est-ce pas?

In surveillance capitalism, extremism is good for business

This morning’s Observer column:

Zeynep Tufekci is one of the shrewdest writers on technology around. A while back, when researching an article on why (and how) Donald Trump appealed to those who supported him, she needed some direct quotes from the man himself and so turned to YouTube, which has a useful archive of videos of his campaign rallies. She then noticed something interesting. “YouTube started to recommend and ‘autoplay’ videos for me,” she wrote, “that featured white supremacist rants, Holocaust denials and other disturbing content.”

Since Tufekci was not in the habit of watching far-right fare on YouTube, she wondered if this was an exclusively rightwing phenomenon. So she created another YouTube account and started watching Hillary Clinton’s and Bernie Sanders’s campaign videos, following the accompanying links suggested by YouTube’s “recommender” algorithm. “Before long,” she reported, “I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of 11 September. As with the Trump videos, YouTube was recommending content that was more and more extreme.”

Read on
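
YouTube’s recommender is proprietary, so nobody outside can inspect it, but the drift Tufekci describes falls naturally out of any system that always autoplays whatever it predicts will hold the viewer longest. The toy model below is a deliberate caricature with invented numbers, not YouTube’s algorithm; its one assumption is that slightly more intense material keeps viewers watching slightly longer, and under that assumption a pure engagement objective walks step by step toward the extreme.

```python
# A caricature of engagement-ranked autoplay. Videos sit on a 0-to-1
# "intensity" axis for one topic; the invented watch-time model peaks
# for material a notch hotter than what was just watched. Repeatedly
# autoplaying the top-scoring candidate then drifts toward the extreme.
# This is not YouTube's real system; it only illustrates the incentive.

CATALOGUE = {                    # video -> intensity on some topic axis
    "campaign rally clip":  0.20,
    "partisan commentary":  0.40,
    "angry monologue":      0.60,
    "conspiracy explainer": 0.80,
    "extremist rant":       0.95,
}

def predicted_watch_time(current, candidate):
    """Toy assumption: engagement peaks for videos 0.2 'hotter' than the
    one just watched, and falls off for bigger jumps either way."""
    return 1.0 - abs((candidate - current) - 0.2)

def autoplay_chain(start, hops):
    chain, current = [start], CATALOGUE[start]
    for _ in range(hops):
        nxt = max((v for v in CATALOGUE if v != chain[-1]),
                  key=lambda v: predicted_watch_time(current, CATALOGUE[v]))
        chain.append(nxt)
        current = CATALOGUE[nxt]
    return chain

print(" -> ".join(autoplay_chain("campaign rally clip", 4)))
# campaign rally clip -> partisan commentary -> angry monologue
#   -> conspiracy explainer -> extremist rant
```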

Facebook’s new gateway drug for kids

This morning’s Observer column:

In one of those coincidences that give irony a bad name, Facebook launched a new service for children at the same time that a moral panic was sweeping the UK about the dangers of children using live-streaming apps that enable anyone to broadcast video directly from a smartphone or a tablet. The BBC showed a scary example of what can happen. A young woman who works as an internet safety campaigner posed as a 14-year-old girl to find out what occurs when a young female goes online using one of these streaming services…

Read on

On not being evil

This morning’s Observer column:

The motto “don’t be evil” has always seemed to me to be a daft mantra for a public company, but for years that was the flag under which Google sailed. It was a heading in the letter that the two founders wrote to the US Securities and Exchange Commission prior to the company’s flotation on the Nasdaq stock market in 2004. “We believe strongly,” Sergey Brin and Larry Page declared, “that in the long term, we will be better served – as shareholders and in all other ways – by a company that does good things for the world even if we forgo some short-term gains. This is an important aspect of our culture and is broadly shared within the company.” Two years ago, when Google morphed into Alphabet – its new parent company – the motto changed. Instead of “don’t be evil” it became “do the right thing”.

Heartwarming, eh? But still a strange motto for a public corporation. I mean to say, what’s “right” in this context? And who decides? Since Google/Alphabet does not get into specifics, let me help them out. The “right thing” is “whatever maximises shareholder value”, because in our crazy neoliberal world that’s what public corporations do. In fact, I suspect that if Google decided that doing the right thing might have an adverse impact on the aforementioned value, then its directors would be sued by activist shareholders for dereliction of their fiduciary duty.

Which brings me to YouTube Kids…

Read on