How Facebook got into trouble, and why it can’t fix itself

My Observer OpEd about the Zuckerberg Apology Tour:

Ponder this … and weep. The United States, theoretically a mature democracy of 327 million souls, is ruled by a 71-year-old unstable narcissist with a serious social media habit. And the lawmakers of this republic have hauled up before them a 33-year-old white male, one Mark Elliot Zuckerberg, the sole and impregnable ruler of a virtual country of about 2.2 billion people who stands accused of unwittingly facilitating the election of said narcissist by allowing Russian agents and other bad actors to exploit the surveillance apparatus of his – Zuckerberg’s – virtual state.

How did we get into this preposterous mess?

Read on

Fixing Facebook: the only two options by a guy who knows how the sausage is made

James Fallows quotes from a fascinating email exchange he had with his friend Michael Jones, who used to work at Google (he was a key figure in the evolution of Google Earth and later the company’s Chief Technology Advocate):

So, how might FB fix itself? What might government regulators seek? What could make Facebook likable? It is very simple. There are just two choices:

a. FB stays in its send-your-PII¹-to-their-customers business, and then must be regulated and the customers validated precisely as Acxiom and Experian in the credit world or doctors and hospitals in the HIPAA healthcare world; or,

b. FB joins Google and ALL OTHER WEB ADVERTISERS in keeping PII private, never letting it out, and anonymously connecting advertisers with its users for their mutual benefit.

I don’t get a vote, but I like (b) and see that as the right path for civil society. There is no way that choice (a) is not a loathsome and destructive force in all things—in my personal opinion it seems that making people’s pillow-talk into a marketing weapon is indeed a form of evil.

This is why I never use Facebook; I know how the sausage is made.


  1. PII = Personally Identifiable Information 
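
To make the distinction between Jones’s two options concrete, here is a minimal sketch (in Python, with entirely hypothetical class and method names — nothing here corresponds to any real ad API, and Facebook’s actual systems are vastly more elaborate) of the option (b) architecture: PII stays inside the platform boundary, advertisers submit targeting criteria, and all they ever get back is an aggregate result.

```python
# Sketch of option (b): PII never leaves the platform; advertisers see
# only aggregates. All names here are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class User:
    name: str        # PII - never leaves the platform
    email: str       # PII - never leaves the platform
    interests: set


class AdPlatform:
    def __init__(self):
        self._users = []  # the PII boundary: private to the platform

    def register(self, user: User) -> None:
        self._users.append(user)

    def place_ad(self, creative: str, target_interests: set) -> int:
        """Show `creative` to matching users; return only an aggregate
        impression count. The advertiser never sees user records."""
        impressions = 0
        for user in self._users:
            if user.interests & target_interests:
                # deliver `creative` to this user's feed (omitted)
                impressions += 1
        return impressions


platform = AdPlatform()
platform.register(User("Alice", "alice@example.com", {"cycling", "politics"}))
platform.register(User("Bob", "bob@example.com", {"cooking"}))

# Under option (a), the advertiser would receive the user records
# themselves; under option (b), it learns only that its ad reached
# one matching user.
print(platform.place_ad("Vote for bikes!", {"cycling"}))  # -> 1
```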

Why Zuckerberg is safe

From Nils Pratley in the Guardian:

Facebook’s board has heard the calls for the appointment of an independent chair, from New York City’s pension fund for example, and decided to ignore them.

In doing so, the board seems to have accepted Zuckerberg’s bizarrely loose version of accountability. Allowing the data of up to 87 million people to be “inappropriately shared” with Cambridge Analytica was “my responsibility”, he said in answer to a later question. It was also a “huge mistake” not to focus on abuse of data more generally. But, hey, “life is about learning from the mistakes and figuring out what you need to do to move forward”.

This breezy I-promise-to-do-better mantra would be understandable if offered by a school child who had fluffed an exam. But Zuckerberg is running the world’s eighth largest company and $50bn has just been removed from its stock market value in a scandal that, aside from raising deep questions about personal privacy and social media’s influence on democracy, may provoke a regulatory backlash.

In these circumstances, why wouldn’t a board ask whether it has the right governance structure? The motivation would be self-interest. First, there is a need to ensure that the company isn’t run entirely at the whim of a chief executive who is plainly a technological whizz but admits he failed to grasp Facebook’s responsibilities as the number of users exploded to 2 billion. Second, outsiders, including users, advertisers and politicians, want reassurance that Facebook has basic checks and balances in its boardroom.

The lack of interest in governance reform is explained, of course, by the fact that Zuckerberg has a stranglehold over Facebook’s voting shares. His economic interest is 16% but he has 60% of the votes and thus, for practical purposes, can’t easily be shifted from either of his roles…

QED.

This is the flip side of the determination of some tech founders to insulate themselves from the quarterly whims of Wall Street. The Google boys have the same arrangement. Given the malign short-termism of Wall St and the doctrine of maximising shareholder value, this might have seemed sensible or even enlightened at one time. Now it looks like bad corporate governance.
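
For anyone wondering how a 16% economic stake can translate into roughly 60% of the votes: it is the arithmetic of dual-class shares, in which founder-held Class B stock carries ten votes per share against one vote for ordinary Class A stock. A rough worked example — the share counts below are rounded assumptions chosen for illustration, not Facebook’s actual capitalisation:

```python
# Illustration of dual-class voting arithmetic. Share counts are
# assumed round numbers, not Facebook's actual figures.
CLASS_A_VOTES = 1    # ordinary shares: one vote each
CLASS_B_VOTES = 10   # founder shares: ten votes each

founder_class_b = 400_000_000    # assumed founder holding
public_class_a = 2_100_000_000   # assumed public float

total_shares = founder_class_b + public_class_a
economic_stake = founder_class_b / total_shares

founder_votes = founder_class_b * CLASS_B_VOTES
total_votes = founder_votes + public_class_a * CLASS_A_VOTES
voting_power = founder_votes / total_votes

print(f"economic stake: {economic_stake:.0%}")  # ~16%
print(f"voting power:   {voting_power:.0%}")    # ~66%
```

The exact voting percentage depends on the precise holdings, but the mechanism is the point: a modest minority of the equity, held in super-voting stock, yields an unassailable majority of the votes.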

The ethics of working for surveillance capitalists

This morning’s Observer column:

In a modest way, Kosinski, Stillwell and Graepel are the contemporary equivalents of [Leo] Szilard and the theoretical physicists of the 1930s who were trying to understand subatomic behaviour. But whereas the physicists’ ideas revealed a way to blow up the planet, the Cambridge researchers had inadvertently discovered a way to blow up democracy.

Which makes one wonder about the programmers – or software engineers, to give them their posh title – who write the manipulative algorithms that determine what Facebook users see in their news feeds, or the “autocomplete” suggestions that Google searchers see as they begin to type, not to mention the extremist videos that are “recommended” after you’ve watched something on YouTube. At least the engineers who built the first atomic bombs were racing against the terrible possibility that Hitler would get there before them. But for what are the software wizards at Facebook or Google working 70-hour weeks? Do they genuinely believe they are making the world a better place? And does the hypocrisy of the business model of their employers bother them at all?

These thoughts were sparked by reading a remarkable essay by Yonatan Zunger in the Boston Globe, arguing that the Cambridge Analytica scandal suggests that computer science now faces an ethical reckoning analogous to those that other academic fields have had to confront…

Read on

How Facebook thinks

Revealing leak of an internal memo by one of the company’s senior executives, sent on June 18, 2016. Here’s an excerpt:

We connect people.

That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide.

So we connect more people.

That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.

And still we connect people.

The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.

That isn’t something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.

That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.

The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don’t win. The ones everyone use win.

Says it all, really. Worth reading in full. Needless to say, Zuck ‘disagrees’ with it. Which brings the wonderful Mandy Rice-Davies to mind.

Why Facebook can’t change

My €0.02-worth on the bigger story behind the Cambridge Analytica shenanigans:

Watching Alexander Nix and his Cambridge Analytica henchmen bragging on Channel 4 News about their impressive repertoire of dirty tricks, the character who came irresistibly to mind was Gordon Liddy. Readers with long memories will recall him as the guy who ran the “White House Plumbers” during the presidency of Richard Nixon. Liddy directed the Watergate burglary in June 1972, detection of which started the long chain of events that eventually led to Nixon’s resignation two years later. For his pains, Liddy spent more than four years in jail, but went on to build a second career as a talk-show host and D-list celebrity. Reflecting on this, one wonders what job opportunities – other than those of pantomime villain and Savile Row mannequin – will now be available to Mr Nix.

The investigations into the company by Carole Cadwalladr, in the Observer, reveal that in every respect save one important one, CA looks like a standard-issue psychological warfare outfit of the kind retained by political parties – and sometimes national security services – since time immemorial. It did, however, have one unique selling proposition, namely its ability to offer “psychographic” services: voter-targeting strategies allegedly derived by analysing the personal data of more than 50 million US users of Facebook.

The story of how those data made the journey from Facebook’s servers to Cambridge Analytica’s is now widely known. But it is also widely misunderstood…

Read on

Facebook’s sudden attack of modesty

One of the most illuminating things you can do as a researcher is to go into Facebook not as a schmuck (i.e. user) but as an advertiser — just like your average Russian agent. Upon entering, you quickly begin to appreciate the amazing ingenuity and comprehensiveness of the machine that Zuckerberg & Co have constructed. It’s utterly brilliant, with a great user interface and lots of automated advice and help for choosing your targeted audience.

When doing this a while back — a few months after Trump’s election — I noticed that there was a list of case studies of different industries showing how effective a given targeting strategy could be in a particular application. One of those ‘industries’ was “Government and Politics” and among the case studies was a story of how a Facebook campaign had proved instrumental in helping a congressional candidate to win against considerable odds. I meant to grab some screenshots of this uplifting tale, but of course forgot to do so. When I went back later, the case study had, well, disappeared.

Luckily, someone else had the presence of mind to grab a screenshot. The Intercept, bless it, has the before-and-after comparison shown in the image above. They are Facebook screenshots from (left) June 2017 and (right) March 2018.

Interesting, n’est-ce pas?

What Facebook is for

From the Columbia Journalism Review:

Digital-journalism veteran David Cohn has argued that the network’s main purpose is not information so much as it is identity, and the construction by users of a public identity that matches the group they wish to belong to. This is why fake news is so powerful.

“The headline isn’t meant to inform somebody about the world,” wrote Cohn, a senior director at Advance Publications, which owns Condé Nast and Reddit. “The headline is a tool to be used by a person to inform others about who they are. ‘This is me,’ they say when they share that headline. ‘This is what I believe. This shows what tribe I belong to.’ It is virtue signaling.”

Twitter suffers from a similar problem, in the sense that many users seem to see their posts as a way of displaying (or arguing for) their beliefs rather than a way of exchanging verifiable news. But Facebook’s role in the spread of misinformation dwarfs Twitter’s, if only in sheer reach: 2 billion monthly users versus 330 million.

Theresa May’s pious hopes for Facebook

This morning’s Observer column:

It has taken an age, but at last politicians seem to be waking up to the societal problems posed by the dominance of certain tech firms – notably Facebook, Twitter and Google – and in particular the way they are allowing their users to pollute the public sphere with extremist rhetoric, hate speech, trolling and multipurpose abusiveness.

The latest occupant of the “techlash” bandwagon is Theresa May, who at the time of writing was still the UK’s prime minister…

Read on