How Facebook got into trouble, and why it can’t fix itself

My Observer OpEd about the Zuckerberg Apology Tour:

Ponder this … and weep. The United States, theoretically a mature democracy of 327 million souls, is ruled by a 71-year-old unstable narcissist with a serious social media habit. And the lawmakers of this republic have hauled up before them a 34-year-old white male, one Mark Elliot Zuckerberg, the sole and impregnable ruler of a virtual country of about 2.2 billion people who stands accused of unwittingly facilitating the election of said narcissist by allowing Russian agents and other bad actors to exploit the surveillance apparatus of his – Zuckerberg’s – virtual state.

How did we get into this preposterous mess?

Read on

Facebook is just the tip of the iceberg

This morning’s Observer column:

If a picture is worth a thousand words, then a good metaphor must be worth a million. In an insightful blog post published on 23 March, Doc Searls, one of the elder statesmen of the web, managed to get both for the price of one. His post was headed by one of those illustrations of an iceberg showing that only the tip is the visible part, while the great bulk of the object lies underwater. In this case, the tip was adorned with the Facebook logo while the submerged mass represented “Every other website making money from tracking-based advertising”. The moral: “Facebook’s Cambridge Analytica problems are nothing compared to what’s coming for all of online publishing.”

The proximate cause of Searls’s essay was encountering a New York Times op-ed piece entitled Facebook’s Surveillance Machine by Zeynep Tufekci. It wasn’t the (unexceptional) content of the article that interested Searls, however, but what his ad-blocking software told him about the Times page in which the essay appeared. The software had detected no fewer than 13 hidden trackers on the page. (I’ve just checked and my Ghostery plug-in has detected 19.)

Read on

The ethics of working for surveillance capitalists

This morning’s Observer column:

In a modest way, Kosinski, Stillwell and Graepel are the contemporary equivalents of [Leo] Szilard and the theoretical physicists of the 1930s who were trying to understand subatomic behaviour. But whereas the physicists’ ideas revealed a way to blow up the planet, the Cambridge researchers had inadvertently discovered a way to blow up democracy.

Which makes one wonder about the programmers – or software engineers, to give them their posh title – who write the manipulative algorithms that determine what Facebook users see in their news feeds, or the “autocomplete” suggestions that Google searchers see as they begin to type, not to mention the extremist videos that are “recommended” after you’ve watched something on YouTube. At least the engineers who built the first atomic bombs were racing against the terrible possibility that Hitler would get there before them. But for what are the software wizards at Facebook or Google working 70-hour weeks? Do they genuinely believe they are making the world a better place? And does the hypocrisy of the business model of their employers bother them at all?

These thoughts were sparked by reading a remarkable essay by Yonatan Zunger in the Boston Globe, arguing that the Cambridge Analytica scandal suggests that computer science now faces an ethical reckoning analogous to those that other academic fields have had to confront…

Read on

Why Facebook can’t change

My €0.02-worth on the bigger story behind the Cambridge Analytica shenanigans:

Watching Alexander Nix and his Cambridge Analytica henchmen bragging on Channel 4 News about their impressive repertoire of dirty tricks, the character who came irresistibly to mind was G Gordon Liddy. Readers with long memories will recall him as the guy who ran the “White House Plumbers” during the presidency of Richard Nixon. Liddy directed the Watergate burglary in June 1972, detection of which started the long chain of events that eventually led to Nixon’s resignation two years later. For his pains, Liddy spent more than four years in jail, but went on to build a second career as a talk-show host and D-list celebrity. Reflecting on this, one wonders what job opportunities – other than those of pantomime villain and Savile Row mannequin – will now be available to Mr Nix.

The investigations into the company by Carole Cadwalladr, in the Observer, reveal that in every respect save one, CA looks like a standard-issue psychological warfare outfit of the kind retained by political parties – and sometimes national security services – since time immemorial. It did, however, have one unique selling proposition, namely its ability to offer “psychographic” services: voter-targeting strategies allegedly derived by analysing the personal data of more than 50 million US users of Facebook.

The story of how those data made the journey from Facebook’s servers to Cambridge Analytica’s is now widely known. But it is also widely misunderstood…

Read on

Why you can’t believe what you see (or hear)

This morning’s Observer column:

When John F Kennedy was assassinated in Dallas on 22 November 1963, he was on his way to deliver a speech to the assembled worthies of the city. A copy of his script for the ill-fated oration was later presented by Lyndon Johnson to Stanley Marcus, head of the department store chain Neiman Marcus, whose daughter was in the expectant audience that day.

The text has long been available on the internet and it makes for poignant reading, not just because of what happened at Dealey Plaza that day, but because large chunks of it look eerily prescient in the age of Trump. JFK was a terrific public speaker who employed superb speechwriters (especially Theodore Sorensen). His speeches were invariably elegant and memorable: he had a great eye for a good phrase, and his delivery was usually faultless. So his audience in Dallas knew that they were in for a treat – until Lee Harvey Oswald terminated the dream.

Last week, 55 years on, we finally got to hear what Kennedy’s audience might have heard…

Read on

In surveillance capitalism, extremism is good for business

This morning’s Observer column:

Zeynep Tufekci is one of the shrewdest writers on technology around. A while back, when researching an article on why (and how) Donald Trump appealed to those who supported him, she needed some direct quotes from the man himself and so turned to YouTube, which has a useful archive of videos of his campaign rallies. She then noticed something interesting. “YouTube started to recommend and ‘autoplay’ videos for me,” she wrote, “that featured white supremacist rants, Holocaust denials and other disturbing content.”

Since Tufekci was not in the habit of watching far-right fare on YouTube, she wondered if this was an exclusively rightwing phenomenon. So she created another YouTube account and started watching Hillary Clinton’s and Bernie Sanders’s campaign videos, following the accompanying links suggested by YouTube’s “recommender” algorithm. “Before long,” she reported, “I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of 11 September. As with the Trump videos, YouTube was recommending content that was more and more extreme.”

Read on

How to stay sane on Twitter: ignore retweets

This morning’s Observer column:

When Twitter first broke cover in July 2006, the initial reaction in the non-geek community was derisive incredulity. First of all, there was the ludicrous idea of a “tweet” – not to mention the metaphor of “twittering”, which, after all, is what small birds do. Besides, what could one usefully say in 140 characters? To the average retired colonel (AKA Daily Telegraph reader), Twitter summed up the bird-brained frivolity of the internet era, providing further evidence that the world was going to the dogs.

And now? It turns out that the aforementioned colonel might have been right. For one of the things you can do with a tweet is declare nuclear war. Another thing you can do with Twitter is to bypass the mainstream media, ignore the opinion polls, spread lies and fake news without let or hindrance and get yourself elected president of the United States.

How did it come to this?

Read on

Fixing the future?

My Observer review of Andrew Keen’s How to Fix the Future: Staying Human in the Digital Age:

Many years ago the cultural critic Neil Postman predicted that the future of humanity lay somewhere in the area between the dystopian nightmares of two English writers – George Orwell and Aldous Huxley. Orwell believed that we would be destroyed by the things we fear – surveillance and thought-control; Huxley thought that our undoing would be the things that delight us – that our rulers would twig that entertainment is more efficient than coercion as a means of social control.

Then we invented the internet, a technology that – it turned out – gave us both nightmares at once: comprehensive surveillance by states and corporations on the one hand; and, on the other, a strange kind of passive addiction to devices, apps and services which, like the drug soma in Huxley’s Brave New World, possess “all the advantages of Christianity and alcohol and none of their defects”.

The great irony, of course, is that not all of this was inevitable…

Read on

What happens in China stays in China. Ask Apple

This morning’s Observer column:

Here’s your starter for 10. Question: Apple’s website contains the following bold declaration: “At Apple we believe privacy is a fundamental human right.” What ancient English adage does this bring to mind? Answer: “Fine words butter no parsnips.” In other words, what matters is not what you say, but what you do.

What brings this to mind is the announcement that from now on, iCloud data generated by Apple users with a mainland Chinese account will be stored and managed by a Chinese data management firm – Guizhou-Cloud Big Data (GCBD). “With effect from 28 February 2018,” the notice reads, “iCloud services associated with your Apple ID will be operated by GCBD. Use of these services and all the data you store with iCloud – including photos, videos, documents and backups – will be subject to the terms and conditions of iCloud operated by GCBD.”

Read on

Managing the future that’s already here

This morning’s Observer column:

As the science fiction novelist William Gibson famously observed: “The future is already here – it’s just not very evenly distributed.” I wish people would pay more attention to that adage whenever the subject of artificial intelligence (AI) comes up. Public discourse about it invariably focuses on the threat (or promise, depending on your point of view) of “superintelligent” machines, ie ones that display human-level general intelligence, even though such devices have been 20 to 50 years away ever since we first started worrying about them. The likelihood (or mirage) of such machines still remains a distant prospect, a point made by the leading AI researcher Andrew Ng, who said that he worries about superintelligence in the same way that he frets about overpopulation on Mars.

That seems about right to me…

Read on