Why Facebook can’t change

My €0.02-worth on the bigger story behind the Cambridge Analytica shenanigans:

Watching Alexander Nix and his Cambridge Analytica henchmen bragging on Channel 4 News about their impressive repertoire of dirty tricks, the character who came irresistibly to mind was Gordon Liddy. Readers with long memories will recall him as the guy who ran the “White House Plumbers” during the presidency of Richard Nixon. Liddy directed the Watergate burglary in June 1972, detection of which started the long chain of events that eventually led to Nixon’s resignation two years later. For his pains, Liddy spent more than four years in jail, but went on to build a second career as a talk-show host and D-list celebrity. Reflecting on this, one wonders what job opportunities – other than those of pantomime villain and Savile Row mannequin – will now be available to Mr Nix.

The investigations into the company by Carole Cadwalladr, in the Observer, reveal that in every respect save one important one, CA looks like a standard-issue psychological warfare outfit of the kind retained by political parties – and sometimes national security services – since time immemorial. It did, however, have one unique selling proposition, namely its ability to offer “psychographic” services: voter-targeting strategies allegedly derived by analysing the personal data of more than 50 million US users of Facebook.

The story of how those data made the journey from Facebook’s servers to Cambridge Analytica’s is now widely known. But it is also widely misunderstood…

Read on

Why you can’t believe what you see (or hear)

This morning’s Observer column:

When John F Kennedy was assassinated in Dallas on 22 November 1963, he was on his way to deliver a speech to the assembled worthies of the city. A copy of his script for the ill-fated oration was later presented by Lyndon Johnson to Stanley Marcus, head of the department store chain Neiman Marcus, whose daughter was in the expectant audience that day.

The text has long been available on the internet and it makes for poignant reading, not just because of what happened at Dealey Plaza that day, but because large chunks of it look eerily prescient in the age of Trump. JFK was a terrific public speaker who employed superb speechwriters (especially Theodore Sorensen). His speeches were invariably elegant and memorable: he had a great eye for a good phrase, and his delivery was usually faultless. So his audience in Dallas knew that they were in for a treat – until Lee Harvey Oswald terminated the dream.

Last week, 55 years on, we finally got to hear what Kennedy’s audience might have heard…

Read on

In surveillance capitalism, extremism is good for business

This morning’s Observer column:

Zeynep Tufekci is one of the shrewdest writers on technology around. A while back, when researching an article on why (and how) Donald Trump appealed to those who supported him, she needed some direct quotes from the man himself and so turned to YouTube, which has a useful archive of videos of his campaign rallies. She then noticed something interesting. “YouTube started to recommend and ‘autoplay’ videos for me,” she wrote, “that featured white supremacist rants, Holocaust denials and other disturbing content.”

Since Tufekci was not in the habit of watching far-right fare on YouTube, she wondered if this was an exclusively rightwing phenomenon. So she created another YouTube account and started watching Hillary Clinton’s and Bernie Sanders’s campaign videos, following the accompanying links suggested by YouTube’s “recommender” algorithm. “Before long,” she reported, “I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of 11 September. As with the Trump videos, YouTube was recommending content that was more and more extreme.”

Read on

How to stay sane on Twitter: ignore retweets

This morning’s Observer column:

When Twitter first broke cover in July 2006, the initial reaction in the non-geek community was derisive incredulity. First of all, there was the ludicrous idea of a “tweet” – not to mention the metaphor of “twittering”, which, after all, is what small birds do. Besides, what could one usefully say in 140 characters? To the average retired colonel (AKA Daily Telegraph reader), Twitter summed up the bird-brained frivolity of the internet era, providing further evidence that the world was going to the dogs.

And now? It turns out that the aforementioned colonel might have been right. For one of the things you can do with a tweet is declare nuclear war. Another thing you can do with Twitter is to bypass the mainstream media, ignore the opinion polls, spread lies and fake news without let or hindrance and get yourself elected president of the United States.

How did it come to this?

Read on

Fixing the future?

My Observer review of Andrew Keen’s How to Fix the Future: Staying Human in the Digital Age:

Many years ago the cultural critic Neil Postman predicted that the future of humanity lay somewhere in the area between the dystopian nightmares of two English writers – George Orwell and Aldous Huxley. Orwell believed that we would be destroyed by the things we fear – surveillance and thought-control; Huxley thought that our undoing would be the things that delight us – that our rulers would twig that entertainment is more efficient than coercion as a means of social control.

Then we invented the internet, a technology that – it turned out – gave us both nightmares at once: comprehensive surveillance by states and corporations on the one hand; and, on the other, a strange kind of passive addiction to devices, apps and services which, like the drug soma in Huxley’s Brave New World, possess “all the advantages of Christianity and alcohol and none of their defects”.

The great irony, of course, is that not all of this was inevitable…

Read on

What happens in China stays in China. Ask Apple

This morning’s Observer column:

Here’s your starter for 10. Question: Apple’s website contains the following bold declaration: “At Apple we believe privacy is a fundamental human right.” What ancient English adage does this bring to mind? Answer: “Fine words butter no parsnips.” In other words, what matters is not what you say, but what you do.

What brings this to mind is the announcement that from now on, iCloud data generated by Apple users with a mainland Chinese account will be stored and managed by a Chinese data management firm – Guizhou-Cloud Big Data (GCBD). “With effect from 28 February 2018,” the notice reads, “iCloud services associated with your Apple ID will be operated by GCBD. Use of these services and all the data you store with iCloud – including photos, videos, documents and backups – will be subject to the terms and conditions of iCloud operated by GCBD.”

Read on

Managing the future that’s already here

This morning’s Observer column:

As the science fiction novelist William Gibson famously observed: “The future is already here – it’s just not very evenly distributed.” I wish people would pay more attention to that adage whenever the subject of artificial intelligence (AI) comes up. Public discourse about it invariably focuses on the threat (or promise, depending on your point of view) of “superintelligent” machines, ie ones that display human-level general intelligence, even though such devices have been 20 to 50 years away ever since we first started worrying about them. The likelihood (or mirage) of such machines still remains a distant prospect, a point made by the leading AI researcher Andrew Ng, who said that he worries about superintelligence in the same way that he frets about overpopulation on Mars.

That seems about right to me…

Read on

Regulating the cloud

This morning’s Observer column:

Cloud computing is just a metaphor. It has its origins in the way network engineers in the late-1970s used to represent the internet as an amorphous entity when they were discussing what was happening with computers at a local level. They just drew the net as a cartoonish cloud to represent a fuzzy space in which certain kinds of taken-for-granted communication activities happened. But since clouds are wispy, insubstantial things that some people love, the fact that what went on in the computing cloud actually involved inscrutable, environmentally destructive and definitely non-fuzzy server farms owned by huge corporations led to suspicions that the metaphor was actually a cosy euphemism, formulated to obscure a more sinister reality…

Read on

Theresa May’s pious hopes for Facebook

This morning’s Observer column:

It has taken an age, but at last politicians seem to be waking up to the societal problems posed by the dominance of certain tech firms – notably Facebook, Twitter and Google – and in particular the way they are allowing their users to pollute the public sphere with extremist rhetoric, hate speech, trolling and multipurpose abusiveness.

The latest occupant of the “techlash” bandwagon is Theresa May, who at the time of writing was still the UK’s prime minister…

Read on

Amazon’s move into healthcare

This morning’s Observer column about the collaboration between Amazon, Warren Buffett and JP Morgan:

Launching the initiative with his customary folksy bluntness, Buffett said that “the ballooning costs of healthcare act as a hungry tapeworm on the American economy. Our group does not come to this problem with answers. But we also do not accept it as inevitable.” If this – plus the fact that the new venture is to be a not-for-profit enterprise – was intended to be soothing, then it failed. The announcement immediately wiped billions off the valuations of the corporate tapeworms that have for decades fastened like leeches on the US healthcare system. And it’s not Buffett that scares them, but Jeff Bezos, Amazon’s chief executive and founder.

They’re right to be scared…

Read on