Google’s big move into ethics-theatre backfires.

This morning’s Observer column:

Given that the tech giants, which have been ethics-free zones from their foundations, owe their spectacular growth partly to the fact that they have, to date, been entirely untroubled either by legal regulation or scruples about exploiting taxation loopholes, this Damascene conversion is surely something to be welcomed, is it not? Ethics, after all, is concerned with the moral principles that affect how individuals make decisions and how they lead their lives.

That charitable thought is unlikely to survive even a cursory inspection of what is actually going on here. In an admirable dissection of the fourth of Google’s “principles” (“Be accountable to people”), for example, Prof David Watts reveals that, like almost all of these principles, it has the epistemological status of pocket lint or those exhortations to be kind to others one finds on evangelical websites. Does it mean accountable to “people” in general? Or just to Google’s people? Or to someone else’s people (like an independent regulator)? Answer comes there none from the code.

Warming to his task, Prof Watts continues: “If Google’s AI algorithms mistakenly conclude I am a terrorist and then pass this information on to national security agencies who use the information to arrest me, hold me incommunicado and interrogate me, will Google be accountable for its negligence or for contributing to my false imprisonment? How will it be accountable? If I am unhappy with Google’s version of accountability, to whom do I appeal for justice?”

Quite so. But then Google goes and doubles down on absurdity with its prestigious “advisory council” that “will consider some of Google’s most complex challenges that arise under our AI Principles, such as facial recognition and fairness in machine learning, providing diverse perspectives to inform our work”…

Read on

After I’d written the column, Google announced that it was dissolving its ethics advisory council. So we had to add this:

Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity.

That still leaves the cynical absurdity of Google’s AI ‘principles’ to be addressed, though.

Moral crumple zones

This morning’s Observer column:

This mindset prompts Dr Elish to coin the term “moral crumple zone” to describe the role assigned to humans who find themselves in the positions that the Three Mile Island operators, the Air France pilots – and the safety driver in the Uber car – occupied. It describes how responsibility for an action may be wrongly attributed to a human being who had limited control over the behaviour of an automated or autonomous system.

“While the crumple zone in a car is meant to protect the human driver,” she writes, “the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator. What is unique about the concept of a moral crumple zone is that it highlights how structural features of a system and the media’s portrayal of accidents may inadvertently take advantage of human operators (and their tendency to become “liability sponges”) to fill the gaps in accountability that may arise in the context of new and complex systems.”

Read on

Mainstreaming atrocity

This morning’s Observer column:

The most worrying thought that comes from immersion in accounts of the tech companies’ struggle against the deluge of uploads is not so much that murderous fanatics seek publicity and notoriety from livestreaming their atrocities on the internet, but that astonishing numbers of other people are not just receptive to their messages, but seem determined to boost and amplify their impact by “sharing” them.

And not just sharing them in the sense of pressing the “share” button. What YouTube engineers found was that the deluge contained lots of copies and clips of the Christchurch video that had been deliberately tweaked so that they would not be detected by the company’s AI systems. A simple way of doing this, it turned out, was to upload a video recording of a computer screen taken from an angle. The content comes over loud and clear, but the automated filter doesn’t recognise it.
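
Why does such a crude trick work? A toy sketch may help – this is my own illustration, not YouTube’s actual matching system. It computes a 64-bit “average hash”, one of the simplest perceptual fingerprints, and shows that even a modest geometric distortion of the kind produced by filming a screen at an angle flips fingerprint bits, so an exact match against the original fails.

```python
# Toy perceptual fingerprint (an "average hash"); purely illustrative,
# not YouTube's actual system. Requires only numpy.
import numpy as np

def average_hash(img, size=8):
    """Downsample to size x size by block-averaging, threshold at the mean."""
    h, w = img.shape
    img = img[:h - h % size, :w - w % size]   # crop to a multiple of size
    h, w = img.shape
    small = img.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return small > small.mean()               # 8x8 boolean fingerprint

def hamming(a, b):
    """Number of fingerprint bits on which two images disagree."""
    return int(np.count_nonzero(a != b))

frame = np.tile(np.linspace(0, 1, 64), (64, 1))  # stand-in for a video frame

# Simulate filming the screen at an angle: shear rows progressively sideways.
sheared = np.array([np.roll(row, i // 4) for i, row in enumerate(frame)])

print(hamming(average_hash(frame), average_hash(frame)))    # 0: identical copy
print(hamming(average_hash(frame), average_hash(sheared)))  # > 0: match fails
```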

That there are tens – perhaps hundreds – of thousands of people across the world who will do this kind of thing is a really scary discovery…

Read on

Zuckerberg’s latest ‘vision’

This morning’s Observer column:

Dearly beloved, our reading this morning is taken from the latest Epistle of St Mark to the schmucks – as members of his 2.3 billion-strong Church of Facebook are known. The purpose of the epistle is to outline a new “vision” that St Mark has for the future of privacy, a subject that is very close to his wallet – which is understandable, given that he has acquired an unconscionable fortune from undermining it.

“As I think about the future of the internet,” he writes (revealingly conflating his church with the infrastructure on which it runs), “I believe a privacy-focused communications platform will become even more important than today’s open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.”

Quite so…

Read on

The 5G enigma

This morning’s Observer column:

The dominant company in the market at the moment is Huawei, a $100bn giant which is the world’s largest supplier of telecoms equipment and its second largest smartphone maker. In the normal course of events, therefore, we would expect that the core networks of western mobile operators would have a lot of its kit in them. And initially, that’s what looked like happening. But in recent months someone has pressed the pause button.

The prime mover in this is the US, which has banned government agencies from using Huawei (and ZTE) equipment and called on its allies to do the same. The grounds for this are national security concerns about hidden “backdoors”: it would be risky to have a company so close to the Chinese government building key parts of American critical infrastructure. Last week Huawei filed a lawsuit against the US government over the ban. New Zealand and Australia have obligingly heeded the call, blocking the use of Huawei’s equipment in their 5G networks. And last December BT announced that it was even removing Huawei kit from parts of its 4G network.

Other countries – notably Japan and Germany – have proved less compliant; the German Data Commissioner was even tactless enough to point out that “the US itself once made sure that backdoors were built into Cisco hardware”.

The UK’s position is interestingly enigmatic…

Read on

The dark side of recommendation engines

This morning’s Observer column:

My eye was caught by a headline in Wired magazine: “When algorithms think you want to die”. Below it was an article by two academic researchers, Ysabel Gerrard and Tarleton Gillespie, about the “recommendation engines” that are a central feature of social media and e-commerce sites.

Everyone who uses the web is familiar with these engines. A recommendation algorithm is what prompts Amazon to tell me that since I’ve bought Custodians of the Internet, Gillespie’s excellent book on the moderation of online content, I might also be interested in Safiya Umoja Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism and a host of other books about algorithmic power and bias. In that particular case, the algorithm’s guess is accurate and helpful: it informs me about stuff that I should have known about but hadn’t.
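
For the curious, the core of such an engine is surprisingly small. Here is a minimal item-to-item sketch – the purchase data are invented for illustration, and Amazon’s production system is of course vastly more elaborate – scoring books by how often the same customers bought both.

```python
# Minimal item-to-item recommender sketch; hypothetical data, not Amazon's system.
import numpy as np

# Rows are customers, columns are books; 1 means "bought". Invented data.
purchases = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 0, 1],
])
titles = ["Custodians of the Internet", "Algorithms of Oppression",
          "Weapons of Math Destruction", "An Unrelated Cookbook"]

# Cosine similarity between book columns: books bought together score high.
norms = np.linalg.norm(purchases, axis=0)
sim = (purchases.T @ purchases) / np.outer(norms, norms)
np.fill_diagonal(sim, 0.0)        # never recommend the book itself

just_bought = 0                   # this customer bought "Custodians..."
best = int(np.argmax(sim[just_bought]))
print(f"Since you bought {titles[just_bought]!r}, try {titles[best]!r}")
```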

Recommendation engines are central to the “personalisation” of online content and were once seen as largely benign…

Read on

Xi Jinping’s Little Red App

This morning’s Observer column:

We need to update Marx’s famous aphorism that “history repeats itself, the first time as tragedy, the second time as farce”. Version 2.0 reads: history repeats itself, the first time as tragedy, the second time as an app. Readers with long memories will remember Mao Zedong, the chairman (for life) of the Chinese Communist party who, in 1966, launched his Cultural Revolution to preserve Chinese communism by purging remnants of capitalist and traditional elements from Chinese society and reimposing his ideas (aka Maoism) as the dominant ideology within the party. One propaganda aid devised for this purpose was a little red book, printed in the hundreds of millions, entitled Quotations From Chairman Mao Tse-tung.

The “revolution” unleashed chaos in China: millions of citizens were persecuted, suffering outrageous abuses including public humiliation, arbitrary imprisonment, torture, hard labour, sustained harassment, seizure of property and worse…

Read on

The inescapable infrastructure of the networked world

This morning’s Observer column:

“Quitting smoking is easy,” said Mark Twain. “I’ve done it hundreds of times.” Much the same goes for smartphones. As increasing numbers of people begin to realise that they have a smartphone habit, they begin to wonder if they should do something about the addiction. A few (a very few, in my experience) make the attempt, switching their phones off after work, say, and not rebooting them until the following morning. But almost invariably the dash for freedom fails and the chastened fugitive returns to the connected world.

The technophobic tendency to attribute this failure to lack of moral fibre should be resisted. It’s not easy to cut yourself off from a system that links you to friends, family and employer, all of whom expect you to be contactable and sometimes get upset when you’re not. There are powerful network effects in play here against which the individual addict is helpless. And while “just say no” may be a viable strategy in relation to some services (for example, Facebook), it is now a futile one in relation to the networked world generally. We’re long past the point of no return in our connected lives.

Most people don’t realise this. They imagine that if they decide to stop using Gmail or Microsoft Outlook or never buy another book from Amazon then they have liberated themselves from the tentacles of these giants. If that is indeed what they believe, then Kashmir Hill has news for them…
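
Hill’s point is that the giants are infrastructure, not just brands, so quitting the visible services doesn’t sever the connection. A rough way to see this for yourself is sketched below – it checks a hostname against the address ranges Amazon publishes for its AWS cloud; the hostname is merely an illustrative example, not one from the column.

```python
# Sketch: does a given website sit on Amazon's cloud? Uses the IP ranges
# AWS publishes at https://ip-ranges.amazonaws.com/ip-ranges.json.
import ipaddress
import json
import socket
import urllib.request

AWS_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(AWS_RANGES_URL) as resp:
    prefixes = json.load(resp)["prefixes"]
aws_nets = [ipaddress.ip_network(p["ip_prefix"]) for p in prefixes]

host = "www.netflix.com"          # illustrative example
addr = ipaddress.ip_address(socket.gethostbyname(host))
on_aws = any(addr in net for net in aws_nets)
print(f"{host} resolves to {addr}; served from AWS address space: {on_aws}")
```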

Read on

Why, sooner or later, societies are going to have to rein in the tech giants

My OpEd piece in yesterday’s Observer:

Spool forward to the tragic case of Molly Russell, the 14-year-old who killed herself after exploring her depression on Instagram. When her family looked into her account, they found sombre material about depression and suicide. Her father said that he believed the Facebook-owned platform had “helped kill my daughter”. This prompted Matt Hancock, the health secretary, to warn social media platforms to “purge” material relating to self-harm and suicide or face legislation that would compel them to do so. In response, Instagram and Pinterest (another social media outfit) issued the standard bromides about how they were embarking on a “full review” of their policies etc.

So is Molly’s case a crisis or a scandal? You know the answer. Nothing much will change because the business models of the platforms preclude it. Their commercial imperatives are remorselessly to increase both the number of their users and the intensity of those users’ “engagement” with the platforms. That’s what keeps the monetisable data flowing. Tragedies such as Molly Russell’s suicide are regrettable (and of course have PR downsides) but are really just the cost of running such a profitable business.

Asking these companies to change their business model, therefore, is akin to “asking a giraffe to shorten its neck”, as Shoshana Zuboff puts it in her fiery new book, The Age of Surveillance Capitalism…

Read on

How the technical is political

This morning’s Observer column:

The only computer game I’ve ever played involved no killing, zombies, heavily armed monsters or quests for hidden keys. It was called SimCity and involved developing a virtual city from a patch of undeveloped land. The game enabled you to determine where to place development zones, infrastructure (like roads and power plants), landmarks and public services such as schools, parks, hospitals and fire stations. You could decide the tax rate, budget and social policy for your city – populated by Sims (for “simulated persons”, I guess) who had to live and work in the three zones you created for them: residential had houses and apartment buildings, commercial had shops and offices, and industrial had factories, warehouses, laboratories and (oddly) farms.

SimCity was the brainchild of Will Wright, a software developer who had first made a splash with a shoot-’em-up (well, bomb-’em-flat) video game in which the player controls a helicopter dropping bombs on islands. But he became more fascinated with the islands than with the weaponry and started to wonder what a virtual city would be like – and how it would work. What he came up with was magical for its time: it gave the player a feeling of omnipotence – you decided where Sims should live, whether their electricity should come from nukes, where schools and offices should be located, how much tax they paid…

What you discovered early on, though, was that your decisions had consequences…
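
The flavour of those consequences is easy to reproduce. Below is a toy feedback loop of my own devising – nothing like Wright’s actual simulation rules – showing how a single policy lever, the tax rate, feeds back on growth through the services it funds: starve the budget and the city stagnates, squeeze the residents and they leave.

```python
# Toy city feedback loop; invented rules for illustration, not SimCity's.
def simulate(tax_rate, years=20):
    """Taxes fund services; residents weigh tax burden against service quality."""
    population = 1000.0
    for _ in range(years):
        revenue = population * tax_rate
        # Service quality saturates at 1.0 ("adequate") once funding
        # reaches 0.1 per citizen per year.
        services = min(1.0, revenue / (population * 0.1))
        # Low taxes attract residents; failing services drive them away.
        growth = 0.05 * (1 - tax_rate / 0.3) + 0.05 * (services - 1.0)
        population *= 1 + growth
    return round(population)

for rate in (0.02, 0.10, 0.25):
    print(f"tax {rate:.0%}: population after 20 years = {simulate(rate)}")
```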

Read on