Street View leads us down some unexpected pathways

This morning’s Observer column:

Street View was a product of Google’s conviction that it is easier to ask for forgiveness than for permission, an assumption apparently confirmed by the fact that most jurisdictions seemed to accept the photographic coup as a fait accompli. There was pushback in a few European countries, notably Germany and Austria, with citizens demanding that their properties be blurred out; there was also a row in 2010 when it was revealed that Google had for a time collected and stored data from unencrypted domestic wifi networks. But broadly speaking, the company got away with it.

Most of the pushback came from people worried about privacy. They objected to images showing, for example, men leaving strip clubs, protesters at an abortion clinic, sunbathers in bikinis and people engaging in, er, private activities in their own backyards. Some countries were bothered by the height of the cameras – in Japan and Switzerland, for instance, Google had to lower them so they couldn’t peer over fences and hedges.

These concerns were what one might call first-order ones, ie worries triggered by obvious dangers of a new technology. But with digital technology, the really transformative effects may be third- or fourth-order ones. So, for example, the internet leads to the web, which leads to the smartphone, which is what enabled Uber. And in that sense, the question with Street View from the beginning was: what will it lead to – eventually?

One possible answer emerged last week…

Read on

Toxic tech?

This morning’s Observer column:

The headline above an essay in a magazine published by the Association for Computing Machinery (ACM) caught my eye. “Facial recognition is the plutonium of AI”, it said. Since plutonium – a by-product of uranium-based nuclear power generation – is one of the most toxic materials known to humankind, this seemed like an alarmist metaphor, so I settled down to read.

The article, by a Microsoft researcher, Luke Stark, argues that facial-recognition technology – one of the current obsessions of the tech industry – is potentially so toxic for the health of human society that it should be treated like plutonium and restricted accordingly. You could spend a lot of time in Silicon Valley before you heard sentiments like these about a technology that enables computers to recognise faces in a photograph or from a camera…

Read on

Finally, a government takes on the tech companies

This morning’s Observer column:

On Monday last week, the government published its long-awaited white paper on online harms. It was launched at the British Library by the two cabinet ministers responsible for it – Jeremy Wright of the Department for Digital, Culture, Media and Sport (DCMS) and the home secretary, Sajid Javid. Wright was calm, modest and workmanlike in his introduction. Javid was, well, more macho. The social media companies had had their chances to put their houses in order. “They failed,” he declared. “I won’t let them fail again.” One couldn’t help feeling that he had one eye on the forthcoming hustings for the Tory leadership.

Nevertheless, this white paper is a significant document…

Read on

Google’s big move into ethics-theatre backfires

This morning’s Observer column:

Given that the tech giants, which have been ethics-free zones from their foundations, owe their spectacular growth partly to the fact that they have, to date, been entirely untroubled either by legal regulation or scruples about exploiting taxation loopholes, this Damascene conversion is surely something to be welcomed, is it not? Ethics, after all, is concerned with the moral principles that affect how individuals make decisions and how they lead their lives.

That charitable thought is unlikely to survive even a cursory inspection of what is actually going on here. In an admirable dissection of the fourth of Google’s “principles” (“Be accountable to people”), for example, Prof David Watts reveals that, like almost all of these principles, it has the epistemological status of pocket lint or those exhortations to be kind to others one finds on evangelical websites. Does it mean accountable to “people” in general? Or just to Google’s people? Or to someone else’s people (like an independent regulator)? Answer comes there none from the code.

Warming to his task, Prof Watts continues: “If Google’s AI algorithms mistakenly conclude I am a terrorist and then pass this information on to national security agencies who use the information to arrest me, hold me incommunicado and interrogate me, will Google be accountable for its negligence or for contributing to my false imprisonment? How will it be accountable? If I am unhappy with Google’s version of accountability, to whom do I appeal for justice?”

Quite so. But then Google goes and doubles down on absurdity with its prestigious “advisory council” that “will consider some of Google’s most complex challenges that arise under our AI Principles, such as facial recognition and fairness in machine learning, providing diverse perspectives to inform our work”…

Read on

After I’d written the column, Google announced that it was dissolving its ethics advisory council. So we had to add this:

Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity.

That still leaves the cynical absurdity of Google’s AI ‘principles’ to be addressed, though.

Moral crumple zones

This morning’s Observer column:

This mindset prompts Dr Elish to coin the term “moral crumple zone” to describe the role assigned to humans who find themselves in the positions that the Three Mile Island operators, the Air France pilots – and the safety driver in the Uber car – occupied. It describes how responsibility for an action may be wrongly attributed to a human being who had limited control over the behaviour of an automated or autonomous system.

“While the crumple zone in a car is meant to protect the human driver,” she writes, “the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator. What is unique about the concept of a moral crumple zone is that it highlights how structural features of a system and the media’s portrayal of accidents may inadvertently take advantage of human operators (and their tendency to become ‘liability sponges’) to fill the gaps in accountability that may arise in the context of new and complex systems.”

Read on

Mainstreaming atrocity

This morning’s Observer column:

The most worrying thought that comes from immersion in accounts of the tech companies’ struggle against the deluge of uploads is not so much that murderous fanatics seek publicity and notoriety from livestreaming their atrocities on the internet, but that astonishing numbers of other people are not just receptive to their messages, but seem determined to boost and amplify their impact by “sharing” them.

And not just sharing them in the sense of pressing the “share” button. What YouTube engineers found was that the deluge contained lots of copies and clips of the Christchurch video that had been deliberately tweaked so that they would not be detected by the company’s AI systems. A simple way of doing this, it turned out, was to upload a video recording of a computer screen taken from an angle. The content comes over loud and clear, but the automated filter doesn’t recognise it.

That there are perhaps tens – perhaps hundreds – of thousands of people across the world who will do this kind of thing is a really scary discovery…

Read on
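
A technical footnote worth adding. The filters those tweaked uploads were evading commonly work by “fingerprinting” known footage with a perceptual hash and checking how far each new upload’s fingerprint falls from it. Below is a minimal sketch of one such scheme – an “average hash” – showing why a straight re-encode gets caught while an angled re-recording of a screen slips through. The filenames are hypothetical and the matchers YouTube and Facebook actually run are proprietary and far more sophisticated; this is only an illustration of the principle.

```python
# A minimal "average hash": shrink a video frame to an 8x8 greyscale
# thumbnail and record, for each pixel, whether it is brighter than the
# frame's mean. Visually similar frames differ in only a few bits.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """64-bit perceptual hash of the image at `path`."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits


def hamming(a: int, b: int) -> int:
    """How many of the 64 bits differ between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical frames: the original video, a straight re-encode of it,
# and a copy made by filming a computer screen at an angle.
original = average_hash("frame_original.png")
reencode = average_hash("frame_reencoded.png")
angled = average_hash("frame_filmed_at_angle.png")

THRESHOLD = 10  # illustrative: within 10 bits of 64 counts as a match

# A re-encode preserves the frame's coarse light/dark structure, so its
# hash stays within the threshold and the filter catches it.
print("re-encode flagged:", hamming(original, reencode) <= THRESHOLD)

# Filming a screen at an angle warps the geometry and lighting, which
# scrambles the thumbnail's bright/dark pattern: the hash lands far
# outside the threshold and the upload sails past the filter.
print("angled copy flagged:", hamming(original, angled) <= THRESHOLD)
```
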

Zuckerberg’s latest ‘vision’

This morning’s Observer column:

Dearly beloved, our reading this morning is taken from the latest Epistle of St Mark to the schmucks – as members of his 2.3 billion-strong Church of Facebook are known. The purpose of the epistle is to outline a new “vision” that St Mark has for the future of privacy, a subject that is very close to his wallet – which is understandable, given that he has acquired an unconscionable fortune from undermining it.

“As I think about the future of the internet,” he writes (revealingly conflating his church with the infrastructure on which it runs), “I believe a privacy-focused communications platform will become even more important than today’s open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.”

Quite so…

Read on

The 5G enigma

This morning’s Observer column:

The dominant company in the market at the moment is Huawei, a $100bn giant which is the world’s largest supplier of telecoms equipment and its second largest smartphone maker. In the normal course of events, therefore, we would expect that the core networks of western mobile operators would have a lot of its kit in them. And initially, that’s what looked like happening. But in recent months someone has pressed the pause button.

The prime mover in this is the US, which has banned government agencies from using Huawei (and ZTE) equipment and called on its allies to do the same. The grounds for this are national security concerns about hidden “backdoors”: it would be risky to have a company so close to the Chinese government building key parts of American critical infrastructure. Last week Huawei filed a lawsuit against the US government over the ban. New Zealand and Australia have obligingly followed suit, blocking the use of Huawei’s equipment in their 5G networks. And last December BT announced that it was even removing Huawei kit from parts of its 4G network.

Other countries – notably Japan and Germany – have proved less compliant; the German Data Commissioner was even tactless enough to point out that “the US itself once made sure that backdoors were built into Cisco hardware”.

The UK’s position is interestingly enigmatic…

Read on

The dark side of recommendation engines

This morning’s Observer column:

My eye was caught by a headline in Wired magazine: “When algorithms think you want to die”. Below it was an article by two academic researchers, Ysabel Gerrard and Tarleton Gillespie, about the “recommendation engines” that are a central feature of social media and e-commerce sites.

Everyone who uses the web is familiar with these engines. A recommendation algorithm is what prompts Amazon to tell me that since I’ve bought Custodians of the Internet, Gillespie’s excellent book on the moderation of online content, I might also be interested in Safiya Umoja Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism and a host of other books about algorithmic power and bias. In that particular case, the algorithm’s guess is accurate and helpful: it informs me about stuff that I should have known about but hadn’t.

Recommendation engines are central to the “personalisation” of online content and were once seen as largely benign…

Read on
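
A footnote for the technically curious. Here is a toy sketch of the “customers who bought X also bought Y” logic behind such suggestions: score every other item by how much its set of buyers overlaps with the buyers of the item just purchased. The purchase data is invented, and plain cosine similarity between items is just one simple approach; Amazon’s production recommender is proprietary and vastly more elaborate.

```python
# Toy item-item recommender: "people who bought this also bought..."
from math import sqrt

# Invented purchase histories: user -> set of books bought.
purchases = {
    "ann": {"Custodians of the Internet", "Algorithms of Oppression"},
    "bob": {"Custodians of the Internet", "Algorithms of Oppression",
            "Weapons of Math Destruction"},
    "carol": {"Custodians of the Internet"},
    "dave": {"A Brief History of Time"},
}


def buyers(item: str) -> set[str]:
    """Everyone whose history contains `item`."""
    return {user for user, items in purchases.items() if item in items}


def cosine(item_a: str, item_b: str) -> float:
    """Overlap between two items' buyer sets, normalised by their sizes."""
    a, b = buyers(item_a), buyers(item_b)
    if not a or not b:
        return 0.0
    return len(a & b) / sqrt(len(a) * len(b))


def recommend(just_bought: str, k: int = 2) -> list[str]:
    """The k items whose buyers overlap most with those of `just_bought`."""
    catalogue = set().union(*purchases.values()) - {just_bought}
    return sorted(catalogue, key=lambda item: cosine(just_bought, item),
                  reverse=True)[:k]


# "Since you bought Gillespie's book, you might also be interested in..."
print(recommend("Custodians of the Internet"))
# -> ['Algorithms of Oppression', 'Weapons of Math Destruction']
```
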

Xi Jinping’s Little Red App

This morning’s Observer column:

We need to update Marx’s famous aphorism that “history repeats itself, the first time as tragedy, the second time as farce”. Version 2.0 reads: history repeats itself, the first time as tragedy, the second time as an app. Readers with long memories will remember Mao Zedong, the chairman (for life) of the Chinese Communist party who, in 1966, launched his Cultural Revolution to preserve Chinese communism by purging remnants of capitalist and traditional elements from Chinese society and reimposing his ideas (aka Maoism) as the dominant ideology within the party. One propaganda aid devised for this purpose was a little red book, printed in the hundreds of millions, entitled Quotations From Chairman Mao Tse-tung.

The “revolution” unleashed chaos in China: millions of citizens were persecuted, suffering outrageous abuses including public humiliation, arbitrary imprisonment, torture, hard labour, sustained harassment, seizure of property and worse…

Read on