How Microsoft reinvented itself

This morning’s Observer column:

It may have escaped your attention, but Microsoft recently became the third company in history to reach a valuation of one trillion dollars. To which the standard reaction, I have discovered, is: “Eh? Microsoft!!!” Wasn’t that the boring old monolith fixated on desktop products and operating systems that missed out on the smartphone revolution? The company that Bill Gates used to run before he decided to devote himself full-time to giving his money away? The company whose Exchange Server is the bane of every office worker’s daily grind? The ruthless monopolist that missed the world wide web and then set out to exterminate the one company – Netscape – that hadn’t?

Yes, that Microsoft. Given the company’s history, this is surely the greatest comeback since Lazarus. But with one difference: where Lazarus’s resurrection was (according to the New Testament) instantaneous, Microsoft’s took longer. How this happened is a story that will keep MBA students occupied for decades, but with the benefit of hindsight, we can now see that it has three main strands…

Read on

The privacy paradox

This morning’s Observer column:

A dark shadow looms over our networked world. It’s called the “privacy paradox”. The main commercial engine of this world involves erosion of, and intrusions upon, our privacy. Whenever researchers, opinion pollsters and other busybodies ask people if they value their privacy, they invariably respond with a resounding “yes”. The paradox arises from the fact that they nevertheless continue to use the services that undermine their beloved privacy.

If you want confirmation, then look no further than Facebook. In privacy-scandal terms, 2018 was an annus horribilis for the company. Yet the results show that by almost every measure that matters to Wall Street, it has had a bumper year. The number of daily active users everywhere is up; average revenue per user is up 19% on last year, while overall revenue for the last quarter of 2018 is 30.4% up on the same quarter in 2017. In privacy terms, the company should be a pariah. At least some of its users must be aware of this. But it apparently makes no difference to their behaviour.

For a long time, people attributed the privacy paradox to the fact that most users of Facebook didn’t actually understand the ways their personal information was being appropriated and used…

Read on

StreetView leads us down some unexpected pathways

This morning’s Observer column:

Street View was a product of Google’s conviction that it is easier to ask for forgiveness than for permission, an assumption apparently confirmed by the fact that most jurisdictions seemed to accept the photographic coup as a fait accompli. There was pushback in a few European countries, notably Germany and Austria, with citizens demanding that their properties be blurred out; there was also a row in 2010 when it was revealed that Google had for a time collected and stored data from unencrypted domestic wifi routers. But broadly speaking, the company got away with its coup.

Most of the pushback came from people worried about privacy. They objected to images showing men leaving strip clubs, for example, protesters at an abortion clinic, sunbathers in bikinis and people engaging in, er, private activities in their own backyards. Some countries were bothered by the height of the cameras – in Japan and Switzerland, for example, Google had to lower their height so they couldn’t peer over fences and hedges.

These concerns were what one might call first-order ones, ie worries triggered by obvious dangers of a new technology. But with digital technology, the really transformative effects may be third- or fourth-order ones. So, for example, the internet leads to the web, which leads to the smartphone, which is what enabled Uber. And in that sense, the question with Street View from the beginning was: what will it lead to – eventually?

One possible answer emerged last week…

Read on

Toxic tech?

This morning’s Observer column:

The headline above an essay in a magazine published by the Association for Computing Machinery (ACM) caught my eye. “Facial recognition is the plutonium of AI”, it said. Since plutonium – a by-product of uranium-based nuclear power generation – is one of the most toxic materials known to humankind, this seemed like an alarmist metaphor, so I settled down to read.

The article, by a Microsoft researcher, Luke Stark, argues that facial-recognition technology – one of the current obsessions of the tech industry – is potentially so toxic for the health of human society that it should be treated like plutonium and restricted accordingly. You could spend a lot of time in Silicon Valley before you heard sentiments like these about a technology that enables computers to recognise faces in a photograph or from a camera…

Read on

Finally, a government takes on the tech companies

This morning’s Observer column:

On Monday last week, the government published its long-awaited white paper on online harms. It was launched at the British Library by the two cabinet ministers responsible for it – Jeremy Wright of the Department for Digital, Culture, Media and Sport (DCMS) and the home secretary, Sajid Javid. Wright was calm, modest and workmanlike in his introduction. Javid was, well, more macho. The social media companies had had their chances to put their houses in order. “They failed,” he declared. “I won’t let them fail again.” One couldn’t help feeling that he had one eye on the forthcoming hustings for the Tory leadership.

Nevertheless, this white paper is a significant document…

Read on

Google’s big move into ethics-theatre backfires

This morning’s Observer column:

Given that the tech giants, which have been ethics-free zones from their foundations, owe their spectacular growth partly to the fact that they have, to date, been entirely untroubled either by legal regulation or scruples about exploiting taxation loopholes, this Damascene conversion is surely something to be welcomed, is it not? Ethics, after all, is concerned with the moral principles that affect how individuals make decisions and how they lead their lives.

That charitable thought is unlikely to survive even a cursory inspection of what is actually going on here. In an admirable dissection of the fourth of Google’s “principles” (“Be accountable to people”), for example, Prof David Watts reveals that, like almost all of these principles, it has the epistemological status of pocket lint or those exhortations to be kind to others one finds on evangelical websites. Does it mean accountable to “people” in general? Or just to Google’s people? Or to someone else’s people (like an independent regulator)? Answer comes there none from the code.

Warming to his task, Prof Watts continues: “If Google’s AI algorithms mistakenly conclude I am a terrorist and then pass this information on to national security agencies who use the information to arrest me, hold me incommunicado and interrogate me, will Google be accountable for its negligence or for contributing to my false imprisonment? How will it be accountable? If I am unhappy with Google’s version of accountability, to whom do I appeal for justice?”

Quite so. But then Google goes and doubles down on absurdity with its prestigious “advisory council” that “will consider some of Google’s most complex challenges that arise under our AI Principles, such as facial recognition and fairness in machine learning, providing diverse perspectives to inform our work”…

Read on

After I’d written the column, Google announced that it was dissolving its ethics advisory council. So we had to add this:

Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity.

That still leaves the cynical absurdity of Google’s AI ‘principles’ to be addressed, though.

Moral crumple zones

This morning’s Observer column:

This mindset prompts Dr Elish to coin the term “moral crumple zone” to describe the role assigned to humans who find themselves in the positions that the Three Mile Island operators, the Air France pilots – and the safety driver in the Uber car – occupied. It describes how responsibility for an action may be wrongly attributed to a human being who had limited control over the behaviour of an automated or autonomous system.

“While the crumple zone in a car is meant to protect the human driver,” she writes, “the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator. What is unique about the concept of a moral crumple zone is that it highlights how structural features of a system and the media’s portrayal of accidents may inadvertently take advantage of human operators (and their tendency to become ‘liability sponges’) to fill the gaps in accountability that may arise in the context of new and complex systems.”

Read on

Mainstreaming atrocity

This morning’s Observer column:

The most worrying thought that comes from immersion in accounts of the tech companies’ struggle against the deluge of uploads is not so much that murderous fanatics seek publicity and notoriety from livestreaming their atrocities on the internet, but that astonishing numbers of other people are not just receptive to their messages, but seem determined to boost and amplify their impact by “sharing” them.

And not just sharing them in the sense of pressing the “share” button. What YouTube engineers found was that the deluge contained lots of copies and clips of the Christchurch video that had been deliberately tweaked so that they would not be detected by the company’s AI systems. A simple way of doing this, it turned out, was to upload a video recording of a computer screen taken from an angle. The content comes over loud and clear, but the automated filter doesn’t recognise it.
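The evasion trick described above exploits a general weakness of automated duplicate detection: geometric distortion scrambles a video’s digital “fingerprint” even though a human viewer still sees the same content. Here is a toy illustration of the idea (this is emphatically not YouTube’s actual system, which is far more sophisticated) using a simple “average hash” over a tiny grayscale frame:

```python
def average_hash(pixels):
    """A crude perceptual hash: one bit per pixel, set when that pixel
    is brighter than the frame's mean brightness."""
    total = sum(sum(row) for row in pixels)
    mean = total / (len(pixels) * len(pixels[0]))
    return [1 if p > mean else 0 for row in pixels for p in row]

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x4 grayscale "frame" (brightness values 0-255).
frame = [
    [200, 200,  30,  30],
    [200, 200,  30,  30],
    [ 30,  30, 200, 200],
    [ 30,  30, 200, 200],
]

# Simulate re-filming the screen at an angle with a crude horizontal
# shear: each row is shifted, so brightness lands in different cells.
sheared = [row[i:] + row[:i] for i, row in enumerate(frame)]

original_hash = average_hash(frame)
resubmitted_hash = average_hash(sheared)

# An exact re-upload matches perfectly...
print(hamming(original_hash, average_hash(frame)))  # → 0
# ...but the sheared copy no longer does, so a naive hash-matching
# filter fails to flag it as a duplicate of the banned original.
print(hamming(original_hash, resubmitted_hash))
```

The point of the sketch is only that a filter keyed to the exact pixel layout is blind to a copy whose content is identical to a human eye; real content-matching systems use far more robust fingerprints, but the angled-camera trick shows even those can be defeated.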

That there are perhaps tens – perhaps hundreds – of thousands of people across the world who will do this kind of thing is a really scary discovery…

Read on

Zuckerberg’s latest ‘vision’

This morning’s Observer column:

Dearly beloved, our reading this morning is taken from the latest Epistle of St Mark to the schmucks – as members of his 2.3 billion-strong Church of Facebook are known. The purpose of the epistle is to outline a new “vision” that St Mark has for the future of privacy, a subject that is very close to his wallet – which is understandable, given that he has acquired an unconscionable fortune from undermining it.

“As I think about the future of the internet,” he writes (revealingly conflating his church with the infrastructure on which it runs), “I believe a privacy-focused communications platform will become even more important than today’s open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.”

Quite so…

Read on

The 5G enigma

This morning’s Observer column:

The dominant company in the market at the moment is Huawei, a $100bn giant that is the world’s largest supplier of telecoms equipment and its second largest smartphone maker. In the normal course of events, therefore, we would expect that the core networks of western mobile operators would have a lot of its kit in them. And initially, that’s what looked like happening. But in recent months someone has pressed the pause button.

The prime mover in this is the US, which has banned government agencies from using Huawei (and ZTE) equipment and called on its allies to do the same. The grounds for this are national security concerns about hidden “backdoors”: it would be risky to have a company so close to the Chinese government building key parts of American critical infrastructure. Last week Huawei filed a lawsuit against the US government over the ban. New Zealand and Australia have obligingly followed suit, blocking the use of Huawei’s equipment in their 5G networks. And last December BT announced that it was even removing Huawei kit from parts of its 4G network.

Other countries – notably Japan and Germany – have proved less compliant; the German Data Commissioner was even tactless enough to point out that “the US itself once made sure that backdoors were built into Cisco hardware”.

The UK’s position is interestingly enigmatic…

Read on