So there is a God, after all

One of the things that cheered me up no end this weekend was the way Uber’s IPO flopped — at least in comparison with the $120B fantasies of the punters who had invested in it on the assumption that it would be the winner-that-took-all in the market for mobility. The company assumed it would be valued at $100B at the IPO, but in fact it wound up at $70B. Which means that a significant number of investors are probably left holding shares worth less than they paid for them in recent funding rounds. Since the Saudi royals are among those investors, it couldn’t have happened to nastier people.

The technical is political. Now what?

Bruce Schneier has been valiantly going on about this for a while. Once upon a time, digital technology didn’t have many social, political or democratic ramifications. Those days are over. Universities, companies, software engineers and governments need to think about this — and tool up for it. Here’s an excerpt from one of Bruce’s recent posts on the subject:

Technology now permeates society in a way it didn’t just a couple of decades ago, and governments move too slowly to take this into account. That means technologists now are relevant to all sorts of areas that they had no traditional connection to: climate change, food safety, future of work, public health, bioengineering.

More generally, technologists need to understand the policy ramifications of their work. There’s a pervasive myth in Silicon Valley that technology is politically neutral. It’s not, and I hope most people reading this today know that. We built a world where programmers felt they had an inherent right to code the world as they saw fit. We were allowed to do this because, until recently, it didn’t matter. Now, too many issues are being decided in an unregulated capitalist environment where significant social costs are too often not taken into account.

This is where the core issues of society lie. The defining political question of the 20th century was: “What should be governed by the state, and what should be governed by the market?” This defined the difference between East and West, and the difference between political parties within countries. The defining political question of the first half of the 21st century is: “How much of our lives should be governed by technology, and under what terms?” In the last century, economists drove public policy. In this century, it will be technologists.

The future is coming faster than our current set of policy tools can deal with. The only way to fix this is to develop a new set of policy tools with the help of technologists. We need to be in all aspects of public-interest work, from informing policy to creating tools to building the future. The world needs all of our help.

Yep.

Lessons of history

From a remarkable essay about Leonardo da Vinci by historian Ian Goldin in this weekend’s Financial Times, sadly behind a paywall:

“The third and most vital lesson of the Renaissance is that when things change more quickly, people get left behind more quickly. The Renaissance ended because the first era of global commerce and information revolution led to widening uncertainty and anxiety. The printing revolution provided populists with the means to challenge old authorities and channel the discontent that arose from the highly uneven distribution of the gains and losses from newly globalising commerce and accelerating technological change.

The Renaissance teaches us that progress cannot be taken for granted. The faster things change, the greater the number of people left behind. And the greater their anger.

Sound familiar? And then…

Renaissance Florence was famously liberal-minded until a loud demagogue filled in the majority’s silence with rage and bombast. The firebrand preacher Girolamo Savonarola tapped into the fear that citizens felt about the pace of change and growing inequality, as well as the widespread anger toward the rampant corruption of the elite. Seizing on the new capacity for cheap print, he pioneered the political pamphlet, offering his followers the prospect of an afterlife in heaven while their opponents were condemned to hell. His mobilisation of indignation — combined with straightforward thuggery — deposed the Medicis, following which he launched a campaign of public purification, symbolised by the burning of books, cosmetics, jewellery, musical instruments and art, culminating in the 1497 Bonfire of the Vanities”.

Now of course history doesn’t really repeat itself. Still… some of this seems eerily familiar.

StreetView leads us down some unexpected pathways

This morning’s Observer column:

Street View was a product of Google’s conviction that it is easier to ask for forgiveness than for permission, an assumption apparently confirmed by the fact that most jurisdictions seemed to accept the photographic coup as a fait accompli. There was pushback in a few European countries, notably Germany and Austria, with citizens demanding that their properties be blurred out; there was also a row in 2010 when it was revealed that Google had for a time collected and stored data from unencrypted domestic wifi routers. But broadly speaking, the company got away with its coup.

Most of the pushback came from people worried about privacy. They objected to images showing men leaving strip clubs, for example, protesters at an abortion clinic, sunbathers in bikinis and people engaging in, er, private activities in their own backyards. Some countries were bothered by the height of the cameras – in Japan and Switzerland, for example, Google had to lower them so they couldn’t peer over fences and hedges.

These concerns were what one might call first-order ones, ie worries triggered by obvious dangers of a new technology. But with digital technology, the really transformative effects may be third- or fourth-order ones. So, for example, the internet leads to the web, which leads to the smartphone, which is what enabled Uber. And in that sense, the question with Street View from the beginning was: what will it lead to – eventually?

One possible answer emerged last week…

Read on

Toxic tech?

This morning’s Observer column:

The headline above an essay in a magazine published by the Association for Computing Machinery (ACM) caught my eye. “Facial recognition is the plutonium of AI”, it said. Since plutonium – a by-product of uranium-based nuclear power generation – is one of the most toxic materials known to humankind, this seemed like an alarmist metaphor, so I settled down to read.

The article, by a Microsoft researcher, Luke Stark, argues that facial-recognition technology – one of the current obsessions of the tech industry – is potentially so toxic for the health of human society that it should be treated like plutonium and restricted accordingly. You could spend a lot of time in Silicon Valley before you heard sentiments like these about a technology that enables computers to recognise faces in a photograph or from a camera…

Read on

As she used to be

From a National Geographic photograph by Eric Kruszewski.

Source

It’s not all bad news. The wonderful (and, sadly, late) Andrew Tallon made an intensive and comprehensive laser-scan of the entire building some years ago. Alexis Madrigal tells the story here. So a reference blueprint (should that be dataprint?) exists from which restorers can work.

Moral crumple zones

This morning’s Observer column:

This mindset prompts Dr Elish to coin the term “moral crumple zone” to describe the role assigned to humans who find themselves in the positions that the Three Mile Island operators, the Air France pilots – and the safety driver in the Uber car – occupied. It describes how responsibility for an action may be wrongly attributed to a human being who had limited control over the behaviour of an automated or autonomous system.

“While the crumple zone in a car is meant to protect the human driver,” she writes, “the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator. What is unique about the concept of a moral crumple zone is that it highlights how structural features of a system and the media’s portrayal of accidents may inadvertently take advantage of human operators (and their tendency to become “liability sponges”) to fill the gaps in accountability that may arise in the context of new and complex systems.”

Read on

The ‘horseless carriage’ morphs into the horse

From Kara Swisher:

I will die before I buy another car.

I don’t say that because I am particularly old or sick, but because I am at the front end of one of the next major secular trends in tech. Owning a car will soon be like owning a horse — a quaint hobby, an interesting rarity and a cool thing to take out for a spin on the weekend.

Before you object, let me be clear: I will drive in cars until I die. But the concept of actually purchasing, maintaining, insuring and garaging an automobile in the next few decades?

Finished.

Swisher has form in this area. Many years ago, long before the smartphone, she cancelled her landline phone contract on the grounds that in due course most other people would do so too. (After all, why should phones be tethered to the wall, like goats?) The statistics on how many younger people only have a mobile phone confirm her far-sighted hunch. As far as cars are concerned, though, it’ll probably come down to whether you are an urban or a rural dweller — which partly explains the gilets jaunes crisis in France.

The 5G enigma

This morning’s Observer column:

The dominant company in the market at the moment is Huawei, a $100bn giant which is the world’s largest supplier of telecoms equipment and its second largest smartphone maker. In the normal course of events, therefore, we would expect that the core networks of western mobile operators would have a lot of its kit in them. And initially, that’s what looked like happening. But in recent months someone has pressed the pause button.

The prime mover in this is the US, which has banned government agencies from using Huawei (and ZTE) equipment and called on its allies to do the same. The grounds for this are national security concerns about hidden “backdoors”: it would be risky to have a company so close to the Chinese government building key parts of American critical infrastructure. Last week Huawei filed a lawsuit against the US government over the ban. New Zealand and Australia have obligingly complied with the ban, blocking the use of Huawei’s equipment in 5G networks. And last December BT announced that it was even removing Huawei kit from parts of its 4G network.

Other countries – notably Japan and Germany – have proved less compliant; the German Data Commissioner was even tactless enough to point out that “the US itself once made sure that backdoors were built into Cisco hardware”.

The UK’s position is interestingly enigmatic…

Read on

Quote of the day

“When it’s impossible to distinguish facts from fraud, actual facts lose their power. Dissidents can end up putting their lives on the line to post a picture documenting wrongdoing only to be faced with an endless stream of deliberately misleading claims: that the picture was taken 10 years ago, that it’s from somewhere else, that it’s been doctored.

As we shift from an era when realistic fakes were expensive and hard to create to one where they’re cheap and easy, we will inevitably adjust our norms. In the past, it often made sense to believe something until it was debunked; in the future, for certain information or claims, it will start making sense to assume they are fake. Unless they are verified.”

Zeynep Tufekci