Apropos my Observer column, this from Frederic Filloux:
I would love to tell my Tesla to come and pick me up at home in Palo Alto and take me to the Sutter Street parking garage in San Francisco, “any level will be fine”, without intervention. But when will we get there?
If you believe Musk, it’s 2020. If you believe Chris Urmson, Google’s Director of Self-Driving Cars from 2013 to late 2016, it’s going to take three decades or more, although it’s possible that Waymo will ignore Page’s wise Level 5 edict and come out with Level 3 or 4 for a client such as Audi, Mercedes, or BMW well before that. Can we learn to codify partial automation the way we codified MPG, speed, and emissions? Personally, I trust we will, perhaps helped by numbers such as fatalities or, less morbidly, frequency of driver intervention.
This morning’s Observer column:
You don’t have to be a psychiatrist to wonder if Elon Musk, the founder of Tesla, is off his rocker. I mean to say, how many leaders of US public companies get into trouble with the US Securities and Exchange Commission for falsely claiming that they have secured funding to take their company private at $420 a share – and then get sued and fined $40m? Or can you imagine another CEO who deals with Wall Street analysts by swatting away questions about his company’s capital requirements as if they were flies. “Excuse me. Next. Next,” he replied to one guy who was pressing him on the subject. “Boring, bonehead questions are not cool. Next?”
The view from Wall Street is that Musk is too volatile to be in charge of a big and potentially important public company. The charitable view is less judgemental: it is that, while he may have a short fuse, he’s also a gifted, visionary disrupter. But even those who take this tolerant view were taken aback when he declared at a recent public event that he could see “one million robo-taxis on the roads by 2020”…
Although I’ve been an early adopter (aka sucker) of tech gadgets for much of my adult life, I’ve generally been slow to upgrade my mobile phones. One factor was that I moved from being on a mobile contract to buying the phones outright and choosing the mobile data deal that suited me best. (I make very few voice calls.) I had an iPhone 4 for years, and when I eventually moved to an iPhone 6 I kept that for years too, reviving it a year ago with a new battery. (It’s the one on the right in the picture.) But in recent years it had become sluggish and I began to find it increasingly hard on my ageing eyesight. I resisted the temptation to move to an iPhone X for various reasons: the outrageous prices, for one; and, more importantly, I don’t like Face ID and find fingerprint authentication very convenient for the few security-conscious services that I use.
So I had more or less resigned myself to soldiering on with the 6. After all, it did the jobs I needed it to do. And if I needed to read, there was always my iPad. But then I had a conversation with a friend who’d also had an iPhone 6 for years and whose circumstances had recently changed. He’s been spending a lot of time in hospital in the last six months, and didn’t want to be lugging around a laptop, or indeed even an iPad. He’d found, though, that it’s very difficult to run a busy life on such a small phone. So he bought a used iPhone 7 Plus on Amazon.
Next time we met, he extolled the virtues of the bigger format. It made it much easier to browse and to use web-forms, he reported. He found it easier to keep on top of his (formidable) email load — which he would normally have managed on a laptop. And the phone was quicker — a lot quicker — than his iPhone 6.
I followed his example and bought an iPhone 7 Plus on Amazon. My conclusion: it was good advice. The phone came with a year’s guarantee. It has a much faster processor. Web browsing is easier. The camera is a lot better. My email response rate has improved. I make fewer typing mistakes. And I’m using my iPad less. There are still things it’s useless for — blogging, for example. But overall, it’s been a revelation. It’ll do me for a few years, I think.
One of the things that cheered me up no end this weekend was the way Uber’s IPO flopped — at least in comparison with the $120B fantasies of the punters who had invested in it on the assumption that it would be the winner-that-took-all in the market for mobility. The company assumed it would be valued at $100B at the IPO, but in fact it wound up at $70B. Which means that a significant number of investors are probably left owning shares that are worth less than they paid for them in more recent funding rounds. Since the Saudi royals are among those investors, it couldn’t have happened to nastier people.
Bruce Schneier has been valiantly going on about this for a while. Once upon a time, digital technology didn’t have many social, political or democratic ramifications. Those days are over. Universities, companies, software engineers and governments need to think about this — and tool up for it. Here’s an excerpt from one of Bruce’s recent posts on the subject:
Technology now permeates society in a way it didn’t just a couple of decades ago, and governments move too slowly to take this into account. That means technologists now are relevant to all sorts of areas that they had no traditional connection to: climate change, food safety, future of work, public health, bioengineering.
More generally, technologists need to understand the policy ramifications of their work. There’s a pervasive myth in Silicon Valley that technology is politically neutral. It’s not, and I hope most people reading this today know that. We built a world where programmers felt they had an inherent right to code the world as they saw fit. We were allowed to do this because, until recently, it didn’t matter. Now, too many issues are being decided in an unregulated capitalist environment where significant social costs are too often not taken into account.
This is where the core issues of society lie. The defining political question of the 20th century was: “What should be governed by the state, and what should be governed by the market?” This defined the difference between East and West, and the difference between political parties within countries. The defining political question of the first half of the 21st century is: “How much of our lives should be governed by technology, and under what terms?” In the last century, economists drove public policy. In this century, it will be technologists.
The future is coming faster than our current set of policy tools can deal with. The only way to fix this is to develop a new set of policy tools with the help of technologists. We need to be in all aspects of public-interest work, from informing policy to creating tools to building the future. The world needs all of our help.
From a remarkable essay about Leonardo da Vinci by historian Ian Goldin in this weekend’s Financial Times, sadly behind a paywall:
“The third and most vital lesson of the Renaissance is that when things change more quickly, people get left behind more quickly. The Renaissance ended because the first era of global commerce and information revolution led to widening uncertainty and anxiety. The printing revolution provided populists with the means to challenge old authorities and channel the discontent that arose from the highly uneven distribution of the gains and losses from newly globalising commerce and accelerating technological change.
The Renaissance teaches us that progress cannot be taken for granted. The faster things change, the greater the risk of people being left behind. And the greater their anger.
Sound familiar? And then…
Renaissance Florence was famously liberal-minded until a loud demagogue filled in the majority’s silence with rage and bombast. The firebrand preacher Girolamo Savonarola tapped into the fear that citizens felt about the pace of change and growing inequality, as well as the widespread anger toward the rampant corruption of the elite. Seizing on the new capacity for cheap print, he pioneered the political pamphlet, offering his followers the prospect of an afterlife in heaven while their opponents were condemned to hell. His mobilisation of indignation — combined with straightforward thuggery — deposed the Medicis, following which he launched a campaign of public purification, symbolised by the burning of books, cosmetics, jewellery, musical instruments and art, culminating in the 1497 Bonfire of the Vanities”.
Now of course history doesn’t really repeat itself. Still… some of this seems eerily familiar.
This morning’s Observer column:
Street View was a product of Google’s conviction that it is easier to ask for forgiveness than for permission, an assumption apparently confirmed by the fact that most jurisdictions seemed to accept the photographic coup as a fait accompli. There was pushback in a few European countries, notably Germany and Austria, with citizens demanding that their properties be blurred out; there was also a row in 2010 when it was revealed that Google had for a time collected and stored data from unencrypted domestic wifi routers. But broadly speaking, the company got away with its coup.
Most of the pushback came from people worried about privacy. They objected to images showing men leaving strip clubs, for example, protesters at an abortion clinic, sunbathers in bikinis and people engaging in, er, private activities in their own backyards. Some countries were bothered by the height of the cameras – in Japan and Switzerland, for example, Google had to lower their height so they couldn’t peer over fences and hedges.
These concerns were what one might call first-order ones, ie worries triggered by obvious dangers of a new technology. But with digital technology, the really transformative effects may be third- or fourth-order ones. So, for example, the internet leads to the web, which leads to the smartphone, which is what enabled Uber. And in that sense, the question with Street View from the beginning was: what will it lead to – eventually?
One possible answer emerged last week…
This morning’s Observer column:
The headline above an essay in a magazine published by the Association for Computing Machinery (ACM) caught my eye. “Facial recognition is the plutonium of AI”, it said. Since plutonium – a by-product of uranium-based nuclear power generation – is one of the most toxic materials known to humankind, this seemed like an alarmist metaphor, so I settled down to read.
The article, by a Microsoft researcher, Luke Stark, argues that facial-recognition technology – one of the current obsessions of the tech industry – is potentially so toxic for the health of human society that it should be treated like plutonium and restricted accordingly. You could spend a lot of time in Silicon Valley before you heard sentiments like these about a technology that enables computers to recognise faces in a photograph or from a camera…
From a National Geographic photograph by Eric Kruszewski.
It’s not all bad news. The wonderful (and, sadly, late) Andrew Fallon made an intensive and comprehensive laser-scan of the entire building some years ago. Alexis Madrigal tells the story here. So a reference blueprint (should that be dataprint?) exists from which restorers can work.
This morning’s Observer column:
This mindset prompts Dr Elish to coin the term “moral crumple zone” to describe the role assigned to humans who find themselves in the positions that the Three Mile Island operators, the Air France pilots – and the safety driver in the Uber car – occupied. It describes how responsibility for an action may be wrongly attributed to a human being who had limited control over the behaviour of an automated or autonomous system.
“While the crumple zone in a car is meant to protect the human driver,” she writes, “the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator. What is unique about the concept of a moral crumple zone is that it highlights how structural features of a system and the media’s portrayal of accidents may inadvertently take advantage of human operators (and their tendency to become “liability sponges”) to fill the gaps in accountability that may arise in the context of new and complex systems.”