Automation is more about tasks than ‘jobs’

This morning’s Observer column:

We are currently going through one of those periodic phases of “automation anxiety” when we become convinced that the robots are coming for our jobs. These fears are routinely pooh-poohed by historians and economists. The historians point out that machines have been taking away jobs since the days of Elizabeth I – who refused to grant William Lee a patent on his stocking frame on the grounds that it would take work away from those who knitted by hand. And while the economists concede that machines do indeed destroy some jobs, they point out that the increased productivity that they enable has generally created more new jobs (and industries) than they displaced.

Faced with this professional scepticism, tech evangelists and doom-mongers fall back on the same generic responses: that historical scepticism is based on the complacent assumption that the past is a reliable guide to the future; and that “this time is different”. And whereas in the past it was lower-skilled work that was displaced, the jobs that will be lost in the coming wave of smart machines are ones that we traditionally regard as “white-collar” or middle-class. And that would be a very big deal, because if there’s no middle class the prospects for the survival of democracy are poor.

What’s striking about this fruitless, ongoing debate is how few participants seem to be interested in the work that people actually do…

Read on

An essential anti-hype kit

From a terrific LSE blog post by Martin Walker.

He proposes seven ways to identify Fintech technologies that do not deserve the hype. It seems to me that most of his questions apply to information technology generally.

  1. The technology claims to solve a problem that did not exist before and was actually created by the nature of the new technology.
  2. A small part of the functionality of an existing system is implemented using the new technology and is claimed as a great success.
  3. No thought has been given to the costs and complexities of integrating the new technology with existing infrastructure.
  4. The technology is new and original but the creators are incapable of explaining how it would be any better at solving real world problems than existing technology.
  5. The technology would fail to meet legal and regulatory requirements if treated on the same basis as existing technologies.
  6. The advocates of the technology claim you “do not need to understand how it works, you just have to believe that it will change the world”.
  7. Criticism or even just questions are dismissed by referring to adoption/hype cycles that show you are going through a period of negativity before ultimate success.

Great stuff.

Corporate candour and public sector cant

The UK Information Commissioner has completed her investigation into the deal between Google DeepMind and the Royal Free NHS Foundation Trust which gave the company access to the health records of 1.6m NHS patients. The Commissioner concluded that:

Royal Free NHS Foundation Trust failed to comply with the Data Protection Act when it provided patient details to Google DeepMind.

The Trust provided personal data of around 1.6 million patients as part of a trial to test an alert, diagnosis and detection system for acute kidney injury.

But an ICO investigation found several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be used as part of the test.

The Trust has been asked to commit to changes ensuring it is acting in line with the law by signing an undertaking.

My Cambridge colleague Julia Powles (now at Cornell) and Hal Hodson of the Economist did a long and thorough investigation of this secret deal (using conventional investigative tools like Freedom of Information requests). This led to the publication of an excellent, peer-reviewed article, “Google DeepMind and healthcare in an age of algorithms”, published in the Springer journal Health and Technology in March. In the period up to and following publication, the authors were subjected to pretty fierce pushback from DeepMind. It was asserted, for example, that their article contained significant factual errors. But requests for information about these supposed ‘errors’ were not granted. As an observer of this corporate behaviour I was struck — and puzzled — by the divergence between DeepMind’s high-minded, holier-than-thou corporate self-image and its aggressiveness in public controversy. And I wondered if this was a sign that the Google iron had entered DeepMind’s soul. (The company was acquired by the search giant in 2014.)

But now all is sweetness and light, apparently. At any rate, DeepMind’s co-founder Mustafa Suleyman and Dominic King, the Clinical Lead in DeepMind Health, have this morning published a contrite post on the company blog. “We welcome the ICO’s thoughtful resolution of this case”, they write, “which we hope will guarantee the ongoing safe and legal handling of patient data for Streams [the codename for the collaboration between the company and the NHS Trust]”.

Although today’s findings are about the Royal Free, we need to reflect on our own actions too. In our determination to achieve quick impact when this work started in 2015, we underestimated the complexity of the NHS and of the rules around patient data, as well as the potential fears about a well-known tech company working in health. We were almost exclusively focused on building tools that nurses and doctors wanted, and thought of our work as technology for clinicians rather than something that needed to be accountable to and shaped by patients, the public and the NHS as a whole. We got that wrong, and we need to do better.

This is an intelligent and welcome response. Admitting to mistakes is the surest way to learn. But it’s amazing how few corporations and other organisations do it.

When I first read the draft of Julia and Hal’s paper, my immediate thought was that the record of errors they had uncovered was not the product of malign intent, but rather a symptom of what happens when two groups of enthusiasts (consultants in the Royal Free; AI geeks in DeepMind) become excited by the potential of machine learning in detecting and treating particular diseases. Each group was unduly overawed by the other, and in their determination to get this exciting partnership rolling they ignored (or perhaps were unaware of) the tedious hurdles that one (rightly) has to surmount if one seeks to use patient data for research. And once they had been caught out, defensive corporate instincts took over, preventing an intelligent response to the researchers’ challenge.

Interestingly, there are intimations of this in today’s DeepMind blog post. For example:

“Our initial legal agreement with the Royal Free in 2015 could have been much more detailed about the specific project underway, as well as the rules we had agreed to follow in handling patient information. We and the Royal Free replaced it in 2016 with a far more comprehensive contract … and we’ve signed similarly strong agreements with other NHS Trusts using Streams.”

“We made a mistake in not publicising our work when it first began in 2015, so we’ve proactively announced and published the contracts for our subsequent NHS partnerships.”

“In our initial rush to collaborate with nurses and doctors to create products that addressed clinical need, we didn’t do enough to make patients and the public aware of our work or invite them to challenge and shape our priorities.”

All good stuff. Now let’s see if they deliver on it.

Their NHS partners, however, are much less contrite — even though they are the focus of the Information Commissioner’s report. The Trust’s mealy-mouthed response says, in part:

“We have co-operated fully with the ICO’s investigation which began in May 2016 and it is helpful to receive some guidance on the issue about how patient information can be processed to test new technology. We also welcome the decision of the Department of Health to publish updated guidance for the wider NHS in the near future.”

This is pure cant. The Trust broke the law. So to say that “we have co-operated fully” and “it is helpful to receive some guidance on the issue about how patient information can be processed” is like a burglar claiming credit for co-operating with the cops and expressing gratitude for their advice on how to break and enter legally next time.

What Steve (Jobs) hath wrought

My Observer column on the tenth anniversary of the iPhone:

The iPhone made Apple the world’s most valuable company (with a market capitalisation of $771.44bn when I last checked) but, in a way, that’s the least interesting thing about it. What’s more significant is that it sparked off the smartphone revolution that changed the way people accessed the internet. Steve Jobs’s seminal insight was that a mobile phone could be a powerful, networked handheld device which could also be used to make voice calls. Turning that insight into a marketable reality was a remarkable achievement – commemorated last week by the Computer History Museum in a fascinating two-hour series of recorded conversations with the engineers who built the phone.

The result of this revolution is a world in which most people carry their internet connection around in their bags and pockets. It’s a world of ubiquitous connectivity in which people are never offline and are increasingly addicted to their devices. It’s got to the point where someone has coined a new term – smombies (zombies on smartphones) – to describe pedestrians who walk into obstacles because they are looking at screens rather than at where they’re going…

Read on

Paranoia in the Valley

My Observer piece about US reaction to the Google fine:

The whopping €2.4bn fine levied by the European commission on Google for abusing its dominance as a search engine has taken Silicon Valley aback. It has also reignited American paranoia about the motives of European regulators, whom many Valley types seem to regard as stooges of Mathias Döpfner, the chief executive of German media group Axel Springer, president of the Federation of German Newspaper Publishers and a fierce critic of Google.

US paranoia is expressed in various registers. They range from President Obama’s observation in 2015 that “all the Silicon Valley companies that are doing business there [Europe] find themselves challenged, in some cases not completely sincerely. Because some of those countries have their own companies who want to displace ours”, to the furious off-the-record outbursts from senior tech executives after some EU agency or other has dared to challenge the supremacy of a US-based tech giant.

The overall tenor of these rants (based on personal experience of being on the receiving end) runs as follows. First, you Europeans don’t “get” tech; second, you don’t like or understand innovation; and third, you’re maddened by envy because none of you schmucks has been able to come up with a world-beating tech company…

Read on

Tyler Cowen on the iPhone

Like many others, I’ve been reflecting on the tenth anniversary of the iPhone. The Computer History Museum ran a fascinating two-hour show in which John Markoff talked to some of the people who designed and built the first phone. Tyler Cowen also devoted his latest Bloomberg column to it and, as usual, he’s come up with an angle that I hadn’t thought about much:

First, we’ve learned that, even in this age of bits and bytes, materials innovation still matters. The iPhone is behind the scenes a triumph of mining science, with a wide variety of raw materials and about 34 billion kilograms (75 billion pounds) of mined rock as an input to date, as discussed by Brian Merchant in his new and excellent book “The One Device: The Secret History of the iPhone”. A single iPhone has behind it the production of 34 kilos of gold ore, with 20.5 grams (0.72 ounces) of cyanide used to extract the most valuable parts of the gold.

Especially impressive as a material is the smooth touch-screen, and the user’s ability to make things happen by sliding, swiping, zooming and pinching it — the “multitouch” function. That advance relied upon particular materials, as the screen is chemically strengthened, made scrape-resistant and embedded with sensitive sensors. Multitouch wasn’t new, but Apple understood how to build it into a highly useful product.

Imagine putting together a production system that could make 1.2 billion of these devices.

Who’s missing from the tech industry? Er, women

This morning’s Observer column:

In front of me as I write this is a photograph. It’s an interior shot of one of the buildings on Facebook’s campus in California. It looks as big as an aircraft hangar, except that it has steel pillars at regular intervals. The pillars are labelled to enable people to find their desks. It’s all open-plan: nobody in this building – not even the founder of the company, Mark Zuckerberg – has a private office. And as far as the eye can see are desks with large-screen iMacs and Aeron desk chairs.

The people working at these desks are the folks who write, curate, design and maintain the algorithms that determine what appears in your Facebook newsfeed. I’ve been looking at the picture until my eyes begin to pixelate. What I’ve been trying to determine is how many women there are. I can see only three. So I ask a colleague who has better eyesight. She finds another two. And that’s it: as far as the eye can see, there are only five women in this picture.

Welcome to Silicon Valley, where most of the digital technology that currently dominates our lives is created…

Read on

And while we’re on the subject…

Recode has recently obtained a copy of an email that Uber’s CEO, Travis Kalanick, sent to all his staff before a staff outing in Miami in 2013.

The subject line read: “URGENT, URGENT – READ THIS NOW OR ELSE!!!!!” He also noted at the top: “You better read this or I’ll kick your ass.”

Here’s the gist (from Recode):

Among the dos that Kalanick advised: “Have a great fucking time. This is a celebration! We’ve all earned it.” He also noted that “Miami’s transportation sucks ass,” the first shot in what became a battle to have Uber serve that city.

That was the tame part of the email, which Kalanick actually sent again the next year when there were 1,800 employees at Uber.

The don’ts advice was much more specific, giving information about everything from vomiting (a $200 “puke charge”) to drug use to throwing beer kegs off buildings to, well, proper fornication between employees (and sometimes, apparently, more than one).

Wrote Kalanick: “Do not have sex with another employee UNLESS a) you have asked that person for that privilege and they have responded with an emphatic ‘YES! I will have sex with you’ AND b) the two (or more) of you do not work in the same chain of command. Yes, that means that Travis will be celibate on this trip. #CEOLife #FML.”

FML, in internet slang, means “Fuck my life.” Welcome to Silicon Valley startup culture.

Enough said? If you were a woman, would you want to work in this frat-house culture?

What happened? And how did we get here?

Very thoughtful post by Willard McCarty in the Digital Humanities newsletter:

In the wake of the latest terrorist attack in London, the Scottish novelist and editor Andrew O’Hagan spoke on Radio 4 this morning about the Internet.

He recalled the millenarian hopes for it during his youth and contrasted them with what has become of it in the hands of those with evil intentions. His conclusion (spoken in sorrow) was that “We are not good enough as people to have an unrestricted network”. We need “a battalion of mindful editors” to regulate it, he said.

Perhaps neither seems surprising now; once, as O’Hagan remarked, the Internet seemed to many a cure for the world’s problems, as indeed the telephone did in its early days. But the darkness visible of terrorism isn’t the only sign of the times. I think, for example, of that unmoderated online forum recently shouted down during a discussion of the word ‘motherboard’ and then shut down to figure out where from here. Yes, professionally we live in a sheltered world, but the problems at the root of seemingly minor annoyances are very real — and applicable out there, where people run mortal risks.

Consider that the “battalion of mindful editors” requires the recruitment and training our universities should be able to give, indeed should be giving. But they are crippled, as social anthropologist Marilyn Strathern wrote in 1992, by an Enterprise Culture which “like a slick that smothers everything in shine” gives us workplaces “where students are supposed to mean numbers, public accountability must be interpreted as resource management, and education has to appear as a service for customers”.1


  1. Marilyn Strathern, “Introduction: Artificial Life”, in Reproducing the Future: Anthropology, Kinship and the New Reproductive Technologies (Manchester University Press, 1992), p.8.