Will the GDPR make blockchains illegal in Europe?

Well, well. This is something I hadn’t anticipated:

Under the European Union’s General Data Protection Regulation, companies will be required to completely erase the personal data of any citizen who requests that they do so. For businesses that use blockchain, specifically applications with publicly available data trails such as Bitcoin and Ethereum, truly purging that information could be impossible. “Some blockchains, as currently designed, are incompatible with the GDPR,” says Michèle Finck, a lecturer in EU law at the University of Oxford. EU regulators, she says, will need to decide whether the technology must be barred from the region or the new rules reconfigured to permit an uneasy coexistence.
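The “truly purging that information could be impossible” point follows from how such ledgers are built: each block commits to everything before it via a hash, so erasing or altering one record invalidates every later block. Below is a minimal, hypothetical sketch of that property in Python. It is a toy hash-chained ledger, not Bitcoin’s or Ethereum’s actual data structures, and the helper names (block_hash, build_chain, verify) are illustrative only.

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Hash a block's contents together with the hash of the block before it."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    """Build a toy chain: each block commits to all earlier blocks via its hash."""
    chain, prev = [], "0" * 64  # genesis: no previous block
    for data in records:
        h = block_hash(prev, data)
        chain.append({"prev": prev, "data": data, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Re-derive every hash; an edited or erased block breaks all later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice: order #1", "bob: personal address", "carol: order #2"])
print(verify(chain))   # True: the ledger is internally consistent

chain[1]["data"] = ""  # "erase" Bob's personal data in place
print(verify(chain))   # False: the stored hashes no longer match the chain
```

On a public blockchain the equivalent of this verification runs on thousands of independent nodes, which is why a data subject’s record cannot simply be deleted after the fact.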

What happens in China stays in China. Ask Apple

This morning’s Observer column:

Here’s your starter for 10. Question: Apple’s website contains the following bold declaration: “At Apple we believe privacy is a fundamental human right.” What ancient English adage does this bring to mind? Answer: “Fine words butter no parsnips.” In other words, what matters is not what you say, but what you do.

What brings this to mind is the announcement that from now on, iCloud data generated by Apple users with a mainland Chinese account will be stored and managed by a Chinese data management firm – Guizhou-Cloud Big Data (GCBD). “With effect from 28 February 2018,” the notice reads, “iCloud services associated with your Apple ID will be operated by GCBD. Use of these services and all the data you store with iCloud – including photos, videos, documents and backups – will be subject to the terms and conditions of iCloud operated by GCBD.”

Read on

Enter the GDPR

This morning’s Observer column:

Next year, 25 May looks like being a significant date. That’s because it’s the day that the European Union’s general data protection regulation (GDPR) comes into force. This may not seem like a big deal to you, but it’s a date that is already keeping many corporate executives awake at night. And for those who are still sleeping soundly, perhaps it would be worth checking that their organisations are ready for what’s coming down the line.

First things first. Unlike much of the legislation that emerges from Brussels, the GDPR is a regulation rather than a directive. This means that it becomes law in all EU countries at the same time; a directive, in contrast, allows each country to decide how its requirements are to be incorporated in national laws…

Read on

DeepMind or DeepMine?

This morning’s Observer column:

In July 2015, consultants working at the Royal Free hospital trust in London approached DeepMind, a Google-owned artificial intelligence firm that had no previous experience in healthcare, about developing software based on patient data from the trust. Four months later, the health records of 1.6 million identifiable patients were transferred to servers contracted by Google to process the data on behalf of DeepMind. The basic idea was that the company would create an app, called Streams, to help clinicians manage acute kidney injury (AKI), a serious disease that is linked to 40,000 deaths a year in the UK.

The first most people knew about this exciting new partnership was when DeepMind announced the launch of DeepMind Health on 24 February 2016…

Read on

Corporate candour and public sector cant

The UK Information Commissioner has completed her investigation into the deal between Google DeepMind and the Royal Free Hospital Trust which gave the company access to the health records of 1.6m NHS patients. The Commissioner concluded that:

Royal Free NHS Foundation Trust failed to comply with the Data Protection Act when it provided patient details to Google DeepMind.

The Trust provided personal data of around 1.6 million patients as part of a trial to test an alert, diagnosis and detection system for acute kidney injury.

But an ICO investigation found several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be used as part of the test.

The Trust has been asked to commit to changes ensuring it is acting in line with the law by signing an undertaking.

My Cambridge colleague Julia Powles (now at Cornell) and Hal Hodson of the Economist did a long and thorough investigation of this secret deal (using conventional investigative tools like Freedom of Information requests). This led to an excellent, peer-reviewed article, “Google DeepMind and healthcare in an age of algorithms”, published in the Springer journal Health and Technology in March. In the period up to and following publication, the authors were subjected to pretty fierce pushback from DeepMind. It was asserted, for example, that their article contained significant factual errors, but requests for information about these supposed ‘errors’ were not granted. As an observer of this corporate behaviour I was struck — and puzzled — by the divergence between DeepMind’s high-minded, holier-than-thou corporate self-image and its aggressiveness in public controversy. And I wondered if this was a sign that Google iron had entered DeepMind’s soul. (The company was acquired by the search giant in 2014.)

But now all is sweetness and light, apparently. At any rate, DeepMind’s co-founder Mustafa Suleyman and Dominic King, the Clinical Lead in DeepMind Health, have this morning published a contrite post on the company blog. “We welcome the ICO’s thoughtful resolution of this case”, they write, “which we hope will guarantee the ongoing safe and legal handling of patient data for Streams [the app at the centre of the collaboration between the company and the NHS Trust]”.

Although today’s findings are about the Royal Free, we need to reflect on our own actions too. In our determination to achieve quick impact when this work started in 2015, we underestimated the complexity of the NHS and of the rules around patient data, as well as the potential fears about a well-known tech company working in health. We were almost exclusively focused on building tools that nurses and doctors wanted, and thought of our work as technology for clinicians rather than something that needed to be accountable to and shaped by patients, the public and the NHS as a whole. We got that wrong, and we need to do better.

This is an intelligent and welcome response. Admitting to mistakes is the surest way to learn. But it’s amazing how few corporations and other organisations do it.

When I first read the draft of Julia’s and Hal’s paper, my immediate thought was that the record of errors they had uncovered was not the product of malign intent, but rather a symptom of what happens when two groups of enthusiasts (consultants in the Royal Free; AI geeks in DeepMind) become excited by the potential of machine learning in detecting and treating particular diseases. Each group was unduly overawed by the other, and in their determination to get this exciting partnership rolling they ignored (or perhaps were unaware of) the tedious hurdles that one (rightly) has to surmount if one seeks to use patient data for research. And once they had been caught out, defensive corporate instincts took over, preventing an intelligent response to the researchers’ challenge.

Interestingly, there are intimations of this in today’s DeepMind blog post. For example:

“Our initial legal agreement with the Royal Free in 2015 could have been much more detailed about the specific project underway, as well as the rules we had agreed to follow in handling patient information. We and the Royal Free replaced it in 2016 with a far more comprehensive contract … and we’ve signed similarly strong agreements with other NHS Trusts using Streams.”

“We made a mistake in not publicising our work when it first began in 2015, so we’ve proactively announced and published the contracts for our subsequent NHS partnerships.”

“In our initial rush to collaborate with nurses and doctors to create products that addressed clinical need, we didn’t do enough to make patients and the public aware of our work or invite them to challenge and shape our priorities.”

All good stuff. Now let’s see if they deliver on it.

Their NHS partner, however, is much less contrite, even though it is the focus of the Information Commissioner’s report. The Trust’s mealy-mouthed response says, in part:

“We have co-operated fully with the ICO’s investigation which began in May 2016 and it is helpful to receive some guidance on the issue about how patient information can be processed to test new technology. We also welcome the decision of the Department of Health to publish updated guidance for the wider NHS in the near future.”

This is pure cant. The Trust broke the law. So to say that “we have co-operated fully” and “it is helpful to receive some guidance on the issue about how patient information can be processed” is like a burglar claiming credit for co-operating with the cops and expressing gratitude for their advice on how to break and enter legally next time.

Nothing to hide? But you may still have something to fear.

This morning’s Observer column:

When Edward Snowden first revealed the extent of government surveillance of our online lives, the then foreign secretary, William (now Lord) Hague, immediately trotted out the old chestnut: “If you have nothing to hide, then you have nothing to fear.” This prompted replies along the lines of: “Well then, foreign secretary, can we have that photograph of you shaving while naked?”, which made us laugh, perhaps, but rather diverted us from pondering the absurdity of Hague’s remark. Most people have nothing to hide, but that doesn’t give the state the right to see them as fair game for intrusive surveillance.

During the hoo-ha, one of the spooks with whom I discussed Snowden’s revelations waxed indignant about our coverage of the story. What bugged him (pardon the pun) was the unfairness of having state agencies pilloried, while firms such as Google and Facebook, which, in his opinion, conducted much more intensive surveillance than the NSA or GCHQ, got off scot-free. His argument was that he and his colleagues were at least subject to some degree of democratic oversight, but the companies, whose business model is essentially “surveillance capitalism”, were entirely unregulated.

He was right…

Read on

The privacy vs secrecy question properly framed

This neat formulation from a 2014 essay by Shoshana Zuboff:

We often hear that our privacy rights have been eroded and secrecy has grown. But that way of framing things obscures what’s really at stake. Privacy hasn’t been eroded. It’s been expropriated. The difference in framing provides new ways to define the problem and consider solutions.

In the conventional telling, privacy and secrecy are treated as opposites. In fact, one is a cause and the other is an effect. Exercising our right to privacy leads to choice. We can choose to keep something secret or to share it, but we only have that choice when we first have privacy. Privacy rights confer decision rights. Privacy lets us decide where we want to be on the spectrum between secrecy and transparency in each situation. Secrecy is the effect; privacy is the cause.

I suggest that privacy rights have not been eroded, if anything they’ve multiplied. The difference now is how these rights are distributed. Instead of many people having some privacy rights, nearly all the rights have been concentrated in the hands of a few. On the one hand, we have lost the ability to choose what we keep secret, and what we share. On the other, Google, the NSA, and others in the new zone have accumulated privacy rights. How? Most of their rights have come from taking ours without asking. But they also manufactured new rights for themselves, the way a forger might print currency. They assert a right to privacy with respect to their surveillance tactics and then exercise their choice to keep those tactics secret.

We need more writing like this. On the phony ‘privacy vs security’ question, for example.

As George Lakoff pointed out many years ago (but only right-wingers listened), creative framing is the way to win both arguments and votes.

Amazon’s Echo seems great, but what does it hear?

This morning’s Observer column:

I bought it [the Echo] because it seemed to me that it might be a significant product and I have a policy of never writing about kit that I haven’t paid for myself. Having lived with the Echo for a few weeks I can definitely confirm its significance. It is a big deal, which explains why the company invested so much in it. (It’s said that 1,500 people worked on the project for four years, which sounds implausible until you remember that Apple has 800 people working on the iPhone’s camera alone). Amazon’s boss, Jeff Bezos, may not have bet the ranch on it (he has a pretty big ranch, after all) but the product nevertheless represents a significant investment. And the sales so far suggest that it may well pay off.

Once switched on and hooked up to one’s wifi network, the Echo sits there, listening for its trigger word, “Alexa”. So initially one feels like an idiot saying things such as: “Alexa, play Radio 4” or: “Alexa, who is Kim Kardashian?” (A genuine inquiry this, from a visitor who didn’t know the answer, which duly came in the form of Alexa reading the first lines of the relevant Wikipedia entry.)

Read on

Don’t let WhatsApp nudge you into sharing your data with Facebook

This morning’s Observer column:

When WhatsApp, the messaging app, launched in 2009, it struck me as one of the most interesting innovations I’d seen in ages – for two reasons. The first was that it seemed beautifully designed from the outset: clean, minimalist and efficient. The second was that it had a business model that did not depend on advertising. Instead, users got a year free, after which they paid a modest annual subscription.

Better still, the co-founder, Jan Koum, seemed to have a very healthy aversion to the surveillance capitalism that underpins the vast revenues of Google, Facebook and co, in which they extract users’ personal data without paying for it, and then refine and sell it to advertisers…

Ah yes. That was then. But now…

Read on

Privacy is sooo… yesterday: Google’s Chief Economist

“One easy way to forecast the future is to predict that what rich people have now, middle class people will have in five years, and poor people will have in ten years. It worked for radio, TV, dishwashers, mobile phones, flat screen TV, and many other pieces of technology.

What do rich people have now? Chauffeurs? In a few more years, we’ll all have access to driverless cars. Maids? We will soon be able to get housecleaning robots. Personal assistants? That’s Google Now. This area will be an intensely competitive environment: Apple already has Siri and Microsoft is hard at work at developing their own digital assistant. And don’t forget IBM’s Watson.

Of course there will be challenges. But these digital assistants will be so useful that everyone will want one, and the scare stories you read today about privacy concerns will just seem quaint and old-fashioned.”

Hal Varian, “Beyond Big Data”, NABE Annual Meeting, September 10, 2013, San Francisco.