Facebook: another routine scandal

From today’s New York Times:

SAN FRANCISCO — On the same day Facebook announced that it had carried out its biggest purge yet of American accounts peddling disinformation, the company quietly made another revelation: It had removed 66 accounts, pages and apps linked to Russian firms that build facial recognition software for the Russian government.

Facebook said Thursday that it had removed any accounts associated with SocialDataHub and its sister firm, Fubutech, because the companies violated its policies by scraping data from the social network.

“Facebook has reason to believe your work for the government has included matching photos from individuals’ personal social media accounts in order to identify them,” the company said in a cease-and-desist letter to SocialDataHub that was dated Tuesday and viewed by The New York Times.

Feeding the crocodile

This morning’s Observer column:

Last week, Kevin Systrom and Mike Krieger, the co-founders of Instagram, announced that they were leaving Facebook, where they had worked since Mark Zuckerberg bought their company six years ago. “We’re planning on taking some time off to explore our curiosity and creativity again,” Systrom wrote in a statement on the Instagram blog. “Building new things requires that we step back, understand what inspires us and match that with what the world needs; that’s what we plan to do.”

Quite so. It’s always refreshing when young millionaires decide to spend more time with their money. (Facebook paid $715m for their little outfit when it acquired it; Instagram had 13 employees at the time.) But to those of us who have an unhealthy interest in what goes on at Facebook, the real question about Systrom’s and Krieger’s departure was: what took them so long?

Read on

Global Warning

I’m reading Nick Harkaway’s new novel, Gnomon, which, like Dave Eggers’s The Circle, provides a gripping insight into our surveillance-driven future.

Before publication, Harkaway wrote an interesting blog post about why he embarked on the book. Here’s an excerpt from that post:

I remember the days.

I remember the halcyon days of 2014, when I started writing Gnomon and I thought I was going to produce a short book (ha ha ha) in a kind of Umberto Eco-Winterson-Borges mode, maybe with a dash of Bradbury and PKD, and it would be about realities and unreliable narrators and criminal angels in prisons made of time, and bankers and alchemists, and it would also be a warning about the dangers of creeping authoritarianism. (And no, you’re right: creatively speaking I had NO IDEA what I was getting myself into.)

I remember the luxury of saying “we must be precautionary about surveillance laws, about human rights violations, because one day the liberal democracies might start electing monsters and making bad pathways, and we’ll want solid protections from our governments’ over-reach.”

Oops.

I remember the halcyon days of April 2016 when I thought I’d missed the boat and I hadn’t written a warning at all, but a sort of melancholic state of the nation, and I really did think things might get better from there. Then Brexit came – I was half expecting that – and then Trump – which I was really not – and now here we are, with the UK boiling as May’s government and Corbyn’s Labour sit on their hands and the clock ticks down and the negotiating table is blank except for a few sheets of crumpled scrap paper, and the only global certainty seems to be that this US administration will try to wreck every decent thing the international community has attempted in my lifetime, with the occasional connivance of our own leaders when they aren’t busy tearing one another to bits.

And now I’m pretty sure I did write a warning after all.

He did.

Google, Facebook and the power to nudge users

This morning’s Observer column:

Thaler and Sunstein describe their philosophy as “libertarian paternalism”. What it involves is a design approach known as “choice architecture” and in particular controlling the default settings at any point where a person has to make a decision.

Funnily enough, this is something that the tech industry has known for decades. In the mid-1990s, for example, Microsoft – which had belatedly realised the significance of the web – set out to destroy Netscape, the first company to create a proper web browser. Microsoft did this by installing its own browser – Internet Explorer – on every copy of the Windows operating system. Users were free to install Netscape, of course, but Microsoft relied on the fact that very few people ever change default settings. For this abuse of its monopoly power, Microsoft was landed with an antitrust suit that nearly resulted in its breakup. But it did succeed in destroying Netscape.

When the EU introduced its General Data Protection Regulation (GDPR) – which seeks to give internet users significant control over uses of their personal data – many of us wondered how data-vampires like Google and Facebook would deal with the implicit threat to their core businesses. Now that the regulation is in force, we’re beginning to find out: they’re using choice architecture to make it as difficult as possible for users to do what is best for them while making it easy to do what is good for the companies.

We know this courtesy of a very useful 43-page report just out from the Norwegian Consumer Council, an organisation funded by the Norwegian government…

Read on

Will the GDPR make blockchains illegal in Europe?

Well, well. This is something I hadn’t anticipated:

Under the European Union’s General Data Protection Regulation, companies will be required to completely erase the personal data of any citizen who requests that they do so. For businesses that use blockchain, specifically applications with publicly available data trails such as Bitcoin and Ethereum, truly purging that information could be impossible. “Some blockchains, as currently designed, are incompatible with the GDPR,” says Michèle Finck, a lecturer in EU law at the University of Oxford. EU regulators, she says, will need to decide whether the technology must be barred from the region or the new rules reconfigured to permit an uneasy coexistence.
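The awkwardness is easy to see in miniature. Here is a minimal Python sketch of a toy hash-linked ledger (not code from Bitcoin or Ethereum, and the record contents are invented for illustration): each block’s hash covers both its own record and the previous block’s hash, so blanking or deleting one person’s data breaks every link that follows it.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Build a toy hash-linked chain from a list of records."""
    chain, prev = [], "0" * 64
    for record in records:
        h = block_hash(record, prev)
        chain.append({"record": record, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Recompute every hash; any altered or erased record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

# Hypothetical records, purely for illustration.
chain = build_chain([
    {"user": "alice", "data": "personal details"},
    {"user": "bob", "data": "more personal details"},
])
print(verify(chain))   # True: the chain is intact

# "Erasing" Alice's personal data invalidates every subsequent block.
chain[0]["record"]["data"] = ""
print(verify(chain))   # False: the links no longer match
```

On a public chain replicated across thousands of nodes there is no administrator who can quietly rewrite all the later blocks, which is why erasure in the GDPR sense is so hard to reconcile with the design.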

What happens in China stays in China. Ask Apple

This morning’s Observer column:

Here’s your starter for 10. Question: Apple’s website contains the following bold declaration: “At Apple we believe privacy is a fundamental human right.” What ancient English adage does this bring to mind? Answer: “Fine words butter no parsnips.” In other words, what matters is not what you say, but what you do.

What brings this to mind is the announcement that from now on, iCloud data generated by Apple users with a mainland Chinese account will be stored and managed by a Chinese data management firm – Guizhou-Cloud Big Data (GCBD). “With effect from 28 February 2018,” the notice reads, “iCloud services associated with your Apple ID will be operated by GCBD. Use of these services and all the data you store with iCloud – including photos, videos, documents and backups – will be subject to the terms and conditions of iCloud operated by GCBD.”

Read on

Enter the GDPR

This morning’s Observer column:

Next year, 25 May looks like being a significant date. That’s because it’s the day that the European Union’s general data protection regulation (GDPR) comes into force. This may not seem like a big deal to you, but it’s a date that is already keeping many corporate executives awake at night. And for those who are still sleeping soundly, perhaps it would be worth checking that their organisations are ready for what’s coming down the line.

First things first. Unlike much of the legislation that emerges from Brussels, the GDPR is a regulation rather than a directive. This means that it becomes law in all EU countries at the same time; a directive, in contrast, allows each country to decide how its requirements are to be incorporated in national laws…

Read on

DeepMind or DeepMine?

This morning’s Observer column:

In July 2015, consultants working at the Royal Free hospital trust in London approached DeepMind, a Google-owned artificial intelligence firm that had no previous experience in healthcare, about developing software based on patient data from the trust. Four months later, the health records of 1.6 million identifiable patients were transferred to servers contracted by Google to process the data on behalf of DeepMind. The basic idea was that the company would create an app, called Streams, to help clinicians manage acute kidney injury (AKI), a serious disease that is linked to 40,000 deaths a year in the UK.

The first most people knew about this exciting new partnership was when DeepMind announced the launch of DeepMind Health on 24 February 2016…

Read on

Corporate candour and public sector cant

The UK Information Commissioner has completed her investigation into the deal between Google DeepMind and the Royal Free Hospital Trust, which gave the company access to the health records of 1.6m NHS patients. The Commissioner concluded that:

Royal Free NHS Foundation Trust failed to comply with the Data Protection Act when it provided patient details to Google DeepMind.

The Trust provided personal data of around 1.6 million patients as part of a trial to test an alert, diagnosis and detection system for acute kidney injury.

But an ICO investigation found several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be used as part of the test.

The Trust has been asked to commit to changes ensuring it is acting in line with the law by signing an undertaking.

My Cambridge colleague Julia Powles (now at Cornell) and Hal Hodson of The Economist did a long and thorough investigation of this secret deal (using conventional investigative tools like Freedom of Information requests). This led to the publication of an excellent, peer-reviewed article on “Google DeepMind and healthcare in an age of algorithms”, published in the Springer journal Health and Technology in March. In the period up to and following publication, the authors were subjected to pretty fierce pushback from DeepMind. It was asserted, for example, that their article contained significant factual errors. But requests for information about these supposed ‘errors’ were not granted. As an observer of this corporate behaviour I was struck — and puzzled — by the divergence between DeepMind’s high-minded, holier-than-thou corporate self-image and its aggressiveness in public controversy. And I wondered if this was a sign that Google iron had entered DeepMind’s soul. (The company was acquired by the search giant in 2014.)

But now all is sweetness and light, apparently. At any rate, DeepMind’s co-founder Mustafa Suleyman and Dominic King, the clinical lead in DeepMind Health, have this morning published a contrite post on the company blog. “We welcome the ICO’s thoughtful resolution of this case”, they write, “which we hope will guarantee the ongoing safe and legal handling of patient data for Streams [the app at the centre of the collaboration between the company and the NHS Trust]”.

Although today’s findings are about the Royal Free, we need to reflect on our own actions too. In our determination to achieve quick impact when this work started in 2015, we underestimated the complexity of the NHS and of the rules around patient data, as well as the potential fears about a well-known tech company working in health. We were almost exclusively focused on building tools that nurses and doctors wanted, and thought of our work as technology for clinicians rather than something that needed to be accountable to and shaped by patients, the public and the NHS as a whole. We got that wrong, and we need to do better.

This is an intelligent and welcome response. Admitting to mistakes is the surest way to learn. But it’s amazing how few corporations and other organisations do it.

When I first read the draft of Julia and Hal’s paper, my initial thought was that the record of errors they had uncovered was not the product of malign intent, but rather a symptom of what happens when two groups of enthusiasts (consultants at the Royal Free; AI geeks at DeepMind) become excited by the potential of machine learning for detecting and treating particular diseases. Each group was unduly overawed by the other, and in their determination to get this exciting partnership rolling they ignored (or perhaps were unaware of) the tedious hurdles that one (rightly) has to surmount if one seeks to use patient data for research. And once they had been caught out, defensive corporate instincts took over, preventing an intelligent response to the researchers’ challenge.

Interestingly, there are intimations of this in today’s DeepMind blog post. For example:

“Our initial legal agreement with the Royal Free in 2015 could have been much more detailed about the specific project underway, as well as the rules we had agreed to follow in handling patient information. We and the Royal Free replaced it in 2016 with a far more comprehensive contract … and we’ve signed similarly strong agreements with other NHS Trusts using Streams.”

“We made a mistake in not publicising our work when it first began in 2015, so we’ve proactively announced and published the contracts for our subsequent NHS partnerships.”

“In our initial rush to collaborate with nurses and doctors to create products that addressed clinical need, we didn’t do enough to make patients and the public aware of our work or invite them to challenge and shape our priorities.”

All good stuff. Now let’s see if they deliver on it.

Their NHS partners, however, are much less contrite — even though they are the focus of the Information Commissioner’s report. The Trust’s mealymouthed response says, in part:

“We have co-operated fully with the ICO’s investigation which began in May 2016 and it is helpful to receive some guidance on the issue about how patient information can be processed to test new technology. We also welcome the decision of the Department of Health to publish updated guidance for the wider NHS in the near future.”

This is pure cant. The Trust broke the law. So to say that “we have co-operated fully” and “it is helpful to receive some guidance on the issue about how patient information can be processed” is like a burglar claiming credit for co-operating with the cops and expressing gratitude for their advice on how to break and enter legally next time.

Nothing to hide? But you may still have something to fear.

This morning’s Observer column:

When Edward Snowden first revealed the extent of government surveillance of our online lives, the then foreign secretary, William (now Lord) Hague, immediately trotted out the old chestnut: “If you have nothing to hide, then you have nothing to fear.” This prompted replies along the lines of: “Well then, foreign secretary, can we have that photograph of you shaving while naked?”, which made us laugh, perhaps, but rather diverted us from pondering the absurdity of Hague’s remark. Most people have nothing to hide, but that doesn’t give the state the right to see them as fair game for intrusive surveillance.

During the hoo-ha, one of the spooks with whom I discussed Snowden’s revelations waxed indignant about our coverage of the story. What bugged him (pardon the pun) was the unfairness of having state agencies pilloried, while firms such as Google and Facebook, which, in his opinion, conducted much more intensive surveillance than the NSA or GCHQ, got off scot-free. His argument was that he and his colleagues were at least subject to some degree of democratic oversight, but the companies, whose business model is essentially “surveillance capitalism”, were entirely unregulated.

He was right…

Read on