Thursday 26 March, 2020

This blog is now also available as a once-a-day email. If you think this might work better for you why not subscribe here? (It’s free and there’s a 1-click unsubscribe if you subsequently decide you need to prune your inbox!) One email a day, in your inbox at 07:00 every morning.


Quote of the Day

“Those who would give up essential liberty to purchase a little temporary safety deserve neither.”

  • Benjamin Franklin

Should we cross the privacy Rubicon? Will we?

Maciej Ceglowski, a great privacy campaigner and one of the best online essayists around (and also proprietor of Pinboard.in, the best bookmarking site on the Internet), uses the Franklin quote above in a sobering reflection on the Coronavirus pandemic. His essay is prompted by the ongoing (and intensifying) debate about whether the current ‘lockdown+isolation’ strategy for ‘flattening the curve’ of infections is economically, psychologically and politically sustainable.

Everybody knows that even when we’re through the initial crisis the disease will not have been eliminated. It’ll be back in waves, hopefully of lesser intensity and reach, and each wave may necessitate a briefer return to another lockdown regime. So the economic and other consequences could continue, perhaps for 18 months or more.

What should we do, therefore, after the initial outbreak is contained — or at least rendered manageable in terms of health-service capacity? Ideally, we should have a managed return to work with people who have had the virus and recovered from it (and thereby acquired immunity) able to work normally. But we can’t do that safely unless we have a vaccine (months away at best, a year at worst) or a way of identifying who is infectious and capable of infecting others.

There’s already a strategy for doing the latter task: test extensively and track the contacts of those who are infectious. That’s what South Korea, Taiwan and China seem to have been able to do. But in the UK we’re still ages away from being able to roll out a large-scale testing programme. (Getting testing up and running at scale is pretty challenging.) We will get there eventually, though, and when we do the next task will be to track the contacts of every infected person.

Trouble is: that kind of tracking is incredibly labour-intensive. But, says Ceglowski,

we could automate large parts of it with the technical infrastructure of the surveillance economy. It would not take a great deal to turn the ubiquitous tracking tools that follow us around online into a sophisticated public health alert system.

Every one of us now carries a mobile tracking device that leaves a permanent trail of location data. This data is individually identifiable, precise to within a few meters, and is harvested by a remarkable variety of devices and corporations, including the large tech companies, internet service providers, handset manufacturers, mobile companies, retail stores.

Anyone who has this data can retroactively reconstruct the movements of a person of interest, and track who they have been in proximity to over the past several days. Such a data set, combined with aggressive testing, offers the potential to trace entire chains of transmission in real time, and give early warning to those at highest risk.
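The core of what Ceglowski describes is computationally trivial, which is rather the point. As a minimal sketch (the data layout and thresholds here are my assumptions, not anything from his essay), here is roughly what “track who they have been in proximity to” amounts to, given a pile of timestamped location pings:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6371000  # mean Earth radius in metres
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def close_contacts(pings, case_id, radius_m=5, window_s=900):
    """Return the IDs of anyone seen within radius_m metres and
    window_s seconds of any ping belonging to case_id.

    pings: iterable of (user_id, unix_timestamp, lat, lon) tuples.
    (A hypothetical schema; real harvested location data is messier.)
    """
    case_pings = [p for p in pings if p[0] == case_id]
    contacts = set()
    for _, t0, lat0, lon0 in case_pings:
        for uid, t, lat, lon in pings:
            if uid == case_id:
                continue
            if abs(t - t0) <= window_s and haversine_m(lat0, lon0, lat, lon) <= radius_m:
                contacts.add(uid)
    return contacts
```

A real system would index by time and space rather than brute-force every pair, but the sketch illustrates why the existing ad-tracking data is sufficient: once the pings exist, finding contacts is a join, not a research problem.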

So it’s possible to do it. Doing so will probably enable a return to some kind of economic normality. But if we use the technology for this purpose we will have crossed the Rubicon into nightmare territory. And if we do cross, there’s unlikely to be a way back — because once states have acquired access to this technology, they rarely give it up. So will we do it?

Ceglowski thinks that we should. After all, he says,

This proposal doesn’t require us to give up any liberty that we didn’t already sacrifice long ago, on the altar of convenience. The terrifying surveillance infrastructure this project requires exists and is maintained in good working order in the hands of private industry, where it is entirely unregulated and is currently being used to try to sell people skin cream. Why not use it to save lives?

The most troubling change this project entails is giving access to sensitive location data across the entire population to a government agency. Of course that is scary, especially given the track record of the Trump administration. The data collection would also need to be coercive (that is, no one should be able to opt out of it, short of refusing to carry a cell phone). As with any government surveillance program, there would be the danger of a ratchet effect, where what is intended as an emergency measure becomes the permanent state of affairs, like happened in the United States in the wake of the 2001 terrorist attacks.

“I am a privacy activist”, Ceglowski writes, “typing this through gritted teeth”.

But I am also a human being like you, watching a global calamity unfold around us. What is the point of building this surveillance architecture if we can’t use it to save lives in a scary emergency like this one?

Great essay. Worth reading in full.


Quarantine diary — Day 5

Link


Why do people keep buying Amazon Ring?

I’ve got a good friend who has an Amazon doorbell and seems tickled pink by it. Normally, this would worry me, but he’s a sophisticated techie and I’m sure his security precautions are good.

But that’s definitely not true for most of the thousands of people who are buying the devices.

The New York Times has a helpful piece aimed at these neophytes. It opens with some cautionary notes, though:

The internet-connected doorbell gadget, which lets you watch live video of your front porch through a phone app or website, has gained a reputation as the webcam that spies on you and that has failed to protect your data. Yet people keep buying it in droves.

Ring, which is owned by Amazon and based in Santa Monica, Calif., has generated its share of headlines, including how the company fired four employees over the last four years for watching customers’ videos. Last month, security researchers also found that Ring’s apps contained hidden code, which had shared customer data with third-party marketers. And in December, hackers hijacked the Ring cameras of multiple families, using the devices’ speakers to verbally assault some of them.

Sleepwalking into dystopia

This morning’s Observer column:

When the history of our time comes to be written, one of the things that will puzzle historians (assuming any have survived the climate cataclysm) is why we allowed ourselves to sleepwalk into dystopia. Ever since 9/11, it’s been clear that western democracies had embarked on a programme of comprehensive monitoring of their citizenry, usually with erratic and inadequate democratic oversight. But we only began to get a fuller picture of the extent of this surveillance when Edward Snowden broke cover in the summer of 2013.

For a time, the dramatic nature of the Snowden revelations focused public attention on the surveillance activities of the state. In consequence, we stopped thinking about what was going on in the private sector. The various scandals of 2016, and the role that network technology played in the political upheavals of that year, constituted a faint alarm call about what was happening, but in general our peaceful slumbers resumed: we went back to our smartphones and the tech giants continued their appropriation, exploitation and abuse of our personal data without hindrance. And this continued even though a host of academic studies and a powerful book by Shoshana Zuboff showed that, as the cybersecurity guru Bruce Schneier put it, “the business model of the internet is surveillance”.

The mystery is why so many of us are still apparently relaxed about what’s going on…

Read on

DNA databases are special

This morning’s Observer column:

Last week, at a police convention in the US, a Florida police officer revealed he had obtained a warrant to search the GEDmatch database of a million genetic profiles uploaded by users of the genealogy research site. Legal experts said this appeared to be the first time an American judge had approved such a warrant.

“That’s a huge game-changer,” observed Erin Murphy, a law professor at New York University. “The company made a decision to keep law enforcement out and that’s been overridden by a court. It’s a signal that no genetic information can be safe.”

At the end of the cop’s talk, he was approached by many officers from other jurisdictions asking for a copy of the successful warrant.

Apart from medical records, your DNA profile is the most sensitive and personal data imaginable. In some ways, it’s more revealing, because it can reveal secrets you don’t know you’re keeping, such as siblings (and sometimes parents) of whom you were unaware…

Read on

Another reason not to like Facebook Likes

From The Register:

Organisations that deploy Facebook’s ubiquitous “Like” button on their websites risk falling foul of the General Data Protection Regulation following a landmark ruling by the European Court of Justice.

The EU’s highest court has decided that website owners can be held liable for data collection when using the so-called “social sharing” widgets.

The ruling (PDF) states that employing such widgets would make the organisation a joint data controller, along with Facebook – and judging by its recent record, you don’t want to be anywhere near Zuckerberg’s antisocial network when privacy regulators come a-calling.

Well, well.

The privacy paradox

This morning’s Observer column:

A dark shadow looms over our networked world. It’s called the “privacy paradox”. The main commercial engine of this world involves erosion of, and intrusions upon, our privacy. Whenever researchers, opinion pollsters and other busybodies ask people if they value their privacy, they invariably respond with a resounding “yes”. The paradox arises from the fact that they nevertheless continue to use the services that undermine their beloved privacy.

If you want confirmation, then look no further than Facebook. In privacy-scandal terms, 2018 was an annus horribilis for the company. Yet the results show that by almost every measure that matters to Wall Street, it has had a bumper year. The number of daily active users everywhere is up; average revenue per user is up 19% on last year, while overall revenue for the last quarter of 2018 is 30.4% up on the same quarter in 2017. In privacy terms, the company should be a pariah. At least some of its users must be aware of this. But it apparently makes no difference to their behaviour.

For a long time, people attributed the privacy paradox to the fact that most users of Facebook didn’t actually understand the ways their personal information was being appropriated and used…

Read on

Zuckerberg’s latest ‘vision’

This morning’s Observer column:

Dearly beloved, our reading this morning is taken from the latest Epistle of St Mark to the schmucks – as members of his 2.3 billion-strong Church of Facebook are known. The purpose of the epistle is to outline a new “vision” that St Mark has for the future of privacy, a subject that is very close to his wallet – which is understandable, given that he has acquired an unconscionable fortune from undermining it.

“As I think about the future of the internet,” he writes (revealingly conflating his church with the infrastructure on which it runs), “I believe a privacy-focused communications platform will become even more important than today’s open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.”

Quite so…

Read on

Facebook: (yet) another scandalous revelation

If you’re a cynic about corporate power and (lack of) responsibility — as I am — then Facebook is the gift that keeps on giving. Consider this from the NYT this morning:

For years, Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules, according to internal records and interviews.

The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices. They also underscore how personal data has become the most prized commodity of the digital age, traded on a vast scale by some of the most powerful companies in Silicon Valley and beyond.

The deals described in the documents benefited more than 150 companies — most of them tech businesses, including online retailers and entertainment sites, but also automakers and media organizations, and include Amazon, Microsoft and Yahoo. Their applications, according to the documents, sought the data of hundreds of millions of people a month, the records show. The deals, the oldest of which date to 2010, were all active in 2017. Some were still in effect this year.

Is there such a condition as scandal fatigue? If there is, then I’m beginning to suffer from it.

Facebook: another routine scandal

From today’s New York Times:

SAN FRANCISCO — On the same day Facebook announced that it had carried out its biggest purge yet of American accounts peddling disinformation, the company quietly made another revelation: It had removed 66 accounts, pages and apps linked to Russian firms that build facial recognition software for the Russian government.

Facebook said Thursday that it had removed any accounts associated with SocialDataHub and its sister firm, Fubutech, because the companies violated its policies by scraping data from the social network.

“Facebook has reason to believe your work for the government has included matching photos from individuals’ personal social media accounts in order to identify them,” the company said in a cease-and-desist letter to SocialDataHub that was dated Tuesday and viewed by The New York Times.