This morning’s Observer column:
A dark shadow looms over our networked world. It’s called the “privacy paradox”. The main commercial engine of this world involves erosion of, and intrusions upon, our privacy. Whenever researchers, opinion pollsters and other busybodies ask people if they value their privacy, they invariably respond with a resounding “yes”. The paradox arises from the fact that they nevertheless continue to use the services that undermine their beloved privacy.
If you want confirmation, then look no further than Facebook. In privacy-scandal terms, 2018 was an annus horribilis for the company. Yet the results show that by almost every measure that matters to Wall Street, it has had a bumper year. The number of daily active users everywhere is up; average revenue per user is up 19% on last year, while overall revenue for the last quarter of 2018 is 30.4% up on the same quarter in 2017. In privacy terms, the company should be a pariah. At least some of its users must be aware of this. But it apparently makes no difference to their behaviour.
For a long time, people attributed the privacy paradox to the fact that most users of Facebook didn’t actually understand the ways their personal information was being appropriated and used…
This morning’s Observer column:
Dearly beloved, our reading this morning is taken from the latest Epistle of St Mark to the schmucks – as members of his 2.3 billion-strong Church of Facebook are known. The purpose of the epistle is to outline a new “vision” that St Mark has for the future of privacy, a subject that is very close to his wallet – which is understandable, given that he has acquired an unconscionable fortune from undermining it.
“As I think about the future of the internet,” he writes (revealingly conflating his church with the infrastructure on which it runs), “I believe a privacy-focused communications platform will become even more important than today’s open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.”
Just the ticket. A nice spoof by the Council of Europe, handed out at the CPDP conference. The pills are, as you’d expect, rather nice mints.
If you’re a cynic about corporate power and (lack of) responsibility — as I am — then Facebook is the gift that keeps on giving. Consider this from the NYT this morning:
For years, Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules, according to internal records and interviews.
The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices. They also underscore how personal data has become the most prized commodity of the digital age, traded on a vast scale by some of the most powerful companies in Silicon Valley and beyond.
The deals described in the documents benefited more than 150 companies — most of them tech businesses, including online retailers and entertainment sites, but also automakers and media organizations, among them Amazon, Microsoft and Yahoo. Their applications sought the data of hundreds of millions of people a month, the records show. The deals, the oldest of which date to 2010, were all active in 2017. Some were still in effect this year.
Is there such a condition as scandal fatigue? If there is, then I’m beginning to suffer from it.
From today’s New York Times:
SAN FRANCISCO — On the same day Facebook announced that it had carried out its biggest purge yet of American accounts peddling disinformation, the company quietly made another revelation: It had removed 66 accounts, pages and apps linked to Russian firms that build facial recognition software for the Russian government.
Facebook said Thursday that it had removed any accounts associated with SocialDataHub and its sister firm, Fubutech, because the companies violated its policies by scraping data from the social network.
“Facebook has reason to believe your work for the government has included matching photos from individuals’ personal social media accounts in order to identify them,” the company said in a cease-and-desist letter to SocialDataHub that was dated Tuesday and viewed by The New York Times.
This morning’s Observer column:
Last week, Kevin Systrom and Mike Krieger, the co-founders of Instagram, announced that they were leaving Facebook, where they had worked since Mark Zuckerberg bought their company six years ago. “We’re planning on taking some time off to explore our curiosity and creativity again,” Systrom wrote in a statement on the Instagram blog. “Building new things requires that we step back, understand what inspires us and match that with what the world needs; that’s what we plan to do.”
Quite so. It’s always refreshing when young millionaires decide to spend more time with their money. (Facebook paid $715m for their little outfit when it acquired it; Instagram had 13 employees at the time.) But to those of us who have an unhealthy interest in what goes on at Facebook, the real question about Systrom’s and Krieger’s departure was: what took them so long?
I’m reading Nick Harkaway’s new novel, Gnomon, which, like Dave Eggers’s The Circle, provides a gripping insight into our surveillance-driven future.
Before publication, Harkaway wrote an interesting blog post about why he embarked on the book. Here’s an excerpt from that post:
I remember the days.
I remember the halcyon days of 2014, when I started writing Gnomon and I thought I was going to produce a short book (ha ha ha) in a kind of Umberto Eco-Winterson-Borges mode, maybe with a dash of Bradbury and PKD, and it would be about realities and unreliable narrators and criminal angels in prisons made of time, and bankers and alchemists, and it would also be a warning about the dangers of creeping authoritarianism. (And no, you’re right: creatively speaking I had NO IDEA what I was getting myself into.)
I remember the luxury of saying “we must be precautionary about surveillance laws, about human rights violations, because one day the liberal democracies might start electing monsters and making bad pathways, and we’ll want solid protections from our governments’ over-reach.”
I remember the halcyon days of April 2016 when I thought I’d missed the boat and I hadn’t written a warning at all, but a sort of melancholic state of the nation, and I really did think things might get better from there. Then Brexit came – I was half expecting that – and then Trump – which I was really not – and now here we are, with the UK boiling as May’s government and Corbyn’s Labour sit on their hands and the clock ticks down and the negotiating table is blank except for a few sheets of crumpled scrap paper, and the only global certainty seems to be that this US administration will try to wreck every decent thing the international community has attempted in my lifetime, with the occasional connivance of our own leaders when they aren’t busy tearing one another to bits.
And now I’m pretty sure I did write a warning after all.
This morning’s Observer column:
Thaler and Sunstein describe their philosophy as “libertarian paternalism”. What it involves is a design approach known as “choice architecture” and in particular controlling the default settings at any point where a person has to make a decision.
Funnily enough, this is something that the tech industry has known for decades. In the mid-1990s, for example, Microsoft – which had belatedly realised the significance of the web – set out to destroy Netscape, the first company to create a proper web browser. Microsoft did this by installing its own browser – Internet Explorer – on every copy of the Windows operating system. Users were free to install Netscape, of course, but Microsoft relied on the fact that very few people ever change default settings. For this abuse of its monopoly power, Microsoft was landed with an antitrust suit that nearly resulted in its breakup. But it did succeed in destroying Netscape.
When the EU introduced its General Data Protection Regulation (GDPR) – which seeks to give internet users significant control over uses of their personal data – many of us wondered how data-vampires like Google and Facebook would deal with the implicit threat to their core businesses. Now that the regulation is in force, we’re beginning to find out: they’re using choice architecture to make it as difficult as possible for users to do what is best for them while making it easy to do what is good for the companies.
We know this courtesy of a very useful 43-page report just out from the Norwegian Consumer Council, an organisation funded by the Norwegian government…
Well, well. This is something I hadn’t anticipated:
Under the European Union’s General Data Protection Regulation, companies will be required to completely erase the personal data of any citizen who requests that they do so. For businesses that use blockchain, specifically applications with publicly available data trails such as Bitcoin and Ethereum, truly purging that information could be impossible. “Some blockchains, as currently designed, are incompatible with the GDPR,” says Michèle Finck, a lecturer in EU law at the University of Oxford. EU regulators, she says, will need to decide whether the technology must be barred from the region or reconfigure the new rules to permit an uneasy coexistence.
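The incompatibility Finck describes follows from how blockchains are constructed: each block commits to its predecessor by including the predecessor’s cryptographic hash, so erasing or altering any record breaks the verification of every block that follows it. Here is a minimal toy sketch of that mechanism in Python — the block structure and helper names are illustrative only, not the actual format used by Bitcoin or Ethereum:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash over a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    """Build a toy chain: each block stores a record plus the previous block's hash."""
    chain, prev = [], "0" * 64
    for record in records:
        block = {"data": record, "prev_hash": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def verify(chain):
    """Check that every block's prev_hash matches the hash of its predecessor."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice's data", "bob's data", "carol's data"])
print(verify(chain))   # the untouched chain verifies

# "Erasing" bob's personal data, as a GDPR request would require,
# invalidates every subsequent block's commitment:
chain[1]["data"] = "ERASED"
print(verify(chain))   # verification now fails
```

This is why, on a public chain whose copies are held by thousands of independent nodes, “truly purging” a record is effectively impossible: honest deletion would require every node to accept a chain that no longer verifies.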