Even if you’re not on Facebook, you are still the product

This morning’s Observer column:

The old adage “if the service is free, then you are its product” needs updating. What it signified was that web services (like Facebook, Google, Yahoo et al) that do not charge users make their money by harvesting personal and behavioural data relating to those users and selling that data to advertisers. That’s still true, of course. But a more accurate version of the adage would now read something like this: if you use the web for anything (including paying for stuff) then you are also the product, because your data is being sold on to third parties without your knowledge.

In a way, you probably already knew this. A while back you searched for, say, a digital camera on the John Lewis site. And then you noticed that, wherever you went on the web after that, John Lewis ads for cameras kept appearing on the sites you were visiting. What you were witnessing was the output of a multibillion-dollar industry that operates below the surface of the web. Think of it as the hidden wiring of our networked world. And what it does is track you wherever you go online…

Read on
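That hidden wiring is mundane enough to sketch. Here is a minimal, hypothetical illustration in Python (invented endpoint, not any real ad network’s code) of the third-party “tracking pixel” trick that makes those pursuing camera ads possible:

```python
# A sketch of third-party tracking: thousands of sites embed a one-pixel
# image served from the tracker's domain, so the browser presents the
# tracker's cookie on every page load and one profile accumulates.
import uuid

from flask import Flask, make_response, request

app = Flask(__name__)
profiles = {}  # visitor_id -> pages seen (in memory, for the sketch only)

@app.route("/pixel.gif")
def pixel():
    visitor_id = request.cookies.get("visitor_id") or str(uuid.uuid4())
    # The Referer header tells the tracker which page embedded the pixel.
    profiles.setdefault(visitor_id, []).append(request.headers.get("Referer"))
    resp = make_response(b"GIF89a")  # stands in for a real 1x1 transparent GIF
    # A long-lived cookie scoped to the tracker's domain ties visits together.
    resp.set_cookie("visitor_id", visitor_id, max_age=365 * 24 * 3600)
    return resp
```

Multiply that by every ad exchange and analytics firm on a page and you have the industry the column describes.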

In a national surveillance state, privacy is seen as “a luxury of the guilty”

Terrific piece by Andrew O’Hagan on Edward Snowden and Glenn Greenwald in the London Review of Books.

Sample:

Surveillance in the UK is an implicitly sanctioned habit that has smashed the moral framework of journalism. Protection of sources is not an adornment, not some optional garment worn only when it suits, but a basic necessity in the running of a free press in a fair democracy. Snowden proved that, but not to the satisfaction of Britain’s home affairs establishment, or the police, who like to behave as if all freedoms are optional at the point of delivery. [Alan] Rusbridger recently made the point that source confidentiality is in peril, after the revelation that the Metropolitan Police had spied on the phone records of the political editor of the Sun, Tom Newton Dunn. Snowden might have taught us to expect to be monitored, but his message, that our freedom is being diluted by a manufactured fear of the evil that surveillance ‘protects’ us from, is not being heard. Louder and clearer to many is the message that comes from the security state mind, a suspicion carried on the air like a germ, that certain kinds of journalism, like certain aspects of citizenship, are basically treacherous and a threat to good management. This germ has infected society to such a degree that people don’t notice, they don’t mind, and a great many think it not only permissible but sensible and natural, in a culture of ‘threat’, to imagine that privacy is merely a luxury of the guilty.

And this:

The first thing that amazed me about Julian Assange was how fearful he was – and how right, as it turned out – about the internet being used as a tool to remove our personal freedom. That surprised me, because I’d naively assumed that all hackers and computer nerds were in love with the net. In fact, the smarter ones were suspicious of it and understood all along that it could easily be abused by governments and corporations. The new technology would offer the chance of mass communication and networking like never before, but lurking in all those servers and behind all those cameras was a sinister, surveilling machine of ever growing power. The US government sought omniscience – ‘a system that has as its goal the complete elimination of electronic privacy worldwide’ – and showed by such actions that it considers itself above the prospectus set out in its own constitution. The leaders of the NSA said, ‘collect it all,’ and the people put up with it.

So who still believes that collecting metadata is harmless?

Interesting snippet in the latest newsletter from the Open Rights Group:

It was revealed last week that the Met police accessed the telephone records of The Sun’s Political Editor, Tom Newton Dunn, using a RIPA request.

The case should end any discussion about whether or not metadata reveals anything personal about us: Newton Dunn’s calls, and when and where they were received, were seen as enough to identify the whistleblower who contacted him over the Plebgate scandal.

Journalistic privilege, protected by the Police and Criminal Evidence Act, was circumvented by the use of RIPA. Newton Dunn was not even aware that his records had been accessed until the Met published their report into the Plebgate affair.

When DRIP was announced, Newton Dunn wrote in The Sun that the new powers would give MI5 and cops “crucial access to plotters’ mobile phone records”. UK public authorities use RIPA over 500,000 times a year to access private data. The police refused to answer questions as to how many times they have accessed journalists’ data. When this is happening without our knowledge, we cannot ignore the threat to our civil liberties that data retention poses.

The interesting bit is the fact that the metadata were sufficient to identify a whistleblower. We all knew that, of course, but the official line is still that bulk collection of metadata does not infringe on privacy.
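To see how little is needed, consider a toy sketch in Python with invented data: no call content at all, just who called whom and when, which is exactly the kind of record a RIPA request returns.

```python
# Invented records illustrating why call metadata alone can unmask a source.
from datetime import datetime

call_records = [  # (caller, callee, time of call)
    ("officer_A",   "newton_dunn", datetime(2012, 9, 20, 22, 14)),
    ("colleague_B", "newton_dunn", datetime(2012, 9, 10, 9, 2)),
    ("officer_A",   "newton_dunn", datetime(2012, 9, 21, 7, 45)),
]
story_published = datetime(2012, 9, 21, 12, 0)

# Anyone who phoned the journalist in the two days before publication
# becomes a suspect; no warrant to read a single word was needed.
suspects = {caller for caller, callee, when in call_records
            if callee == "newton_dunn"
            and 0 <= (story_published - when).days < 2}
print(suspects)  # {'officer_A'}
```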

Dave Eggers has seen the future. Well, a possible future anyway…

Yesterday’s Observer column.

Fifteen months have passed since Edward Snowden began to explain to us how our networked world works. During that time there has been much outrage, shock, horror, etc expressed by the media and the tech industry. So far, so predictable. What is much more puzzling is how relatively relaxed the general public appears to be about all this. In Britain, for example, opinion polling suggests that nearly two thirds of the population think that the kind of surveillance revealed by Snowden is basically OK.

To some extent, the level of public complacency/concern is culturally determined. Citizens of Germany, for example…

Read on

Web services are ‘free’, which is why we’re all in chains

This morning’s Observer column.

“Be careful what you wish for,” runs the adage. “You might just get it.” In the case of the internet, or, at any rate, the world wide web, this is exactly what happened. We wanted exciting services – email, blogging, social networking, image hosting – that were “free”. And we got them. What we also got, but hadn’t bargained for, was deep, intensive and persistent surveillance of everything we do online.

We ought to have known that it would happen. There’s no such thing as a free lunch, after all…

Read on

Tor, Taylor Swift and breaking the Kafkaesque spiral

[Image: a target. Photo cc https://secure.flickr.com/photos/comedynose/7865159650]

Ever since the Snowden revelations began I’ve been arguing that Kafka is as good a guide to our surveillance crisis as is Orwell. The reason: one of the triggers that prompts the spooks to take an interest in someone is if that person is using serious tools to protect their privacy. It’s like painting a target on your back.

So if you use PGP to encrypt your email, or Tor for anonymous browsing, then you are likely to be seen as someone who warrants more detailed surveillance. After all, if you’ve nothing to hide… etc.

And there’s no way you would know that you had been selected for special treatment. This sounds like a situation that Kafka would recognise.

Until the other day, I couldn’t think of a way out of this vicious cycle. And then I came across reports (e.g. here) that a musician of whom I’d never heard — electronic music artist Aphex Twin — had announced the details of his new album on a site only accessible through Tor.

This resulted in the page attracting 133,000 views in little over 24 hours. That is within the limits of what Tor can currently handle, but Tor’s executive director, Andrew Lewman, worries that a more mainstream artist could break the system in its current state.

“If tomorrow, Taylor Swift said ‘to all my hundreds of millions of fans, go to this [Tor] address’, it would not work well. We’re into the millions now, and we have a few companies saying ‘we want to put Tor as a privacy mode in our premier products, can you handle the scale of 75-100m devices of users’, and right now the answer is no, we can’t. Not daily.”

This sounds like — and is — a problem. But it’s also an opportunity. Because what we need is for encrypted email and anonymous browsing to become the norm so that the spooks can’t argue that only evil people would resort to using such tools.

And here’s where Aphex Twin and Taylor Swift come in. They have the power to kickstart the mainstreaming of Tor — to make it normal. Of course, for that to be effective, Tor has to be boosted, expanded and securely funded. Just as the big Internet companies have finally realised that they have to chip in and support, for example, the OpenSSL project, so they should now chip in to help build the infrastructure that would enable Tor to become the default way we all do web browsing.
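For what it’s worth, “using Tor” from ordinary software is unglamorous: traffic is simply routed through a local Tor client’s SOCKS proxy. A minimal sketch in Python (assuming a Tor daemon listening on its default port 9050, and the requests[socks] extra installed):

```python
# Route ordinary HTTP requests through a local Tor client's SOCKS proxy.
import requests

proxies = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: resolve DNS via Tor too
    "https": "socks5h://127.0.0.1:9050",
}

# The destination site sees a Tor exit relay's address, not yours.
r = requests.get("https://check.torproject.org/", proxies=proxies)
print("using Tor" if "Congratulations" in r.text else "not using Tor")
```

Making that the default for 75-100m devices is the scaling problem Lewman describes.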

Can Google really keep our email private?

This morning’s Observer column.

So Google has decided to provide end-to-end encryption for any of its Gmail users who wants it. One could ask “what took you so long?” but that would be churlish. (Some of us were unkind enough to suspect that the reluctance might have been due to, er, commercial considerations: after all, if Gmail messages are properly encrypted, then Google’s computers can’t read the content in order to decide what ads to display alongside them.) But let us be charitable and thankful for small mercies. The code for the service is out for testing and won’t be made freely available until it’s passed the scrutiny of the geek community, but still it’s a significant moment, for which we have Edward Snowden to thank.

The technology that Google will use is public key encryption, which has been around for a long time and has been publicly available ever since 1991, when Phil Zimmermann created PGP (which stands for Pretty Good Privacy)…

Read on
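For readers who want the gist: in public key encryption each user has a key pair, and anything locked with the public half can be unlocked only with the private half. A minimal sketch using Python’s cryptography library (this illustrates the principle, not Google’s implementation):

```python
# Public key encryption in miniature: anyone may encrypt with the public
# key, but only the holder of the private key can decrypt.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # safe to publish anywhere

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"Nothing to hide, plenty to protect", oaep)
print(private_key.decrypt(ciphertext, oaep))
```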

LATER Email from Cory Doctorow:

Wanted to say that I think it’s a misconception that Goog can’t do targeted ads alongside encrypted email. Google knows an awful lot about Gmail users: location, browsing history, clicking history, search history. It can also derive a lot of information about a given email from the metadata: sender, CC list, and subject line. All of that will give them tons of ways to target advertising to Gmail users – they’re just subtracting one signal from the overall system through which they make their ad-customization calculations.

So the cost of not being evil is even lower than I had supposed!
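Doctorow’s point is easy to make concrete. A hypothetical sketch (invented message and rules) of how targeting signals survive even when the body is ciphertext:

```python
# Even with the body opaque, the unencrypted headers carry targeting signal.
encrypted_email = {
    "from": "alice@example.com",
    "to": ["bookings@airline.example"],
    "cc": [],
    "subject": "Re: booking confirmation - Lisbon, 12 October",
    "body": "-----BEGIN PGP MESSAGE-----...",  # unreadable to Google
}

def targeting_signals(msg):
    # Correspondents and subject lines are metadata, not content.
    signals = set()
    if any(word in msg["subject"].lower() for word in ("booking", "flight")):
        signals.add("travel")
    if any("airline" in addr for addr in msg["to"] + msg["cc"]):
        signals.add("airfare offers")
    return signals

print(targeting_signals(encrypted_email))  # {'travel', 'airfare offers'}
```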

STILL LATER
This from Business Insider:

Inside the code for Google’s End-to-End email encryption extension for Chrome, there’s a message that should sound very familiar to the NSA: “SSL-added-and-removed-here-;-)”

Followers of this blog will recognise this as a quote from a slide leaked by Edward Snowden.

[Image: the leaked “Google Cloud Exploitation” slide]

This comes from a slide-deck about the ‘Muscular’ program (who thinks up these daft names?), which allowed Britain’s GCHQ intelligence service and the NSA to pull data directly from Google servers outside of the U.S. The cheeky tone of the slide apparently enraged some Google engineers, which I guess explains why a reference to it resides in the Gmail encryption code.

Yay! Gmail to get end-to-end encryption

This has been a long time coming — properly encrypted Gmail — but it’s very welcome. Here’s the relevant extract from the Google security blog:

Today, we’re adding to that list the alpha version of a new tool. It’s called End-to-End and it’s a Chrome extension intended for users who need additional security beyond what we already provide.

“End-to-end” encryption means data leaving your browser will be encrypted until the message’s intended recipient decrypts it, and that similarly encrypted messages sent to you will remain that way until you decrypt them in your browser.

While end-to-end encryption tools like PGP and GnuPG have been around for a long time, they require a great deal of technical know-how and manual effort to use. To help make this kind of encryption a bit easier, we’re releasing code for a new Chrome extension that uses OpenPGP, an open standard supported by many existing encryption tools.

However, you won’t find the End-to-End extension in the Chrome Web Store quite yet; we’re just sharing the code today so that the community can test and evaluate it, helping us make sure that it’s as secure as it needs to be before people start relying on it. (And we mean it: our Vulnerability Reward Program offers financial awards for finding security bugs in Google code, including End-to-End.)

Once we feel that the extension is ready for primetime, we’ll make it available in the Chrome Web Store, and anyone will be able to use it to send and receive end-to-end encrypted emails through their existing web-based email provider.

We recognize that this sort of encryption will probably only be used for very sensitive messages or by those who need added protection. But we hope that the End-to-End extension will make it quicker and easier for people to get that extra layer of security should they need it.
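Under the hood, “end-to-end” is the same public key idea: the recipient’s public key does the locking in the sender’s browser, and only the recipient’s private key can do the unlocking. A sketch of that round trip with the Python pgpy OpenPGP library (illustrative only; Google’s extension is JavaScript running inside Chrome):

```python
# An OpenPGP round trip: encrypt with the recipient's public key,
# decrypt with their private key.
import pgpy
from pgpy.constants import (CompressionAlgorithm, HashAlgorithm, KeyFlags,
                            PubKeyAlgorithm, SymmetricKeyAlgorithm)

# The recipient generates a keypair and publishes the public half.
key = pgpy.PGPKey.new(PubKeyAlgorithm.RSAEncryptOrSign, 2048)
uid = pgpy.PGPUID.new("Alice", email="alice@example.com")
key.add_uid(uid,
            usage={KeyFlags.EncryptCommunications},
            hashes=[HashAlgorithm.SHA256],
            ciphers=[SymmetricKeyAlgorithm.AES256],
            compression=[CompressionAlgorithm.ZLIB])

# Sender's side: encrypt before anything leaves the browser.
ciphertext = key.pubkey.encrypt(pgpy.PGPMessage.new("See you at noon."))

# Recipient's side: only the private key recovers the plaintext.
print(key.decrypt(ciphertext).message)
```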

Google privacy ruling: the thin end of a censorship wedge?

This morning’s Observer column.

Sooner or later, every argument about regulation of the internet comes down to the same question: is this the thin end of the wedge or not? We saw a dramatic illustration last week when the European court of justice handed down a judgment on a case involving a Spanish lawyer, one Mario Costeja González, who objected that entering his name in Google’s search engine brought up embarrassing information about his past (that one of his properties had been the subject of a repossession)…

Read on

LATER

Three interesting — and usefully diverse — angles on the ECJ decision.

  • Daithí Mac Síthigh points out that the decision highlights the tensions between EU and US law. “This is particularly significant”, he says, “given that most of the major global players in social networking and e-commerce operate out of the US but also do a huge amount of business in Europe.”

Google’s first line of defence was that its activities were not subject to the Data Protection Directive. It argued that its search engine was not a business carried out within the European Union. Google Spain was clearly subject to EU law, but Google argued that it sells advertising rather than running a search engine.

The court was asked to consider whether Google might be subject to the Directive under various circumstances. A possible link was the use of equipment in the EU, through gathering information from EU-based web servers or using relevant domain names (such as google.es). Another suggestion was that a case should be brought at its “centre of gravity”, taking into account where the people making the requests to delete data have their interests.

But the court never reached these points. Instead, it found the overseas-based search engine and the Spain-based seller of advertising were “inextricably linked”. As such, Google was found to be established in Spain and subject to the directive.

The message being sent was an important one. Although this ruling is specific to the field of data protection, it suggests that if you want to do business in the EU, a corporate structure that purports to shield your activities from EU law will not necessarily protect you from having to comply with local legislation. This may explain the panicked tone of some of the reaction to the decision.

  • In an extraordinary piece, “Right to Forget a Genocide”, Zeynep Tufekci muses about how (Belgian) colonial imposition of ID cards on Rwandan citizens was instrumental in facilitating genocide.

It may seem like an extreme jump, from drunken adolescent photos to genocide and ethnic cleansing, but the shape, and filters, of a society’s memory is always more than just about individual embarrassment or advancement. What we know about people, and how easily we can identify or classify them, is consequential far beyond jobs and dates, and in some contexts may make the difference between life and death.

“Practical obscurity”—the legal term for information that was available, but not easily accessible—has died in most rich countries within just about a decade. Court records and criminal histories, which were only accessible to the highly-motivated, are now there at the click of a mouse. Further, what is “less obscure” has greatly expanded: using our online data, algorithms can identify information about a person, such as sexual orientation and political affiliation, even if that person never disclosed them.

In that context, take Rwanda, a country many think about in conjunction with the horrific genocide 20 years ago during which more than 800,000 people were killed—in just about one hundred days. Often, stories of ethnic cleansing and genocide get told in a context of “ancient hatreds,” but the truth of it is often much uglier, and much less ancient. It was the brutal colonizer of Rwanda, Belgium, that imposed strict ethnicity-based divisions in a place where identity tended to be more fluid and mixed. Worse, it imposed a national ID system that identified each person as belonging to Hutu, Tutsi or Twa, forever freezing them in that place. [For a detailed history of the construction of identity in Rwanda read this book, and for the conduct of colonial Belgium, Rwanda’s colonizer, read this one.]

A few years before the genocide, some NGOs had urged that Rwanda “forget” ethnicity, erasing it from ID cards.

They were not listened to.

During the genocide, it was those ID cards that were asked for at each checkpoint, and it was those ID cards that identified the Tutsis, most of whom were slaughtered on the spot. The ID cards closed off any avenue of “passing” a checkpoint. Ethnicity, a concept that did not at all fit neatly into the region’s complex identity configuration, became the deadly division that underlined one of the 20th century’s worst moments. The ID cards doomed and fueled the combustion of mass murder.

  • Finally, there’s a piece in Wired by Julia Powles arguing that “The immediate reaction to the decision has been, on the whole, negative. At best, it is reckoned to be hopelessly unworkable. At worst, critics pan it as censorship. While there is much to deplore, I would argue that there are some important things we can gain from this decision before casting it roughly aside.”

What this case should ideally provoke is an unflinching reflection on our contemporary digital reality of walled gardens, commercial truth engines, and silent stewards of censorship. The CJEU is painfully aware of the impact of search engines (and ‘The’ search engine, in particular). But we as a society should think about the hard sociopolitical problems that they pose. Search engines are catalogues, or maps, of human knowledge, sentiments, joys, sorrows, and venom. Silently, with economic drivers and unofficial sanction, they shape our lives and our interactions.

The fact of the matter here is that if there is anyone that is up to the challenge of respecting this ruling creatively, Google is. But if early indications are anything to go by, there’s a danger that we’ll unwittingly save Google from having to do so, either through rejecting the decision in practical or legal terms; through allowing Google to retreat “within the framework of their responsibilities, powers and capabilities” (which could have other unwanted effects and unchecked power, by contrast with transparent legal mechanisms); or through working the “right to be forgotten” out of law through the revised Data Protection Regulation, all under the appealing but ultimately misguided banner of preventing censorship.

There is, Powles argues, a possible technical fix for this — implementation of a ‘right to reply’ in search engine results.

An all-round better solution than “forgetting”, “erasure”, or “take-down”, with all of the attendant issues with free speech and the rights of other internet users, is a “right to reply” within the notion of “rectification”. This would be a tech-enabled solution: a capacity to associate metadata, perhaps in the form of another link, to any data that is inaccurate, out of date, or incomplete, so that the individual concerned can tell the “other side” of the story.

We have the technology to implement such solutions right now. In fact, we’ve done a mock-up envisaging how such an approach could be implemented.

Search results could be tagged to indicate that a reply has been lodged, much as we see with sponsored content on social media platforms. Something like this, for example:

[Image: mock-up of a search result tagged to show that a reply has been lodged]
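To make the idea concrete, here is a hypothetical sketch of what “associating metadata” with a result might look like in practice (all names and URLs invented):

```python
# A "right to reply" as data: instead of erasing a result, attach the
# data subject's response to it as metadata the engine can display.
search_result = {
    "url": "https://news.example/1998/repossession-notice",
    "title": "Property repossession notices",
    "snippet": "...auction of a property owned by Mario Costeja González...",
}

reply = {
    "subject": "Mario Costeja González",
    "statement": "This debt was settled in full shortly afterwards.",
    "reply_url": "https://replies.example/costeja",  # invented registry
    "lodged": "2014-05-13",
}

# The engine would render the result with a visible "reply lodged" tag,
# much as sponsored content is labelled today.
search_result["right_of_reply"] = reply
```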

(Thanks to Charles Arthur for the Tufekci and Powles links.)