Yesterday I participated in a panel discussion on surveillance at the Cambridge Festival of Ideas. My fellow panellists were the anthropologist Caroline Humphrey, the computer scientist Jon Crowcroft and John Rust, the Director of the university’s Psychometrics Centre. The session was ably chaired by Charles Arthur, who until recently was the Technology Editor of the Guardian and still writes regularly for the paper.
We each gave a short talk and then there was a fairly lively Q&A session with a large audience. Here are the notes for my talk.
Although this is ostensibly about technology, in my opinion it is actually about politics, and therefore about democracy. Here’s why.
Whatever one thinks about Edward Snowden, he deserves respect for revealing to the general public the hidden reality of our networked age — which is that “surveillance is the business model of the Internet”, as Bruce Schneier puts it. The spooks do intensive surveillance without our consent (and, until recently, without most of us knowing). The companies (Google, Facebook et al.) claim that they do it with our consent (all those EULAs we clicked ‘Agree’ to in the distant past) in return for the ‘free’ services that they provide and we apparently crave. What Snowden has shown is the extent to which we have been sleepwalking into a nightmare.
Because I think that the problem is, ultimately, political in origin and nature, demonising the agencies doesn’t address the problem. If they are collecting the whole goddam haystack (and they are), then it’s because of the pressure placed on them by their political masters — the ‘war on terror’, the political pressure to ‘join the dots’ and the injunction (e.g. from Vice President Cheney after 9/11) to ensure that “this must never happen again”. In that sense, the NSA, GCHQ etc. are just rational actors trying to meet impossible political demands.
If there is to be any way out of this nightmare, it lies in effective, muscular, publicly credible and technologically informed democratic oversight. To date, all we have had since 9/11 is what I call oversight theatre. So the existential question for democracies is whether it is possible to do oversight properly and credibly.
One of the most striking aspects of this new ‘national security’ syndrome is the absence of any rational debate about either its effectiveness (does all this haystack-collecting actually work in terms of preventing major terrorist outrages?) or its cost-effectiveness (do we get value for money? And how would we know?). These questions currently seem to be off-limits in our democracies. So we have endless debates about the worth and cost-effectiveness of, say, the proposed high-speed rail line from London to Birmingham, but no such debate about whether the huge sums spent on the NSA or GCHQ are actually delivering value for money. In that context, there’s an interesting paper from the Cato Institute which makes this point well. “Terrorism”, it says, “is a hazard to human life,
and it should be dealt with in a manner similar to that applied to other hazards—albeit with an appreciation for the fact that terrorism often evokes extraordinary fear and anxiety. Although allowing emotion to overwhelm sensible analysis is both understandable and common among ordinary people, it is inappropriate for officials charged with keeping them safe. To do so is irresponsible, and it costs lives.
Risk analysis is an aid to responsible decision making that has been developed, codified, and applied over the past few decades—or in some respects centuries. We deal with four issues central to that approach and apply them to the hazard presented by terrorism: the cost per saved life, acceptable risk, cost–benefit analysis, and risk communication. We also assess the (very limited) degree to which risk analysis has been coherently applied to counterterrorism efforts by the U.S. government in making or evaluating decisions that have cost taxpayers hundreds of billions of dollars.
At present, the process encourages decision making that is exceptionally risk averse. In addition, decision makers appear to be overly fearful about negative reactions to any relaxations of security measures that fail to be cost-effective and also about the consequences of failing to overreact.
If other uses of the funds available would more effectively save lives, a government obliged to allocate money in a manner that best benefits public safety must explain why spending billions of dollars on security measures with very little proven benefit is something other than a reckless waste of resources.”
Our governments have not done this and so far show no inclination to change their ways.
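To make the cost-effectiveness question concrete, here is a back-of-the-envelope sketch of the kind of break-even analysis the Cato paper calls for. The spending figure, the value-of-a-statistical-life figure and the baseline death toll below are all illustrative assumptions on my part, not official statistics:

```python
# Break-even analysis for counterterrorism spending, in the spirit of
# the risk-analysis approach the Cato paper describes.
# All figures are illustrative assumptions, not official statistics.

annual_spending = 75e9              # hypothetical annual outlay, USD
value_of_statistical_life = 7.5e6   # in the range used in US regulatory analysis, USD

# How many deaths would the spending need to avert each year
# for benefits to equal costs?
break_even_lives = annual_spending / value_of_statistical_life
print(f"Break-even: {break_even_lives:,.0f} lives saved per year")
# => Break-even: 10,000 lives saved per year

# Compare with an assumed baseline of terrorism deaths per year:
baseline_deaths = 100               # hypothetical, for illustration only
print(f"The spending must avert {break_even_lives / baseline_deaths:.0f}x "
      f"the assumed baseline toll just to break even")
```

The point is not the particular numbers, which are invented, but that the question is answerable in principle, and that nobody in government appears to be asking it.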
What are the long-term implications of comprehensive surveillance? What happens to human behaviour in a networked goldfish bowl? Psychologists have shown that people’s behaviour changes when they know they are being watched. What happens to entire societies when intensive surveillance becomes absolutely ubiquitous? Here the experience of East Germans or the wretched citizens of North Korea becomes relevant.
Then there’s the mystery of public acceptance of surveillance — at least in some societies. One of the things that really baffles me is why the Snowden revelations have not caused more disquiet. Which of course raises the question of whether there is any real hope of ameliorating the situation in the absence of massive public disquiet. Democracies only change course when there’s a public sense of a major crisis. My gloomy conclusion is that not much is going to change: governments and the security services will see little reason to give ground on this.
I am also puzzled about why there is not more scepticism about the philosophical underpinnings of the “if you have nothing to hide then you have nothing to fear” argument. This seems to me to be pure cant, because what it means is that the State is asserting the right to surveil all of your communications. And the contention that bulk ‘collection’ does not infringe your privacy is bogus for the same reason that Google’s claim that it doesn’t read your mail is bogus: it overlooks the capabilities of the digital technology that both Google and the agencies employ. For without automated pattern-matching and machine learning, the security agencies would not be able to ‘select’ targets for what legal pedants regard as true ‘collection’, namely inspection by a human agent. Related to this is the fact that if, for perfectly legitimate reasons, you take positive steps to protect your communications from official (or any other kind of) snooping by encrypting your email or by using Tor for anonymous browsing, then that is seen as grounds for selecting you for further investigation. So protecting yourself from state surveillance for perfectly innocent reasons becomes grounds for suspicion. This is not so much Orwellian as Kafkaesque.
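To see how little intelligence such ‘selection’ actually requires, here is a minimal sketch of rule-based flagging over communications metadata. The field names, rules and country codes are hypothetical illustrations of the principle; the real selector systems are classified, but the logic revealed in the Snowden documents appears to be of this general kind:

```python
# Minimal sketch of automated 'selection' over bulk-collected metadata.
# Field names and rules are hypothetical. Note that self-protective
# behaviour (encryption, Tor) is itself treated as a selector.

from dataclasses import dataclass

@dataclass
class MetadataRecord:
    sender: str
    recipient: str
    uses_pgp: bool          # message body was PGP-encrypted
    via_tor: bool           # connection arrived via a known Tor exit node
    recipient_country: str

WATCHED_COUNTRIES = {"XX", "YY"}    # placeholder country codes

def select_for_inspection(rec: MetadataRecord) -> list[str]:
    """Return the (hypothetical) reasons a record gets flagged for
    inspection by a human analyst, i.e. true 'collection'."""
    reasons = []
    if rec.uses_pgp:
        reasons.append("encrypted content")
    if rec.via_tor:
        reasons.append("anonymised routing")
    if rec.recipient_country in WATCHED_COUNTRIES:
        reasons.append("destination of interest")
    return reasons

record = MetadataRecord("alice@example.org", "bob@example.net",
                        uses_pgp=True, via_tor=False,
                        recipient_country="XX")
print(select_for_inspection(record))
# => ['encrypted content', 'destination of interest']
```

Note that the first rule flags the encrypted message without any human ever reading it, which is precisely why the ‘we only collect, we don’t read’ defence rings hollow.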
Privacy is both an individual and a social good, yet we treat it as if it were exclusively a private matter. So an individual can ‘trade’ some of her privacy to Google in return for ‘free’ services like Gmail. Google then (machine-)reads her mail in order to target ads at her. But if she writes to someone who has not signed up to Gmail and that person writes back, then his or her email is also read by Google, and his or her privacy has been eroded too. Jon Crowcroft knows a researcher who, for that reason, will blacklist anybody who writes to him from a webmail address.
And then there’s the ultimate question: what will be the political response when, despite all the surveillance, the next terrorist outrage occurs? Because we will have other outrages: after all, the NSA and GCHQ did not see ISIS coming. What then? What will our politicians demand? Even more surveillance? It’s hard to see any logical end-point to this. Or at any rate, any end-point that looks good for democracy.