Privacy: who needs it? Er, Zuckerberg & Co

Who said irony was dead? The tech zillionaires are forever telling us how relaxed their users are about privacy and what is quaintly called “sharing”. But they are not at all blasé when it comes to sharing information about themselves. Google’s Executive Chairman, Eric Schmidt, for example, believes that “privacy is dead”, but went apeshit when some enterprising journalist dug up lots of personal information about him simply by using, er, Google.

And then there’s young Zuckerberg, the Facebook boss, who is likewise relaxed about other people’s privacy, but paranoid about his own. See, for example, this Forbes report on his need to buy up an entire neighbourhood block in Palo Alto to ensure that he isn’t overlooked:

So much for Zuckerberg only making a big digital footprint. Now the online empire maker owns nearly an entire neighborhood block, just because he can.

According to property records, the Facebook CEO has spent $30 million over the past year buying the pricy homes of four of his neighbors. It’s within his right, and within his budget, especially with Facebook stock finally starting to march up in value after its controversial and lackluster IPO.

Now the NYT is reporting that he’s updating a house in San Francisco, where even he might not be able to persuade his neighbours to clear out. But builders and tradesmen working on this nouveau palace find that they have to sign Non-Disclosure Agreements lest the world should know which kind of bidet the infant zillionaire favours.

Getting to bedrock

This morning’s Observer column:

The implication of these latest revelations is stark: the capabilities and ambitions of the intelligence services mean that no electronic communications device can now be regarded as trustworthy. It’s not only your mobile phone that might betray you: your hard disk could harbour a snake in the grass, too.

No wonder Andy Grove, the former boss of Intel, used to say that “only the paranoid survive” in the technology business. Given that we have become totally dependent on his industry’s products, that knowledge may not provide much consolation. But we now know where we stand. And we have Edward Snowden to thank for that.

Read on

Straw and Rifkind had nothing to hide, but…

This morning’s Observer column:

The really sinister thing about the nothing-to-hide argument is its underlying assumption that privacy is really about hiding bad things. As the computer-security guru Bruce Schneier once observed, the nothing-to-hide mantra stems from “a faulty premise that privacy is about hiding a wrong”. But surveillance can have a chilling effect by inhibiting perfectly lawful activities (lawful in democracies anyway) such as free speech, anonymous reading and having confidential conversations.

So the long-term message for citizens of democracies is: if you don’t want to be a potential object of attention by the authorities, then make sure you don’t do anything that might make them – or their algorithms – want to take a second look at you. Like encrypting your email, for example; or using Tor for anonymous browsing. Which essentially means that only people who don’t want to question or oppose those in power are the ones who should be entirely relaxed about surveillance.

We need to reboot the discourse about democracy and surveillance. And we should start by jettisoning the cant about nothing-to-hide. The truth is that we all have things to hide – perfectly legitimately. Just as our disgraced former foreign secretaries had.

Read on

ISC Chairman had “nothing to hide” but still got into trouble

So Sir Malcolm Rifkind has fallen on his sword after a journalistic sting operation recorded him apparently touting for work from a fake Chinese company that supposedly wanted him to join its advisory board. The other former Foreign Secretary, Jack Straw, was similarly embarrassed after he was surreptitiously recorded bragging about the access that his status as a former senior minister granted him. Both men protested vigorously that they had done nothing wrong, which may well be true, at least in the sense that they were adhering to the letter of the rules for public representatives.

What’s interesting about Rifkind’s fall is that he used to be an exponent of the standard mantra about bulk surveillance — “if you have nothing to hide then you have nothing to fear”. Both men insist that they did nothing wrong, but at the same time it’s clear that they have been grievously embarrassed by public exposure of activities that they wanted to keep private. In that sense, they are in the same boat as most citizens. We all do harmless things that we nevertheless regard as private matters which are none of the government’s business. That’s what privacy is all about.

Thinking of Googling for health information? Think again.

Interesting video by Tim Libert, summarising the results of some research he did on the way health information sites (including those run by government agencies) covertly pass information about health-related searches to a host of commercial companies. Libert is a researcher at the University of Pennsylvania. He built a program called webXray to analyze the top 50 search results for nearly 2,000 common diseases (over 80,000 pages total). He found that no fewer than 91% of the pages made third-party requests to outside companies. So if you search for “cold sores,” for instance, and click the WebMD “Cold Sores Topic Overview” link, the site passes your request for information about the disease along to “one or more (and often many, many more) other corporations”.

According to Libert’s research (Communications of the ACM, Vol. 58 No. 3, Pages 68-77), about 70% of the time, the data transmitted “contained information exposing specific conditions, treatments, and diseases.”
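The measurement behind webXray is, at heart, a simple one: load a page and note which outside domains its embedded resources call home to. Here’s a minimal sketch of that idea in Python. It’s a hypothetical simplification — the real tool drives a headless browser and records actual network traffic — and the tracker domains in the example are invented:

```python
# Sketch of webXray-style analysis: given a page's HTML and its host domain,
# list the third-party domains that its embedded resources would be fetched
# from (each such fetch leaks the page URL via the Referer header).
from html.parser import HTMLParser
from urllib.parse import urlparse

class ResourceFinder(HTMLParser):
    """Collects URLs from tags that trigger HTTP requests when rendered."""
    REQUEST_ATTRS = {"img": "src", "script": "src", "iframe": "src", "link": "href"}

    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        wanted = self.REQUEST_ATTRS.get(tag)
        if wanted:
            for name, value in attrs:
                if name == wanted and value:
                    self.urls.append(value)

def third_party_domains(html: str, page_domain: str) -> set:
    """Return domains of embedded resources that differ from the page's own.

    The endswith() check is deliberately naive: it treats any subdomain of
    page_domain as first-party and everything else as third-party.
    """
    finder = ResourceFinder()
    finder.feed(html)
    domains = set()
    for url in finder.urls:
        host = urlparse(url).netloc
        # Relative URLs have no netloc and are same-origin by definition.
        if host and not host.endswith(page_domain):
            domains.add(host)
    return domains

page = """
<html><head>
  <script src="https://ads.tracker.example/beacon.js"></script>
  <link rel="stylesheet" href="/styles/main.css">
</head><body>
  <img src="https://cdn.webmd.com/logo.png">
  <img src="https://analytics.thirdparty.example/pixel.gif?q=cold+sores">
</body></html>
"""
print(sorted(third_party_domains(page, "webmd.com")))
# -> ['ads.tracker.example', 'analytics.thirdparty.example']
```

Run against a real health page, the set of hostnames this returns is exactly the list of companies that learn what condition you were reading about.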

So think twice before consulting Dr Google. Especially if you suspect you have a condition that could affect your insurance rating.

The Telescreen is here

Thinking about getting a ‘smart’ Samsung TV? Think again.

[Image: tweet about Samsung’s ‘smart’ TV privacy policy]

Thanks to Hannes Sjoblad for the tweet.

Footnote: In Orwell’s 1984 there was a ‘telescreen’ in Winston’s apartment.

“Any sound that Winston made, above the level of a very low whisper, would be picked up by it, moreover, so long as he remained within the field of vision which the metal plaque commanded, he could be seen as well as heard. There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork.”

Due warning

“FOR PUBLIC SAFETY REASONS, THIS EMAIL HAS BEEN INTERCEPTED BY YOUR
GOVERNMENT AND WILL BE RETAINED FOR FUTURE ANALYSIS.”

Signature line on a friend’s email messages.

What David Cameron doesn’t get: the difference between privacy and secrecy

My colleague Julia Powles has a terrific essay in Wired on the implications of, and fallout from, the Charlie Hebdo massacre, in which she says this:

Cameron claims that there should be “no safe spaces for terrorists to communicate”. What he expects in technical and legal terms is unclear, but the sentiment is stark: no safe spaces for “them”, means none for us. Security is cast as the ultimate law and first priority, while privacy is something for bad people to hide bad things. In truth, privacy is fundamental to all of us, individually and collectively. It is the foundation of trust, relationships, and intellectual freedom. It is a core tenet of a free and healthy society — security’s ally, not its enemy.

It’s strange how the political establishment in most democracies now seems unable to distinguish between secrecy and privacy. Privacy — as Cory Doctorow observed last week on Radio 4’s Start the Week programme — is the ability to control what other people know about you. It’s the state of being unobserved. Secrecy is the act of keeping things hidden for various reasons, some of which may be legitimate — and some conceivably not. We are all entitled to privacy — it’s a human right. Secrecy is a different thing altogether.

She goes on to remind readers that Cameron’s political assertions

are propped up by a formidable line-up of security officials from MI5, MI6, and the ISC, who have been notably more vocal in the last two weeks than at any moment in the last two years. They echo the tone set by the GCHQ director and Metropolitan police commissioner in November. It is only if we can get at everybody’s communications data, they claim, that we can tackle the terrorist problem. But mass data collection, the necessary precursor to recent and proposed laws, can be shown mathematically to make it more difficult to catch terrorists, plus it has a very significant and irrecoverable environmental cost. It is in clear breach of human rights. It also creates unnecessary, unwanted, and costly data storage — and, with it, new vulnerabilities to malevolent actors that far outnumber plausible terrorist threats. What works, by contrast, is well-resourced, targeted intelligence, complemented by strategies directed at mitigating the causes of disaffection and social unrest.

Well worth reading in full.

So what will it take to wake people up?

At dinner last night I had a long talk with one of my Masters students who is as baffled as I am about why people seem to be so complacent about online surveillance. This morning a colleague sent me a link to this TEDx talk by Mikko Hypponen, a well-known Finnish security expert. It’s a terrific lecture, but one part of it stood out especially for me in the context of last night’s conversation. It concerned an experiment Hypponen and his colleagues ran in London, where they set up a free wi-fi hot-spot that anyone could use after they had clicked to accept the terms & conditions under which the service was offered. One of the terms was this:

[Image: the ‘first-born child’ clause from the hotspot’s terms & conditions]

Every user — every user! — clicked ‘Accept’.

Why ‘cybersecurity’ is such a flawed term

In a sentence: it lumps three very different things — crime, espionage and warfare — under a single heading. And, as I tried to point out in yesterday’s Observer column, instead of making cyberspace more secure many of the activities classified as ‘cybersecurity’ make it less so.

Bruce Schneier has a thoughtful essay on the subject.

Last week we learned about a striking piece of malware called Regin that has been infecting computer networks worldwide since 2008. It’s more sophisticated than any known criminal malware, and everyone believes a government is behind it. No country has taken credit for Regin, but there’s substantial evidence that it was built and operated by the United States.

This isn’t the first government malware discovered. GhostNet is believed to be Chinese. Red October and Turla are believed to be Russian. The Mask is probably Spanish. Stuxnet and Flame are probably from the U.S. All these were discovered in the past five years, and named by researchers who inferred their creators from clues such as who the malware targeted.

I dislike the “cyberwar” metaphor for espionage and hacking, but there is a war of sorts going on in cyberspace. Countries are using these weapons against each other. This affects all of us not just because we might be citizens of one of these countries, but because we are all potentially collateral damage. Most of the varieties of malware listed above have been used against nongovernment targets, such as national infrastructure, corporations, and NGOs. Sometimes these attacks are accidental, but often they are deliberate.

For their defense, civilian networks must rely on commercial security products and services. We largely rely on antivirus products from companies such as Symantec, Kaspersky, and F-Secure. These products continuously scan our computers, looking for malware, deleting it, and alerting us as they find it. We expect these companies to act in our interests, and never deliberately fail to protect us from a known threat.

This is why the recent disclosure of Regin is so disquieting. The first public announcement of Regin was from Symantec, on November 23. The company said that its researchers had been studying it for about a year, and announced its existence because they knew of another source that was going to announce it. That source was a news site, the Intercept, which described Regin and its U.S. connections the following day. Both Kaspersky and F-Secure soon published their own findings. Both stated that they had been tracking Regin for years. All three of the antivirus companies were able to find samples of it in their files since 2008 or 2009.

Yep. Remember that the ostensible mission of these companies is to make cyberspace more secure. By keeping quiet about the Regin threat they did exactly the opposite. So, as Schneier concludes,

Right now, antivirus companies are probably sitting on incomplete stories about a dozen more varieties of government-grade malware. But they shouldn’t. We want, and need, our antivirus companies to tell us everything they can about these threats as soon as they know them, and not wait until the release of a political story makes it impossible for them to remain silent.