The significance of WhatsApp encryption

This morning’s Observer column:

In some ways, the biggest news of the week was not the Panama papers but the announcement that WhatsApp was rolling out end-to-end encryption for all its 1bn users. “From now on,” it said, “when you and your contacts use the latest version of the app, every call you make, and every message, photo, video, file and voice message you send, is end-to-end encrypted by default, including group chats.”

This is a big deal because it lifts encryption out of the for-geeks-only category and into the mainstream. Most people who use WhatsApp wouldn’t know a hash function if it bit them on the leg. Although strong encryption has been available to the public ever since Phil Zimmermann wrote and released PGP (Pretty Good Privacy) in 1991, it never realised its potential because the technicalities of setting it up for personal use defeated most lay users.

So the most significant thing about WhatsApp’s innovation is the way it renders invisible all the geekery necessary to set up and maintain end-to-end encryption…

Read on
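For readers who want a sense of what “end-to-end” actually means, here is a minimal sketch using the PyNaCl library. It shows the basic public-key “box” construction that such schemes are built on, not WhatsApp’s actual Signal protocol (which adds key ratcheting and a great deal more); the essential point is that the private keys are generated and kept on the two phones, so the relaying server only ever handles ciphertext.

```python
# Minimal sketch of the public-key "box" idea behind end-to-end messaging.
# Uses the PyNaCl library; this is NOT WhatsApp's actual Signal protocol.
from nacl.public import PrivateKey, Box

alice = PrivateKey.generate()   # generated and kept on Alice's phone
bob = PrivateKey.generate()     # generated and kept on Bob's phone

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"hello, Bob")

# The server relays only ciphertext; it never holds either private key.
plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
assert plaintext == b"hello, Bob"
```

What WhatsApp has done is generate, exchange and verify keys like these automatically, so that none of this machinery is ever visible to the user.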

Why the Apple vs FBI case is important

This morning’s Observer column:

No problem, thought the Feds: we’ll just get a court order forcing Apple to write a special version of the operating system that will bypass this security provision and then download it to Farook’s phone. They got the order, but Apple refused point-blank to comply – on several grounds: since computer code is speech, the order violated the first amendment because it would be “compelled speech”; because being obliged to write the code amounted to “forced labour”, it would also violate the fifth amendment; and it was too dangerous because it would create a backdoor that could be exploited by hackers and nation states and potentially put a billion users of Apple devices at risk.

The resulting public furore offers a vivid illustration of how attempting a reasoned public debate about encryption is like trying to discuss philosophy using smoke signals. Leaving aside the purely clueless contributions from clowns like Piers Morgan and Donald Trump, and the sanctimonious platitudes from Obama downwards about “no company being above the law”, there is an alarmingly widespread failure to appreciate what is at stake here. We are building a world that is becoming totally dependent on network technology. Since there is no possibility of total security in such a world, we have to use any tool that offers at least some measure of protection, for both individual citizens and institutions. In that context, strong encryption along the lines of the stuff that Apple and some other companies are building into their products and services is the only game in town.

Read on

Living with the surveillance state

Point made well by Bill Keller in a thoughtful column:

The danger, it seems to me, is not surveillance per se. We have already decided, most of us, that life on the grid entails a certain amount of intrusion. Nor is the danger secrecy, which, as Posner notes, “is ubiquitous in a range of uncontroversial settings,” a promise the government makes to protect “taxpayers, inventors, whistle-blowers, informers, hospital patients, foreign diplomats, entrepreneurs, contractors, data suppliers and many others.”

The danger is the absence of rigorous, independent regulation and vigilant oversight to keep potential abuses of power from becoming a real menace to our freedom. The founders created a system of checks and balances, but the safeguards have not kept up with technology. Instead, we have an executive branch in a leak-hunting frenzy, a Congress that treats oversight as a form of partisan combat, a political climate that has made “regulation” an expletive and a public that feels a generalized, impotent uneasiness. I don’t think we’re on a slippery slope to a police state, but I think if we are too complacent about our civil liberties we could wake up one day and find them gone — not in a flash of nuclear terror but in a gradual, incremental surrender.

Let’s turn the TalkTalk hacking scandal into a crisis

Yesterday’s Observer column:

The political theorist David Runciman draws a useful distinction between scandals and crises. Scandals happen all the time in society; they create a good deal of noise and heat, but in the end nothing much happens. Things go back to normal. Crises, on the other hand, do eventually lead to structural change, and in that sense play an important role in democracies.

So a good question to ask whenever something bad happens is whether it heralds a scandal or a crisis. When the phone-hacking story eventually broke, for example, many people (me included) thought that it represented a crisis. Now, several years – and a judicial enquiry – later, nothing much seems to have changed. Sure, there was a lot of sound and fury, but it signified little. The tabloids are still doing their disgraceful thing, and Rebekah Brooks is back in the saddle. So it was just a scandal, after all.

When the TalkTalk hacking story broke and I heard the company’s chief executive say in a live radio interview that she couldn’t say whether the customer data that had allegedly been stolen had been stored in encrypted form, the Runciman question sprang immediately to mind. That the boss of a communications firm should be so ignorant about something so central to her business certainly sounded like a scandal…

Read on

LATER: Interesting blog post by Bruce Schneier. He opens with an account of how the CIA’s Director and the software developer Grant Blakeman had their email accounts hacked. Then,

Neither of them should have been put through this. None of us should have to worry about this.

The problem is a system that makes this possible, and companies that don’t care because they don’t suffer the losses. It’s a classic market failure, and government intervention is how we have to fix the problem.

It’s only when the costs of insecurity exceed the costs of doing it right that companies will invest properly in our security. Companies need to be responsible for the personal information they store about us. They need to secure it better, and they need to suffer penalties if they improperly release it. This means regulatory security standards.

The government should not mandate how a company secures our data; that will move the responsibility to the government and stifle innovation. Instead, government should establish minimum standards for results, and let the market figure out how to do it most effectively. It should allow individuals whose information has been exposed to sue for damages. This is a model that has worked in all other aspects of public safety, and it needs to be applied here as well.

He’s right. Only when the costs of insecurity exceed the costs of doing it right will companies invest properly in it. And governments can fix that, quickly, by changing the law. For once, this is something that’s not difficult to do, even in a democracy.

So even Apple can’t break into my iPhone?

Hmmm… I wonder. This from SiliconBeat:

Apple says it would be burdensome — and mostly impossible — for it to unlock people’s iPhones upon the request of law enforcement.

In a legal filing this week, the iPhone maker answered a question posed by U.S. Magistrate Judge James Orenstein, who had been urged by federal prosecutors to force Apple to unlock an iPhone. Orenstein said last week that he would defer ruling until Apple let him know whether it’s feasible to bypass an iPhone’s passcode.

Here’s the meat of Apple’s response, which comes amid law enforcement officials’ growing frustration over tech companies’ increased privacy and security efforts:

“In most cases now and in the future, the government’s requested order would be substantially burdensome, as it would be impossible to perform. For devices running iOS 8 or higher, Apple would not have the technical ability to do what the government requests—take possession of a password protected device from the government and extract unencrypted user data from that device for the government. Among the security features in iOS 8 is a feature that prevents anyone without the device’s passcode from accessing the device’s encrypted data. This includes Apple.”
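The point buried in that legalese is architectural: on these devices the decryption key is derived on the phone from the user’s passcode (entangled, on real iPhones, with a device-specific hardware key), so a vendor that never sees the passcode has nothing it could hand over. A toy sketch of the principle, assuming Python’s hashlib and the third-party cryptography package; it illustrates the idea only, not Apple’s actual implementation:

```python
# Toy illustration of passcode-derived encryption, NOT Apple's actual design.
import os, base64, hashlib
from cryptography.fernet import Fernet  # third-party 'cryptography' package

def key_from_passcode(passcode: str, salt: bytes) -> bytes:
    # Slow key derivation makes brute-forcing short passcodes expensive.
    raw = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a base64 key

salt = os.urandom(16)                       # stored on the device
key = key_from_passcode("123456", salt)     # exists only when the passcode is typed
stored = Fernet(key).encrypt(b"user data")  # what sits in flash is ciphertext

# Anyone without the passcode, the manufacturer included, holds only 'stored'.
assert Fernet(key_from_passcode("123456", salt)).decrypt(stored) == b"user data"
```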

Want to network your Jeep Cherokee? Try smoke signals: they’re safer

This morning’s Observer column:

‘‘Jeep Cherokee hacked in demo; Chrysler owners urged to download patch”, was the heading on an interesting story last week. “Just imagine,” burbled the report, “one moment you’re listening to some pleasant pop hits on the radio, and the next moment the hip-hop station is blasting at full volume – and you can’t change it back! This is just one of the exploits of Charlie Miller and Chris Valasek … when they hacked into a Jeep Cherokee. They were able to change the temperature of the air conditioning, turn on the windshield wipers and blast the wiper fluid to blur the glass, and even disable the brakes, turn off the transmission, take control of the steering, and display their faces onto the dashboard’s screen.”

In some ways, this was an old story: cars have been largely governed by electronics since the 1980s, and anyone who controls the electronics controls the car. But up to now, the electronics have not been connected to the internet. What makes the Jeep Cherokee story interesting is that its electronics were hacked via the internet. And that was possible because internet connectivity now comes as a consumer option – Uconnect – from Chrysler.

If at this point you experience a sinking feeling, then join the club. So let us return to first principles for a moment…

Read on

LATER: Chrysler has issued a recall for 1.4 million vehicles as a result of the hacking revelations.

Common sense about hacking

From the Economist blog:

For companies, there are two strategies for dealing with people who uncover flaws in their IT security: a right way and a wrong way. Our leader on hacking this week tells of the approach that Volkswagen took when a group of academics informed it that they had uncovered a vulnerability in a remote-car-key system: the firm slapped a court injunction on them. It is difficult to conceive of an approach more likely to be counter-productive.

United Airlines, it seems, has a far more enlightened attitude. It has just awarded two hackers 1m air miles each after they managed to spot security weak spots in its website. The move is part of a scheme called “bug bounty”, in which hackers are incentivised to contact the company with security flaws, rather than post them online. This approach is common at Silicon Valley firms, and makes just as much sense for old-fashioned industries too. Pound to a penny, there are nefarious types out there trying to break into most big companies’ IT systems. Encouraging “white-hat” hackers to uncover flaws, and then rewarding them for not revealing them to the wider world, may sit uncomfortably with people’s sense of fairness. However, if it gives firms time to fix the problem, in pragmatic terms the benefit is obvious.

Yep.

Old cryptopanic in new iBottles

Repeat after me:

A ‘backdoor’ for law enforcement is a deliberately introduced security vulnerability, a form of architected breach.

Or, if you’d like the more sophisticated version:

It requires a system to be designed to permit access to a user’s data against the user’s wishes, and such a system is necessarily less secure than one designed without such a feature. As computer scientist Matthew Green explains in a recent Slate column (and, with several eminent colleagues, in a longer 2013 paper) it is damn near impossible to create a security vulnerability that can only be exploited by “the good guys.” Activist Eva Galperin puts the point pithily: “Once you build a back door, you rarely get to decide who walks through it.” Even if your noble intention is only to make criminals more vulnerable to police, the unavoidable cost of doing so in practice is making the overwhelming majority of law-abiding users more vulnerable to criminals.
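The “architected breach” point is easy to make concrete. The simplest form of backdoor is key escrow: every message is encrypted not only to its intended recipient but also to an extra “law enforcement” key. A hypothetical sketch using the PyNaCl library (no real system’s design is implied) shows where the trouble lives:

```python
# Hypothetical sketch of key escrow, the simplest kind of "backdoor".
# Uses PyNaCl's SealedBox; not modelled on any real product.
from nacl.public import PrivateKey, SealedBox

recipient = PrivateKey.generate()
escrow = PrivateKey.generate()       # the "good guys'" master key

message = b"meet at noon"

# A backdoored sender encrypts to the recipient AND to the escrow key.
for_recipient = SealedBox(recipient.public_key).encrypt(message)
for_escrow = SealedBox(escrow.public_key).encrypt(message)

# The recipient can read their copy...
assert SealedBox(recipient).decrypt(for_recipient) == message
# ...and so can anyone who obtains the escrow private key: an insider,
# a foreign intelligence service, or a criminal who steals it.
assert SealedBox(escrow).decrypt(for_escrow) == message
```

The whole system is now only as secure as the secrecy of that single escrow key, which is exactly what Galperin’s back-door line is getting at.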

Bruce Schneier’s next book

Title: Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World

Publisher: WW Norton

Publication date: March 9, 2015

Table of Contents

Part 1: The World We’re Creating
Chapter 1: Data as a By-Product of Computing
Chapter 2: Data as Surveillance
Chapter 3: Analyzing our Data
Chapter 4: The Business of Surveillance
Chapter 5: Government Surveillance and Control
Chapter 6: Consolidation of Institutional Surveillance

Part 2: What’s at Stake
Chapter 7: Political Liberty and Justice
Chapter 8: Commercial Fairness and Equality
Chapter 9: Business Competitiveness
Chapter 10: Privacy
Chapter 11: Security

Part 3: What to Do About It
Chapter 12: Principles
Chapter 13: Solutions for Government
Chapter 14: Solutions for Corporations
Chapter 15: Solutions for the Rest of Us
Chapter 16: Social Norms and the Big Data Trade-Off

Something to be pre-ordered, methinks.

Neoliberalism’s revolving door

Well, guess what? The former Head of the NSA has found a lucrative retirement deal.

As the four-star general in charge of U.S. digital defenses, Keith Alexander warned repeatedly that the financial industry was among the likely targets of a major attack. Now he’s selling the message directly to the banks.

Joining a crowded field of cyber-consultants, the former National Security Agency chief is pitching his services for as much as $1 million a month. The audience is receptive: Under pressure from regulators, lawmakers and their customers, financial firms are pouring hundreds of millions of dollars into barriers against digital assaults.

Alexander, who retired in March from his dual role as head of the NSA and the U.S. Cyber Command, has since met with the largest banking trade groups, stressing the threat from state-sponsored attacks bent on data destruction as well as hackers interested in stealing information or money.

“It would be devastating if one of our major banks was hit, because they’re so interconnected,” Alexander said in an interview.

Nice work if you can get it. First of all, you use your position in the state bureaucracy to scare the shit out of banks. Then you pitch your services as the guy who can help them escape Nemesis.