Collateral damage and the NSA’s stash of cyberweapons

This morning’s Observer column:

All software has bugs and all networked systems have security holes in them. If you wanted to build a model of our online world out of cheese, you’d need emmental to make it realistic. These holes (vulnerabilities) are constantly being discovered and patched, but the process by which this happens is, inevitably, reactive. Someone discovers a vulnerability and reports it either to the software company that wrote the code or to US-CERT, the United States Computer Emergency Readiness Team. A fix for the vulnerability is then devised and a “patch” is issued by computer security companies such as Kaspersky and/or by software and computer companies. At the receiving end, it is hoped that computer users and network administrators will then install the patch. Some do, but many don’t, alas.

It’s a lousy system, but it’s the only one we’ve got. It has two obvious flaws. The first is that the response always lags behind the threat by days, weeks or months, during which the malicious software that exploits the vulnerability is doing its ghastly work. The second is that it is completely dependent on people reporting the vulnerabilities that they have discovered.

Zero-day vulnerabilities are the unreported ones…

Read on
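
The weakest link in that chain is the last step. At bottom, “install the patch” is a version comparison: is what you’re running older than the release that closed the hole? Here is a toy sketch in Python of that check (the version numbers are invented, and real patch-management tools consume signed advisory feeds rather than hard-coded values):

    # Toy sketch of the check behind "is this machine patched?".
    # The versions are hypothetical, not taken from any real advisory.
    from packaging.version import Version

    installed = Version("2.4.1")   # what this machine is running
    fixed_in = Version("2.4.7")    # first release with the hole closed

    if installed < fixed_in:
        print(f"Vulnerable: upgrade to {fixed_in} or later")
    else:
        print("Patched")

Everything upstream of that check (discovery, reporting, devising the fix) is human and slow, which is why the response lags the threat by days or weeks.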

Foreign interference in voting systems is a national security issue

Good WashPo op-ed piece by Bruce Schneier on the implications of (i) Russian hacking of the DNC computer systems and (ii) the revelations about the insecurity of US voting machines:

Over the years, more and more states have moved to electronic voting machines and have flirted with Internet voting. These systems are insecure and vulnerable to attack.

But while computer security experts like me have sounded the alarm for many years, states have largely ignored the threat, and the machine manufacturers have thrown up enough obfuscating babble that election officials are largely mollified.

We no longer have time for that. We must ignore the machine manufacturers’ spurious claims of security, create tiger teams to test the machines’ and systems’ resistance to attack, drastically increase their cyber-defenses and take them offline if we can’t guarantee their security online.

Longer term, we need to return to election systems that are secure from manipulation. This means voting machines with voter-verified paper audit trails, and no Internet voting. I know it’s slower and less convenient to stick to the old-fashioned way, but the security risks are simply too great.

Apple vs. FBI ought to have gone to the Supreme Court

Today’s Observer column:

So the FBI sought a court order to compel Apple to write a special version of the operating system without this ingenious destructive mechanism – which could then be downloaded to the phone. Apple refused, on various grounds both technological and legalistic, and the stage was set – so some of us thought – for a legal battle that would go all the way to the supreme court.

In the end, it didn’t happen. The FBI bought a hack from an Israeli security company which had already found a way round the problem, dropped the legal action, and nobody got their day in front of the supremes. Which was a pity, because it means that a really important question posed by digital technology remains unresolved. Put simply, it’s this: what limits, if any, should be placed on the power of encryption technology to render citizens’ communications invisible to law enforcement and security authorities?

Read on

The significance of WhatsApp encryption

This morning’s Observer column:

In some ways, the biggest news of the week was not the Panama papers but the announcement that WhatsApp was rolling out end-to-end encryption for all its 1bn users. “From now on,” it said, “when you and your contacts use the latest version of the app, every call you make, and every message, photo, video, file and voice message you send, is end-to-end encrypted by default, including group chats.”

This is a big deal because it lifts encryption out of the for-geeks-only category and into the mainstream. Most people who use WhatsApp wouldn’t know a hash function if it bit them on the leg. Although strong encryption has been available to the public ever since Phil Zimmermann wrote and released PGP (Pretty Good Privacy) in 1991, it never realised its potential because the technicalities of setting it up for personal use defeated most lay users.

So the most significant thing about WhatsApp’s innovation is the way it renders invisible all the geekery necessary to set up and maintain end-to-end encryption…

Read on
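
For the curious, here is a rough sketch in Python (using the widely deployed cryptography library) of the kind of machinery WhatsApp now runs silently on users’ behalf: a Diffie–Hellman key agreement followed by authenticated encryption. The real thing is the Signal protocol, which adds identity verification, key ratcheting and much else; this is only the skeleton:

    # Bare-bones end-to-end encryption: both parties derive the same key
    # without ever transmitting it, then use it to encrypt. Only a skeleton
    # of what WhatsApp (via the Signal protocol) actually does.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each party generates a key pair; only the public halves are exchanged.
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()

    # Each side combines its own private key with the other's public key
    # and arrives at the same shared secret (Diffie-Hellman).
    shared = alice.exchange(bob.public_key())
    assert shared == bob.exchange(alice.public_key())

    # Derive a symmetric key from the shared secret.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"demo").derive(shared)

    # Alice encrypts; only a holder of the derived key can decrypt.
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"hello Bob", None)
    print(ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None))

Every one of those steps used to be something PGP users had to get right by hand (key generation, key exchange, verification); that, rather than the mathematics, is what kept strong encryption out of the mainstream.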

Why the Apple vs. the FBI case is important

This morning’s Observer column:

No problem, thought the Feds: we’ll just get a court order forcing Apple to write a special version of the operating system that will bypass this security provision and then download it to Farook’s phone. They got the order, but Apple refused point-blank to comply – on several grounds: since computer code is speech, the order violated the first amendment because it would be “compelled speech”; because being obliged to write the code amounted to “forced labour”, it would also violate the fifth amendment; and it was too dangerous because it would create a backdoor that could be exploited by hackers and nation states and potentially put a billion users of Apple devices at risk.

The resulting public furore offers a vivid illustration of how attempting a reasoned public debate about encryption is like trying to discuss philosophy using smoke signals. Leaving aside the purely clueless contributions from clowns like Piers Morgan and Donald Trump, and the sanctimonious platitudes from Obama downwards about “no company being above the law”, there is an alarmingly widespread failure to appreciate what is at stake here. We are building a world that is becoming totally dependent on network technology. Since there is no possibility of total security in such a world, we have to use any tool that offers at least some measure of protection, for both individual citizens and institutions. In that context, strong encryption along the lines of the stuff that Apple and some other companies are building into their products and services is the only game in town.

Read on

Living with the surveillance state

Point made well by Bill Keller in a thoughtful column:

The danger, it seems to me, is not surveillance per se. We have already decided, most of us, that life on the grid entails a certain amount of intrusion. Nor is the danger secrecy, which, as Posner notes, “is ubiquitous in a range of uncontroversial settings,” a promise the government makes to protect “taxpayers, inventors, whistle-blowers, informers, hospital patients, foreign diplomats, entrepreneurs, contractors, data suppliers and many others.”

The danger is the absence of rigorous, independent regulation and vigilant oversight to keep potential abuses of power from becoming a real menace to our freedom. The founders created a system of checks and balances, but the safeguards have not kept up with technology. Instead, we have an executive branch in a leak-hunting frenzy, a Congress that treats oversight as a form of partisan combat, a political climate that has made “regulation” an expletive and a public that feels a generalized, impotent uneasiness. I don’t think we’re on a slippery slope to a police state, but I think if we are too complacent about our civil liberties we could wake up one day and find them gone — not in a flash of nuclear terror but in a gradual, incremental surrender.

Let’s turn the TalkTalk hacking scandal into a crisis

Yesterday’s Observer column:

The political theorist David Runciman draws a useful distinction between scandals and crises. Scandals happen all the time in society; they create a good deal of noise and heat, but in the end nothing much happens. Things go back to normal. Crises, on the other hand, do eventually lead to structural change, and in that sense play an important role in democracies.

So a good question to ask whenever something bad happens is whether it heralds a scandal or a crisis. When the phone-hacking story eventually broke, for example, many people (me included) thought that it represented a crisis. Now, several years – and a judicial enquiry – later, nothing much seems to have changed. Sure, there was a lot of sound and fury, but it signified little. The tabloids are still doing their disgraceful thing, and Rebekah Brooks is back in the saddle. So it was just a scandal, after all.

When the TalkTalk hacking story broke and I heard the company’s chief executive say in a live radio interview that she couldn’t say whether the customer data that had allegedly been stolen had been stored in encrypted form, the Runciman question sprang immediately to mind. That the boss of a communications firm should be so ignorant about something so central to her business certainly sounded like a scandal…

Read on
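
For non-geeks wondering what “stored in encrypted form” means in practice, here is a minimal sketch in Python using the cryptography library (the customer record is invented): the record is encrypted before it touches the database, so a thief who steals the database gets ciphertext rather than card numbers, provided the key is kept elsewhere.

    # Minimal sketch of encryption at rest: without the key, a stolen
    # database yields ciphertext, not customer data. The record is invented.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # must be stored apart from the database
    vault = Fernet(key)

    record = b"name=J. Bloggs; card=4111111111111111"
    stored = vault.encrypt(record)  # this is what goes into the database

    print(stored)                   # gibberish to anyone without the key
    print(vault.decrypt(stored))    # readable again only with the key

Which is why “I can’t say whether it was encrypted” was such a damning answer: it is a single design decision, and the boss of a communications firm ought to know whether it was taken.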

LATER: Interesting blog post by Bruce Schneier. He opens with an account of how the CIA’s Director and the software developer Grant Blakeman had their email accounts hacked. Then,

Neither of them should have been put through this. None of us should have to worry about this.

The problem is a system that makes this possible, and companies that don’t care because they don’t suffer the losses. It’s a classic market failure, and government intervention is how we have to fix the problem.

It’s only when the costs of insecurity exceed the costs of doing it right that companies will invest properly in our security. Companies need to be responsible for the personal information they store about us. They need to secure it better, and they need to suffer penalties if they improperly release it. This means regulatory security standards.

The government should not mandate how a company secures our data; that will move the responsibility to the government and stifle innovation. Instead, government should establish minimum standards for results, and let the market figure out how to do it most effectively. It should allow individuals whose information has been exposed to sue for damages. This is a model that has worked in all other aspects of public safety, and it needs to be applied here as well.

He’s right. Only when the costs of insecurity exceed the costs of doing it right will companies invest properly in it. And governments can fix that, quickly, by changing the law. For once, this is something that’s not difficult to do, even in a democracy.

So even Apple can’t break into my iPhone?

Hmmm… I wonder. This from SiliconBeat:

Apple says it would be burdensome — and mostly impossible — for it to unlock people’s iPhones upon the request of law enforcement.

In a legal filing this week, the iPhone maker answered a question posed by U.S. Magistrate Judge James Orenstein, who had been urged by federal prosecutors to force Apple to unlock an iPhone. Orenstein said last week that he would defer ruling until Apple let him know whether it’s feasible to bypass an iPhone’s passcode.

Here’s the meat of Apple’s response, which comes amid law enforcement officials’ growing frustration over tech companies’ increased privacy and security efforts:

“In most cases now and in the future, the government’s requested order would be substantially burdensome, as it would be impossible to perform. For devices running iOS 8 or higher, Apple would not have the technical ability to do what the government requests—take possession of a password protected device from the government and extract unencrypted user data from that device for the government. Among the security features in iOS 8 is a feature that prevents anyone without the device’s passcode from accessing the device’s encrypted data. This includes Apple.”
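
The arithmetic behind that claim is worth spelling out. In rough sketch form (this is illustrative, not Apple’s actual scheme, which also entangles the passcode with a key fused into the phone’s hardware), the key that decrypts the data is derived from the passcode itself, so without the passcode there is no key, for Apple or for anyone else:

    # Illustrative sketch only - not Apple's actual scheme, which also
    # mixes in a per-device key fused into the hardware. The point: the
    # decryption key is derived from the passcode, so no passcode, no key.
    import base64
    import os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    salt = os.urandom(16)  # stands in for the per-device hardware key

    def key_from_passcode(passcode: bytes) -> Fernet:
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)  # deliberately slow
        return Fernet(base64.urlsafe_b64encode(kdf.derive(passcode)))

    encrypted = key_from_passcode(b"1234").encrypt(b"user data")

    # The right passcode recovers the data; a wrong one derives a
    # different key and decryption simply fails.
    print(key_from_passcode(b"1234").decrypt(encrypted))

The deliberately slow key derivation is what makes brute-force guessing expensive, and the auto-erase option (wipe after ten wrong passcodes) is what rules it out altogether.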

Want to network your Jeep Cherokee? Try smoke signals: they’re safer

This morning’s Observer column:

“Jeep Cherokee hacked in demo; Chrysler owners urged to download patch”, was the heading on an interesting story last week. “Just imagine,” burbled the report, “one moment you’re listening to some pleasant pop hits on the radio, and the next moment the hip-hop station is blasting at full volume – and you can’t change it back! This is just one of the exploits of Charlie Miller and Chris Valasek … when they hacked into a Jeep Cherokee. They were able to change the temperature of the air conditioning, turn on the windshield wipers and blast the wiper fluid to blur the glass, and even disable the brakes, turn off the transmission, take control of the steering, and display their faces on the dashboard’s screen.”

In some ways, this was an old story: cars have been largely governed by electronics since the 1980s, and anyone who controls the electronics controls the car. But up to now, the electronics have not been connected to the internet. What makes the Jeep Cherokee story interesting is that its electronics were hacked via the internet. And that was possible because internet connectivity now comes as a consumer option – Uconnect – from Chrysler.

If at this point you experience a sinking feeling, then join the club. So let us return to first principles for a moment…

Read on
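
The first principles are easy to see in code. Miller and Valasek’s Uconnect hole mattered because it put them on the car’s internal network, the CAN bus, and CAN frames carry no sender authentication: any node can issue any command, and the bus cannot tell the steering-column stalk from a compromised entertainment unit. A hedged sketch using the python-can library (the arbitration ID and payload are invented, not real Jeep values):

    # Sketch of why CAN-bus access equals control: frames carry no sender
    # authentication, so any connected node can issue any command.
    # The arbitration ID and payload are invented, not real Jeep values.
    import can

    # Connect to the vehicle bus (a SocketCAN interface on Linux).
    bus = can.Bus(interface="socketcan", channel="can0")

    # A hypothetical "wipers on" command. Nothing on the bus checks
    # where it came from before the wiper module obeys it.
    frame = can.Message(arbitration_id=0x2E1, data=[0x01],
                        is_extended_id=False)
    bus.send(frame)
    bus.shutdown()

Chrysler’s patch closes the internet-facing door; it does nothing about the underlying design, in which every node on the bus trusts every other.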

LATER: Chrysler has issued a recall for 1.4 million vehicles as a result of the hacking revelations.

Common sense about hacking

From the Economist blog:

For companies, there are two strategies for dealing with people who uncover flaws in their IT security: a right way and a wrong way. Our leader on hacking this week tells of the approach that Volkswagen took when a group of academics informed it that they had uncovered a vulnerability in a remote-car-key system: the firm slapped a court injunction on them. It is difficult to conceive of an approach more likely to be counter-productive.

United Airlines, it seems, has a far more enlightened attitude. It has just awarded two hackers 1m air miles each after they managed to spot security weak spots in its website. The move is part of a scheme called “bug bounty”, in which hackers are incentivised to contact the company with security flaws, rather than post them online. This approach is common at Silicon Valley firms, and makes just as much sense for old-fashioned industries too. Pound to a penny, there are nefarious types out there trying to break into most big companies’ IT systems. Encouraging “white-hat” hackers to uncover flaws, and then rewarding them for not revealing them to the wider world, may sit uncomfortably with people’s sense of fairness. However, if it gives firms time to fix the problem, in pragmatic terms the benefit is obvious.

Yep.