And the USA’s greatest cybersecurity vulnerability is… its President

This morning’s Observer column:

My favourite image of the week was a picture of the Queen opening the National Cyber Security Centre in London. Her Majesty is looking bemusedly at a large display while a member of staff explains how hackers could target the nation’s electricity supply. The job of the centre’s director, Ciaran Martin, is to protect the nation from such dangers. It’s a heavy responsibility, but at least he doesn’t have to worry that his head of state is a cybersecurity liability.

His counterpart in the United States does not have that luxury…

Read on

So the government is serious about cybersecurity? Really?

This morning’s Observer column:

On Tuesday, the chancellor, Philip Hammond, announced that the government was “investing” £1.9bn in boosting the nation’s cybersecurity. “If we want Britain to be the best place in the world to be a tech business,” he said, “then it is also crucial that Britain is a safe place to do digital business… Just as technology presents huge opportunities for our economy – so too it poses a risk. Trust in the internet and the infrastructure on which it relies is fundamental to our economic future. Because without that trust, faith in the whole digital edifice will fall away.”

Quite so; cybersecurity is clearly important. After all, in its 2015 strategic defence and security review, the government classified “cyber” as a “tier 1” threat. That’s the same level as international military conflict and terrorism. So let’s look at the numbers. The UK’s defence budget currently runs at £35.1bn, while the country’s expenditure on counterterrorism is now running at about £3bn a year. That puts Hammond’s £1.9bn (a commitment he inherited from George Osborne, by the way) into perspective. And the money is to be spent over five years, so an uncharitable reading of the chancellor’s announcement is that the government is actually investing just under £400m annually in combating this tier 1 threat.

All of which suggests that there’s a yawning chasm between Hammond’s stirring rhetoric about the cyber threat and his ability to muster the resources needed to combat it…

Read on
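
If you want to check the column’s sums, the arithmetic takes only a few lines of Python (all figures are the ones quoted in the piece; nothing else is assumed):

```python
# The column's arithmetic, spelled out. All figures are as quoted above.
cyber_total = 1.9e9         # £1.9bn cybersecurity commitment
years = 5                   # spread over five years
defence_annual = 35.1e9     # £35.1bn annual defence budget
counterterror_annual = 3e9  # roughly £3bn a year on counterterrorism

cyber_annual = cyber_total / years
print(f"Cybersecurity spend per year: £{cyber_annual / 1e6:.0f}m")                    # £380m
print(f"Relative to the defence budget: {cyber_annual / defence_annual:.1%}")         # about 1.1%
print(f"Relative to counterterrorism spending: {cyber_annual / counterterror_annual:.0%}")  # about 13%
```

Roughly £380m a year, or about 1% of the defence budget, for a threat the government itself ranks alongside military conflict and terrorism.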

Brought down by a toaster?

As readers of my stuff will know (see here and here, for example), I’ve been going on about the existential risk posed by the ‘internet of things’ for a while, so I’m loath to keep on about it. But this nice encapsulation of the problem by Ben Evans seems well worth quoting:

A chunk of the internet went down this week, effectively, because someone did a massive distributed denial-of-service attack using a botnet of millions of hacked IoT devices – mostly, it seems, IP webcams from one Chinese company that don’t have decent security. This is an interesting structural problem – the devices once sold are either impossible or unlikely to be patched, the users probably don’t even know that their device is hacked, and the manufacturer has no motivation and probably few of the necessary skills to do anything about it. A network designed to withstand nuclear attack, brought down by toasters. More interesting/worrying – who is doing this, why, and what will they do next?

How your shower could participate in a DDoS attack

This morning’s Observer column:

My eye was caught by a Kickstarter campaign for a gizmo called a SWON, described as “a connected conservation device for your shower”. You unscrew the shower head, screw on the SWON and then screw the head back on to it. From then on, water goes through the SWON before it reaches you. The Kickstarter campaign needs $50,000 to be pledged before the product can be made. Last time I checked, it had 75 backers and had raised pledges of $4,798.

Before consigning it to the “leading-edge uselessness” bin, I clicked on the link…

Read on

The Internet of Insecure Things is up and running

This morning’s Observer column:

Brian Krebs is one of the unsung heroes of tech journalism. He’s a former reporter for the Washington Post who decided to focus on cybercrime after his home network was hijacked by Chinese hackers in 2001. Since then, he has become one of the world’s foremost investigators of online crime. In the process, he has become an expert on the activities of the cybercrime groups that operate in eastern Europe and which have stolen millions of dollars from small- to medium-size businesses through online banking fraud. His reporting has identified the crooks behind specific scams and even led to the arrest of some of them.

Krebs runs a blog – Krebs on Security – which is a must-read for anyone interested in these matters. Sometimes, one fears for his safety, because he must have accumulated so many enemies in the dark underbelly of the net. And last Tuesday one of them struck back.

The attack began at 8pm US eastern time, when his site was suddenly hit by a distributed denial of service (DDoS) attack…

Read on

Collateral damage and the NSA’s stash of cyberweapons

This morning’s Observer column:

All software has bugs and all networked systems have security holes in them. If you wanted to build a model of our online world out of cheese, you’d need emmental to make it realistic. These holes (vulnerabilities) are constantly being discovered and patched, but the process by which this happens is, inevitably, reactive. Someone discovers a vulnerability, reports it either to the software company that wrote the code or to US-CERT, the United States Computer Emergency Readiness Team. A fix for the vulnerability is then devised and a “patch” is issued by computer security companies such as Kaspersky and/or by software and computer companies. At the receiving end, it is hoped that computer users and network administrators will then install the patch. Some do, but many don’t, alas.

It’s a lousy system, but it’s the only one we’ve got. It has two obvious flaws. The first is that the response always lags behind the threat by days, weeks or months, during which the malicious software that exploits the vulnerability is doing its ghastly work. The second is that it is completely dependent on people reporting the vulnerabilities that they have discovered.

Zero-day vulnerabilities are the unreported ones…

Read on
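
To make the lag problem concrete, here is a minimal, purely illustrative Python sketch. The host names and dates are all invented; the point is simply that the exposure window opens when a vulnerability is disclosed and only closes when, or if, each machine actually installs the patch:

```python
from datetime import date

# A hypothetical advisory (dates are made up for illustration).
DISCLOSED = date(2016, 9, 1)        # vulnerability becomes public
PATCH_RELEASED = date(2016, 9, 20)  # vendor ships a fix

# A toy inventory: the date each machine applied the patch, or None if it never did.
INVENTORY = {
    "web-server-1": date(2016, 9, 22),
    "web-server-2": date(2016, 11, 5),
    "office-laptop": None,
}

def exposure_days(patched_on, today=date(2016, 12, 1)):
    """Days a machine has been exposed since disclosure."""
    end = patched_on if patched_on else today   # unpatched machines are still exposed
    return (end - DISCLOSED).days

for host, patched_on in INVENTORY.items():
    status = "patched" if patched_on else "STILL VULNERABLE"
    print(f"{host}: {status}, exposed for {exposure_days(patched_on)} days")
```

Multiply the unpatched case by the millions of machines whose owners never hear about the advisory and you have the gap the column describes, before even considering the vulnerabilities that nobody has reported at all.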

Foreign interference in voting systems is a national security issue

Good WashPo op-ed piece by Bruce Schneier on the implications of (i) Russian hacking of the DNC computer systems and (ii) the revelations about the insecurity of US voting machines:

Over the years, more and more states have moved to electronic voting machines and have flirted with Internet voting. These systems are insecure and vulnerable to attack.

But while computer security experts like me have sounded the alarm for many years, states have largely ignored the threat, and the machine manufacturers have thrown up enough obfuscating babble that election officials are largely mollified.

We no longer have time for that. We must ignore the machine manufacturers’ spurious claims of security, create tiger teams to test the machines’ and systems’ resistance to attack, drastically increase their cyber-defenses and take them offline if we can’t guarantee their security online.

Longer term, we need to return to election systems that are secure from manipulation. This means voting machines with voter-verified paper audit trails, and no Internet voting. I know it’s slower and less convenient to stick to the old-fashioned way, but the security risks are simply too great.

Apple vs. FBI ought to have gone to the Supreme Court

Today’s Observer column:

So the FBI sought a court order to compel Apple to write a special version of the operating system without this ingenious destructive mechanism – which could then be downloaded to the phone. Apple refused, on various grounds both technological and legalistic, and the stage was set – so some of us thought – for a legal battle that would go all the way to the supreme court.

In the end, it didn’t happen. The FBI bought a hack from an Israeli security company which had already found a way round the problem, called off the lawsuit, and nobody got their day in front of the supremes. Which was a pity, because it means that a really important question posed by digital technology remains unresolved. Put simply, it’s this: what limits, if any, should be placed on the power of encryption technology to render citizens’ communications invisible to law enforcement and security authorities?

Read on

The significance of WhatsApp encryption

This morning’s Observer column:

In some ways, the biggest news of the week was not the Panama papers but the announcement that WhatsApp was rolling out end-to-end encryption for all its 1bn users. “From now on,” it said, “when you and your contacts use the latest version of the app, every call you make, and every message, photo, video, file and voice message you send, is end-to-end encrypted by default, including group chats.”

This is a big deal because it lifts encryption out of the for-geeks-only category and into the mainstream. Most people who use WhatsApp wouldn’t know a hash function if it bit them on the leg. Although strong encryption has been available to the public ever since Phil Zimmermann wrote and released PGP (Pretty Good Privacy) in 1991, it never realised its potential because the technicalities of setting it up for personal use defeated most lay users.

So the most significant thing about WhatsApp’s innovation is the way it renders invisible all the geekery necessary to set up and maintain end-to-end encryption…

Read on
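
For anyone wondering what “end-to-end” actually means in practice, here is a toy sketch using the PyNaCl library (pip install pynacl). To be clear, this is not the Signal protocol that WhatsApp has deployed; it just shows the underlying public-key idea that a message encrypted for the recipient can be read only with the recipient’s private key, so whatever carries the message in between sees nothing but ciphertext:

```python
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only the public halves ever need to be shared.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sealed = Box(alice_private, bob_private.public_key).encrypt(b"Meet at noon")

# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_private, alice_private.public_key).decrypt(sealed)
print(plaintext)   # b'Meet at noon'

# A relay server holding `sealed` but lacking Bob's private key sees only
# an opaque blob; it cannot recover the message.
```

The significance of WhatsApp’s move is that its users never see any of this: the key generation and exchange happen silently inside the app, which is precisely the geekery that defeated most would-be PGP users.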

Why the Apple vs. FBI case is important

This morning’s Observer column:

No problem, thought the Feds: we’ll just get a court order forcing Apple to write a special version of the operating system that will bypass this security provision and then download it to Farook’s phone. They got the order, but Apple refused point-blank to comply – on several grounds: since computer code is speech, the order violated the first amendment because it would be “compelled speech”; because being obliged to write the code amounted to “forced labour”, it would also violate the fifth amendment; and it was too dangerous because it would create a backdoor that could be exploited by hackers and nation states and potentially put a billion users of Apple devices at risk.

The resulting public furore offers a vivid illustration of how attempting a reasoned public debate about encryption is like trying to discuss philosophy using smoke signals. Leaving aside the purely clueless contributions from clowns like Piers Morgan and Donald Trump, and the sanctimonious platitudes from Obama downwards about “no company being above the law”, there is an alarmingly widespread failure to appreciate what is at stake here. We are building a world that is becoming totally dependent on network technology. Since there is no possibility of total security in such a world, we have to use any tool that offers at least some measure of protection, for both individual citizens and institutions. In that context, strong encryption along the lines of the stuff that Apple and some other companies are building into their products and services is the only game in town.

Read on