The technical is political. Now what?

Bruce Schneier has been valiantly going on about this for a while. Once upon a time, digital technology didn’t have many social, political or democratic ramifications. Those days are over. Universities, companies, software engineers and governments need to think about this — and tool up for it. Here’s an excerpt from one of Bruce’s recent posts on the subject:

Technology now permeates society in a way it didn’t just a couple of decades ago, and governments move too slowly to take this into account. That means technologists now are relevant to all sorts of areas that they had no traditional connection to: climate change, food safety, future of work, public health, bioengineering.

More generally, technologists need to understand the policy ramifications of their work. There’s a pervasive myth in Silicon Valley that technology is politically neutral. It’s not, and I hope most people reading this today know that. We built a world where programmers felt they had an inherent right to code the world as they saw fit. We were allowed to do this because, until recently, it didn’t matter. Now, too many issues are being decided in an unregulated capitalist environment where significant social costs are too often not taken into account.

This is where the core issues of society lie. The defining political question of the 20th century was: “What should be governed by the state, and what should be governed by the market?” This defined the difference between East and West, and the difference between political parties within countries. The defining political question of the first half of the 21st century is: “How much of our lives should be governed by technology, and under what terms?” In the last century, economists drove public policy. In this century, it will be technologists.

The future is coming faster than our current set of policy tools can deal with. The only way to fix this is to develop a new set of policy tools with the help of technologists. We need to be in all aspects of public-interest work, from informing policy to creating tools to building the future. The world needs all of our help.

Yep.

The cost of insecurity (not to mention of Windows XP)

From The Inquirer:

THE WANNACRY RANSOMWARE ATTACK cost the already cash-strapped NHS almost £100m, the Department of Health and Social Care (DHSC) estimates.

Until now, the financial damage caused by the sweeping cyber attack – which it’s now been revealed affected 8 per cent of GP clinics and forced the NHS to cancel 19,000 appointments – has been unclear, but the DHSC estimates in a new report that the total cost came in at £92m.

WannaCry cost approximately £19m in lost output, while a whopping £73m was racked up in IT costs in the aftermath of the attack, according to the report. Some £72m was spent on restoring systems and data in the weeks after the attack struck.

“We recognise that at the time of the attack the focus would have been on patient care rather than working out what WannaCry was costing the NHS,” the report says.

Following the attack, the NHS has pledged to upgrade all of its systems to Windows 10 after it was found that the service’s outdated and unpatched Windows XP and Windows 7 systems were largely to blame.

The great Chinese hardware hack: true or false?

This morning’s Observer column:

On 4 October, Bloomberg Businessweek published a major story under the headline “The Big Hack: How China Used a Tiny Chip to Infiltrate US Companies”. It claimed that Chinese spies had inserted a covert electronic backdoor into the hardware of computer servers used by 30 US companies, including Amazon and Apple (and possibly also servers used by national security agencies), by compromising America’s technology supply chain.

According to the Bloomberg story, the technology had been compromised during the manufacturing process in China. Undercover operatives from a unit of the People’s Liberation Army had inserted tiny chips – about the size of a grain of rice – into motherboards during the manufacturing process.

The affected hardware then made its way into high-end video-compression servers assembled by a San Jose company called Supermicro and deployed by major US companies and government agencies…

Read on

Sweeping the Net for… [take your pick]

From Ron Deibert:

The LGBTQ news website “Gay Today” is blocked in Bahrain; the website for Greenpeace International is blocked in the UAE; a matrimonial dating website is censored in Afghanistan; all of the World Health Organization’s website, including sub-pages with HIV/AIDS information, is blocked in Kuwait; an entire category of websites labeled “Sex Education” is censored in Sudan; in Yemen, an armed faction, the Houthis, orders the country’s main ISP to block regional and news websites.

What’s the common denominator linking these examples of Internet censorship? All of them were undertaken using technology provided by the Canadian company Netsweeper, Inc.

In a new Citizen Lab report published today, entitled Planet Netsweeper, we map the global proliferation of Netsweeper’s Internet filtering technology to 30 countries. We then focus our analysis on 10 countries with significant human rights, insecurity, or public policy issues in which Netsweeper systems are deployed on large consumer ISPs: Afghanistan, Bahrain, India, Kuwait, Pakistan, Qatar, Somalia, Sudan, UAE, and Yemen. The research was done using a combination of network measurement and in-country testing methods. One method involved scanning every one of the billions of IP addresses on the Internet to search for signatures we have developed for Netsweeper installations (think of it like an x-ray of the Internet).
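The Internet-wide scanning method described above can be sketched as follows. This is a conceptual illustration only: the signature strings are hypothetical stand-ins, not Netsweeper’s actual fingerprint, and the fetch step is injected so the matching logic is testable without network access.

```python
# Conceptual sketch of signature-based scanning, loosely modelled on the
# Citizen Lab method described above. The signatures below are made-up
# placeholders, not real Netsweeper fingerprints.

HYPOTHETICAL_SIGNATURES = [
    "webadmin/deny",                    # hypothetical block-page path fragment
    "X-Filter-Product: ExampleFilter",  # hypothetical response header
]

def looks_like_filter_installation(response_text: str) -> bool:
    """Return True if an HTTP response matches any known signature."""
    return any(sig in response_text for sig in HYPOTHETICAL_SIGNATURES)

def scan(hosts, fetch):
    """Probe each host with fetch() and return those that match.

    In a real scan, fetch(host) would issue an HTTP GET to the host;
    here it is injected as a parameter so the logic can be tested offline.
    """
    return [h for h in hosts if looks_like_filter_installation(fetch(h))]
```

In practice a scan of “every one of the billions of IP addresses” also needs massive parallelism and careful rate-limiting; the sketch shows only the fingerprint-matching core.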

National-level Internet censorship is a growing norm worldwide. It is also a big business opportunity for companies like Netsweeper. Netsweeper’s Internet filtering service works by dynamically categorizing Internet content, and then providing customers with options to choose categories they wish to block (e.g., “Matrimonial” in Afghanistan and “Sex Education” in Sudan). Customers can also create their own custom lists or add websites to categories of their own choosing.
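The filtering model described above — content bucketed into categories, with customers selecting categories to block and adding custom entries — can be sketched minimally. All category names and domains here are illustrative; this is not Netsweeper’s actual schema.

```python
# Minimal sketch of category-based URL filtering as described above:
# a category database plus a per-customer policy of blocked categories
# and a custom blocklist. Domains and categories are invented examples.

CATEGORY_DB = {
    "matrimonial-site.test": "Matrimonial",
    "sexed-info.test": "Sex Education",
    "daily-news.test": "News",
}

class FilterPolicy:
    def __init__(self, blocked_categories, custom_blocklist=()):
        self.blocked_categories = set(blocked_categories)
        self.custom_blocklist = set(custom_blocklist)

    def is_blocked(self, domain: str) -> bool:
        # Custom entries take effect regardless of category.
        if domain in self.custom_blocklist:
            return True
        return CATEGORY_DB.get(domain) in self.blocked_categories

# A hypothetical customer blocking one category plus one custom site.
policy = FilterPolicy(blocked_categories={"Matrimonial"},
                      custom_blocklist={"extra-site.test"})
```

The point of the sketch is how little the mechanism cares about *why* a category exists: blocking “Matrimonial” in Afghanistan is the same one-line policy choice as blocking spam.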

Netsweeper markets its services to a wide range of clients, from institutions like libraries to large ISPs that control national-level Internet connectivity. Our report highlights problems with the latter, and specifically the problems that arise when Internet filtering services are sold to ISPs in authoritarian regimes, or countries facing insecurity, conflict, human rights abuses, or corruption. In these cases, Netsweeper’s services can easily be abused to help facilitate draconian controls on the public sphere by stifling access to information and freedom of expression.

While there are a few categories that some might consider non-controversial—e.g., filtering of pornography and spam—there are others that definitely are not. For example, Netsweeper offers a filtering category called “Alternative Lifestyles,” in which it appears mostly legitimate LGBTQ content is targeted for convenient blocking. In our testing, we found this category was selected in the United Arab Emirates and was preventing Internet users from accessing the websites of the Gay & Lesbian Alliance Against Defamation (http://www.glaad.org) and the International Foundation for Gender Education (http://www.ifge.org), among many others. This kind of censorship, facilitated by Netsweeper technology, is part of a larger pattern of systemic discrimination, violence, and other human rights abuses against LGBTQ individuals in many parts of the world.

According to the United Nations Guiding Principles on Business and Human Rights, all companies have responsibilities to evaluate and take measures to mitigate the negative human rights impacts of their services on an ongoing basis. Despite many years of reporting and numerous questions from journalists and academics, Netsweeper still fails to take this obligation seriously.

The new normal: hardware vulnerabilities

From Bruce Schneier:

On January 3, the world learned about a series of major security vulnerabilities in modern microprocessors. Called Spectre and Meltdown, these vulnerabilities were discovered by several different researchers last summer, disclosed to the microprocessors’ manufacturers, and patched — at least to the extent possible.

This news isn’t really any different from the usual endless stream of security vulnerabilities and patches, but it’s also a harbinger of the sorts of security problems we’re going to be seeing in the coming years. These are vulnerabilities in computer hardware, not software. They affect virtually all high-end microprocessors produced in the last 20 years. Patching them requires large-scale coordination across the industry, and in some cases drastically affects the performance of the computers. And sometimes patching isn’t possible; the vulnerability will remain until the computer is discarded.

Spectre and Meltdown aren’t anomalies. They represent a new area to look for vulnerabilities and a new avenue of attack. They’re the future of security — and it doesn’t look good for the defenders.

Less haste, more security

This morning’s Observer column:

I ran into my favourite technophobe the other day. “I see,” he chortled, “that your tech industry (he holds me responsible for everything that is wrong with the modern world) is in meltdown!” The annoying thing is that he was partly right. What has happened is that two major security vulnerabilities – one of them has been christened “Meltdown”, the other “Spectre” – have been discovered in the Central Processing Unit (CPU) chips that power most of the computers in the world.

A CPU is a device for performing billions of apparently trivial operations in sequences determined by whatever program is running: it fetches some data from memory, performs some operations on that data and then sends it back to memory; then fetches the next bit of data; and so on. Two decades ago some wizard had an idea for speeding up CPUs…

Read on

Meltdown and Spectre summarised

Lovely economical summary by the UK ICO’s Head of Technology Policy of the two vulnerabilities currently obsessing the CPU-design industry:

In essence, the vulnerabilities provide ways that an attacker could extract information from privileged memory locations that should be inaccessible and secure. The potential attacks are only limited by what is being stored in the privileged memory locations – depending on the specific circumstances an attacker could gain access to encryption keys, passwords for any service being run on the machine, or session cookies for active sessions within a browser. One variant of the attacks could allow for an administrative user in a guest virtual machine to read the host server’s kernel memory. This could include the memory assigned to other guest virtual machines.
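The “bounds check bypass” flavour of Spectre can be modelled in a few lines. This is a toy simulation of the idea, not an exploit: the simulated CPU speculates past a bounds check, the out-of-bounds read is architecturally squashed, but the touched value lingers in a simulated cache that the “attacker” then probes. All names and the misprediction model are illustrative.

```python
# Toy simulation of Spectre-style bounds check bypass. Real attacks use
# branch-predictor training and cache-timing measurement; here the cache
# is just a set, and 'or True' stands in for a mispredicted branch.

SECRET = b"hunter2"        # privileged data the attacker should not see
public_array = bytes(16)   # the only memory the victim means to expose

def victim_speculative_load(index, cache):
    """Victim code whose bounds check is speculatively bypassed."""
    combined = public_array + SECRET     # models memory laid out past the array
    if index < len(public_array) or True:  # 'or True' models misprediction
        value = combined[index]            # speculative out-of-bounds read
        cache.add(value)                   # cache side effect survives the squash

def attacker_recover(offset):
    """Recover one secret byte by probing which cache line is 'hot'."""
    cache = set()
    victim_speculative_load(len(public_array) + offset, cache)
    for byte in range(256):
        if byte in cache:
            return byte
```

The simulation makes the ICO’s point concrete: nothing in the victim’s *architectural* behaviour is wrong — the leak rides entirely on microarchitectural side effects, which is why software patches are so awkward.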

WannaCry? Not really

If you’re overwhelmed by all the good, bad and simply awful reporting of the WannaCry ‘ransomware’ attack, here are links to two sane and well-informed pieces.

  • Ross Anderson’s post on Light Blue Touchpaper — “Bad Malware, Worse Reporting”.
  • Ben Thompson’s long and thoughtful post on his Stratechery blog — “WannaCry About Business Models”.

Also…

The Economist had a useful briefing a while back on the general topic of our chronic insecurity — “Computer security is broken from top to bottom”.

And of course it goes without saying that this whole debacle provides a salutary confirmation of the foolishness of demanding that there should be ‘backdoors’ in encryption ‘for government use only’. WannaCry was turbocharged by exploit code written by the NSA, which knew about the Windows vulnerability but didn’t tell Microsoft. The moral: if the government knows about a vulnerability, then other people will too. And some of them will be even more unscrupulous.