Will computer science have an ethical reckoning?

I came across an interesting series of tweets by Yonatan Zunger (@yonatanzunger), triggered by reading the revelations in the Observer and New York Times about the way stolen Facebook data had been weaponised for use in the US presidential election by a rogue data scientist and an unscrupulous company. A key element in the story was a wide-ranging and astonishingly frank interview given to Observer journalist Carole Cadwalladr by Chris Wylie, the programmer who wrote the code used to exploit the data. I’ve assembled Zunger’s tweetstream in chronological order:

I didn’t come up in computer science; I used to be a physicist. That transition gives me a rather specific perspective on this situation: that computer science is a field which hasn’t yet encountered consequences.

Chemistry had two reckonings, in the late 19th and early 20th centuries: first with dynamite, and then with chemical weapons. Physics had its reckoning with the Bomb. These events completely changed the fields, and the way people come up in them.

Before then, both fields were dominated by hope: the ways that science could be used to make the world a fundamentally better place. New dyes, new materials, new sources of energy, new modes of transport; everyone could see the beauty.

Afterwards, everyone became painfully, continuously aware of how things could be turned against everything they ever dreamed of.

I don’t know the stories from chemistry as well. In physics, I can tell you that everyone, from their first days as an undergrad (or often before), encounters this and wrestles with it. They talk about it in the halls or late at night, they worry about it.

They occasionally even rap about it, like @acapellascience (a physicist, btw) did. (The lyrics are worth listening to carefully.)

This isn’t to say that physicists are all pacifists. The rift between Edward Teller and J. Robert Oppenheimer after the war was legendary, and both of them had very real reasons to believe what they did: Teller to love the Bomb, Oppenheimer to hate it.

(For those wondering: Teller was part of that generation of Central Europeans who saw exactly how bad things could get, in so much detail. They were famous for their determination to make sure things were safe at all goddamned costs.)

(They were infamously not messing around, even though they took a wide range of approaches to it; consider that Edward Teller, John von Neumann, Henry Kissinger, and George Soros were all part of that generation.)

For a long time, it frightened me that biology hadn’t yet had this moment of reckoning — that there hadn’t yet been an incident which seared the importance of ethics and consequences into the hearts of every young scientist. Today, it frightens me more about computer scientists.

Young engineers treat ethics as a speciality, something you don’t really need to worry about; you just need to learn to code, change the world, disrupt something. They’re like kids in a toy shop full of loaded AK-47s.

The hard lesson which other fields had to learn was this: you can never ignore that for a minute. You can never stop thinking about the uses your work might be put to, the consequences which might follow, because the worst case is so much worse than you can imagine.

Even what Chris Wylie did is only the beginning. You hand authoritarian regimes access to modern data science, and what happens? You create the tools of a real panopticon, and what happens?

Those of you in CS right now: if you don’t know if what I’m saying makes sense, pick up Richard Rhodes’ “The Making of the Atomic Bomb.” It’s an amazingly good book in its own right, and you’ll get to know both the people and what happened.

Think about this problem like SREs, like safety engineers. Scope your failure modes out to things involving bad actors using the systems you’re building. Come up with your disaster response exercises.

If you can do it without wanting to hide under a table, you’re not thinking hard enough. There are worse failure modes, and they’re coming for you. And you will be on deck to try to mitigate them.

Short postscript: As several people have pointed out, many fields of biology have had these reckonings (thanks to eugenics and the like), and civil engineering did as well, with things like bridge collapses in the late 19th century.

LATER: Zunger wrote all this more elegantly in the Boston Globe.

Why you can’t believe what you see (or hear)

This morning’s Observer column:

When John F Kennedy was assassinated in Dallas on 22 November 1963, he was on his way to deliver a speech to the assembled worthies of the city. A copy of his script for the ill-fated oration was later presented by Lyndon Johnson to Stanley Marcus, head of the department store chain Neiman Marcus, whose daughter was in the expectant audience that day.

The text has long been available on the internet and it makes for poignant reading, not just because of what happened at Dealey Plaza that day, but because large chunks of it look eerily prescient in the age of Trump. JFK was a terrific public speaker who employed superb speechwriters (especially Theodore Sorensen). His speeches were invariably elegant and memorable: he had a great eye for a good phrase, and his delivery was usually faultless. So his audience in Dallas knew that they were in for a treat – until Lee Harvey Oswald terminated the dream.

Last week, 55 years on, we finally got to hear what Kennedy’s audience might have heard…

Read on

Interesting uses for blockchain technology #15610

From a lovely rant by Paul Ford:

The blockchain can be a form of media. The writer Maria Bustillos is starting a magazine that will publish on the blockchain — which means it will be impossible to take down. (Disclosure: In theory, I’ll write for Maria, who’s a friend, and she’ll pay me in cryptocurrency, or what she calls “space jewels.”) One of her aims is to make it impossible for people—Peter Thiel, for example, who backed Hulk Hogan’s lawsuit against Gawker—to threaten publications they dislike.

You could even make a distributed magazine called Information of Vital Public Interest About Peter Thiel that would be awfully hard to sue into oblivion. It’s the marketplace of ideas. Literally. Try another thought experiment. Remember that anonymously created list of men who worked in media and who were alleged sexual harassers? You could, by whispering the allegations from one wallet to the next, put that information on a blockchain. You could make a web browser plug-in so that whenever someone visited a sexual harasser’s LinkedIn page, that page could glow bright red. You could have a distributed, immutable record of sexual harassment allegations on the internet. (Is there an economy around such allegations? Well, people do pay for gossip. GossipCoin?)
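To make Ford’s thought experiment a little more concrete, here is a minimal sketch, in Python, of the kind of tamper-evident, append-only record he is imagining. It uses a bare hash chain rather than any real blockchain platform, and the AllegationChain class and its field names are purely illustrative inventions, not anyone’s actual system.

```python
import hashlib
import json
import time


def _hash_block(block: dict) -> str:
    """Deterministically hash a block's contents (sorted keys for stability)."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


class AllegationChain:
    """A toy append-only ledger: each entry embeds the hash of its
    predecessor, so altering or deleting any earlier entry breaks
    every later link."""

    def __init__(self):
        self.blocks = []

    def append(self, record: str) -> dict:
        # Genesis entries point at a sentinel hash of all zeros.
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {
            "index": len(self.blocks),
            "timestamp": time.time(),
            "record": record,
            "prev_hash": prev_hash,
        }
        block["hash"] = _hash_block(block)  # hash over everything but "hash"
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every hash and back-link; any tampering is detectable."""
        for i, block in enumerate(self.blocks):
            body = {k: v for k, v in block.items() if k != "hash"}
            if block["hash"] != _hash_block(body):
                return False
            expected_prev = self.blocks[i - 1]["hash"] if i else "0" * 64
            if block["prev_hash"] != expected_prev:
                return False
        return True


chain = AllegationChain()
chain.append("entry one")
chain.append("entry two")
assert chain.verify()
chain.blocks[0]["record"] = "edited"   # quietly rewrite history...
assert not chain.verify()              # ...and verification fails
```

The design point is simply that each entry commits to the hash of its predecessor, so rewriting or deleting history invalidates every later entry. A real distributed deployment would add replication and consensus across many independent nodes on top of this, which is where the “hard to sue into oblivion” property would come from.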

Fixing the future?

My Observer review of Andrew Keen’s How to Fix the Future: Staying Human in the Digital Age:

Many years ago the cultural critic Neil Postman predicted that the future of humanity lay somewhere in the area between the dystopian nightmares of two English writers – George Orwell and Aldous Huxley. Orwell believed that we would be destroyed by the things we fear – surveillance and thought-control; Huxley thought that our undoing would be the things that delight us – that our rulers would twig that entertainment is more efficient than coercion as a means of social control.

Then we invented the internet, a technology that – it turned out – gave us both nightmares at once: comprehensive surveillance by states and corporations on the one hand; and, on the other, a strange kind of passive addiction to devices, apps and services which, like the drug soma in Huxley’s Brave New World, possess “all the advantages of Christianity and alcohol and none of their defects”.

The great irony, of course, is that not all of this was inevitable…

Read on

Managing the future that’s already here

This morning’s Observer column:

As the science fiction novelist William Gibson famously observed: “The future is already here – it’s just not very evenly distributed.” I wish people would pay more attention to that adage whenever the subject of artificial intelligence (AI) comes up. Public discourse about it invariably focuses on the threat (or promise, depending on your point of view) of “superintelligent” machines, ie ones that display human-level general intelligence, even though such devices have been 20 to 50 years away ever since we first started worrying about them. The likelihood (or mirage) of such machines remains a distant prospect, a point made by the leading AI researcher Andrew Ng, who said that he worries about superintelligence in the same way that he frets about overpopulation on Mars.

That seems about right to me…

Read on

Regulating the cloud

This morning’s Observer column:

Cloud computing is just a metaphor. It has its origins in the way network engineers in the late 1970s used to represent the internet as an amorphous entity when they were discussing what was happening with computers at a local level. They just drew the net as a cartoonish cloud to represent a fuzzy space in which certain kinds of taken-for-granted communication activities happened. But since clouds are wispy, insubstantial things that some people love, the fact that what went on in the computing cloud actually involved inscrutable, environmentally destructive and definitely non-fuzzy server farms owned by huge corporations led to suspicions that the metaphor was actually a cosy euphemism, formulated to obscure a more sinister reality…

Read on

Developer’s Remorse kicks in

At last! The Center for Humane Technology has launched.

From the NYT report:

Its first project to reform the industry will be to introduce a Ledger of Harms — a website aimed at guiding rank-and-file engineers who are concerned about what they are being asked to build. The site will include data on the health effects of different technologies and ways to make products that are healthier.

Jim Steyer, chief executive and founder of Common Sense, said the Truth About Tech campaign was modeled on antismoking drives and focused on children because of their vulnerability. That may sway tech chief executives to change, he said. Already, Apple’s chief executive, Timothy D. Cook, told The Guardian last month that he would not let his nephew on social media, while the Facebook investor Sean Parker also recently said of the social network that “God only knows what it’s doing to our children’s brains.”

Mr. Steyer said, “You see a degree of hypocrisy with all these guys in Silicon Valley.”

The new group also plans to begin lobbying for laws to curtail the power of big tech companies. It will initially focus on two pieces of legislation: a bill being introduced by Senator Edward J. Markey, Democrat of Massachusetts, that would commission research on technology’s impact on children’s health, and a bill in California by State Senator Bob Hertzberg, a Democrat, which would prohibit the use of digital bots without identification.

Bitcoin’s silver lining?

This morning’s Observer column:

The downside of the media feeding frenzy around bitcoin is the way it obscures the fact that the technology underpinning it, the blockchain, or the public distributed ledger – a database securely recording financial, physical or electronic assets for sharing across a network through transparent updates of information – is potentially very important. This is because it may have more useful applications than supporting speculative bubbles or money laundering. In 2016, for example, Mark Walport, the government’s chief scientific adviser, issued a report arguing that the technology “could transform the delivery of public services and boost productivity”.

Which indeed it could, but that would be small beer if the messages I’m picking up from across the tech world are accurate. For the real significance of blockchain technology might be its capacity to retool the internet itself to make it secure enough for modern use and return it to its decentralised essence, in the process possibly liberating it from the tech companies that currently have a stranglehold on it…

Read on