Ethics 101 for Facebook’s geeks

“Ask yourself whether your technology persuades users to do something you wouldn’t want to be persuaded to do yourself.”

“Toward an Ethics of Persuasive Technology” by Daniel Berdichevsky and Erik Neuenschwander, Communications of the ACM, Vol. 42, No. 5 (May 1999), pp. 51–58. DOI: 10.1145/301353.301410

Macron on AI: he gets it

Very interesting interview given by President Macron to Wired Editor Nicholas Thompson. Here’s a key excerpt:

AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences. For instance, if you take healthcare: you can totally transform medical care making it much more predictive and personalized if you get access to a lot of data. We will open our data in France. I made this decision and announced it this afternoon. But the day you start dealing with privacy issues, the day you open this data and unveil personal information, you open a Pandora’s Box, with potential use cases that will not be increasing the common good and improving the way to treat you. In particular, it’s creating a potential for all the players to select you. This can be a very profitable business model: this data can be used to better treat people, it can be used to monitor patients, but it can also be sold to an insurer that will have intelligence on you and your medical risks, and could get a lot of money out of this information. The day we start to make such business out of this data is when a huge opportunity becomes a huge risk. It could totally dismantle our national cohesion and the way we live together. This leads me to the conclusion that this huge technological revolution is in fact a political revolution.

When you look at artificial intelligence today, the two leaders are the US and China. In the US, it is entirely driven by the private sector, large corporations, and some startups dealing with them. All the choices they will make are private choices that deal with collective values. That’s exactly the problem you have with Facebook and Cambridge Analytica or autonomous driving. On the other side, Chinese players collect a lot of data driven by a government whose principles and values are not ours. And Europe has not exactly the same collective preferences as US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale.

Even after discounting the presidential hubris, this is an interesting and revealing interview. Macron is probably the only major democratic leader who seems to have a grasp of this stuff. And a civilising view of it. As here:

The key driver should not only be technological progress, but human progress. This is a huge issue. I do believe that Europe is a place where we are able to assert collective preferences and articulate them with universal values. I mean, Europe is the place where the DNA of democracy was shaped, and therefore I think Europe has to get to grips with what could become a big challenge for democracies.

And this:

At a point of time–but I think it will be a US problem, not a European problem–at a point of time, your [American – ed] government, your people, may say, “Wake up. They are too big.” Not just too big to fail, but too big to be governed. Which is brand new. So at this point, you may choose to dismantle. That’s what happened at the very beginning of the oil sector when you had these big giants. That’s a competition issue.

But second, I have a territorial issue due to the fact that they are totally digital players. They disrupt traditional economic sectors. In some ways, this might be fine because they can also provide new solutions. But we have to retrain our people. These companies will not pay for that; the government will. Today the GAFA [an acronym for Google, Apple, Facebook, and Amazon] don’t pay all the taxes they should in Europe. So they don’t contribute to dealing with negative externalities they create. And they ask the sectors they disrupt to pay, because these guys, the old sectors pay VAT, corporate taxes and so on. That’s not sustainable.

Third, people should remain sovereign when it comes to privacy rules. France and Europe have their preferences in this regard. I want to protect privacy in this way or in that way. You don’t have the same rule in the US. And speaking about US players, how can I guarantee French people that US players will respect our regulation? So at a point of time, they will have to create actual legal bodies and incorporate it in Europe, being submitted to these rules. Which means in terms of processing information, organizing themselves, and so on, they will need, indeed, a much more European or national organization. Which in turn means that they will have to redesign themselves for a much more fragmented world. And that’s for sure because accountability and democracy happen at national or regional level but not at a global scale. If I don’t walk down this path, I cannot protect French citizens and guarantee their rights. If I don’t do that, I cannot guarantee French companies they are fairly treated. Because today, when I speak about GAFA, they are very much welcome: I want them to be part of my ecosystem, but they don’t play on the same level playing field as the other players in the digital or traditional economy. And I cannot in the long run guarantee my citizens that their collective preferences or my rules can be totally implemented by these players because you don’t have the same regulation on the US side. All I know is that if I don’t, at a point of time, have this discussion and regulate them, I put myself in a situation not to be sovereign anymore.

Lots more in that vein. Well worth reading in full.

Will the GDPR make blockchains illegal in Europe?

Well, well. This is something I hadn’t anticipated:

Under the European Union’s General Data Protection Regulation, companies will be required to completely erase the personal data of any citizen who requests that they do so. For businesses that use blockchain, specifically applications with publicly available data trails such as Bitcoin and Ethereum, truly purging that information could be impossible. “Some blockchains, as currently designed, are incompatible with the GDPR,” says Michèle Finck, a lecturer in EU law at the University of Oxford. EU regulators, she says, will need to decide whether the technology must be barred from the region or reconfigure the new rules to permit an uneasy coexistence.
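To see why “truly purging” can be impossible by design, here is a minimal Python sketch (a toy hash-linked chain, not any real blockchain implementation) of the property at stake: each block commits to the hash of its predecessor, so editing or erasing one record invalidates every block that follows, and honest nodes would reject the rewritten history.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, payload: str) -> dict:
    return {"prev_hash": prev_hash, "payload": payload}

def chain_is_valid(chain: list) -> bool:
    """Every block must reference the hash of its predecessor."""
    return all(
        curr["prev_hash"] == block_hash(prev)
        for prev, curr in zip(chain, chain[1:])
    )

# Build a three-block chain; the middle block holds personal data.
genesis = make_block("0" * 64, "genesis")
b1 = make_block(block_hash(genesis), "record identifying a data subject")
b2 = make_block(block_hash(b1), "unrelated later transaction")
chain = [genesis, b1, b2]
print(chain_is_valid(chain))   # True

# A GDPR-style erasure request: redact the personal record in place.
b1["payload"] = "[erased]"
print(chain_is_valid(chain))   # False: b2's back-reference no longer matches
```

The mitigations being floated in this debate, such as keeping personal data off-chain and anchoring only hashes, or destroying decryption keys so a record becomes unreadable, are partial at best, which is exactly the uneasy coexistence Finck describes.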

The ethics of working for surveillance capitalists

This morning’s Observer column:

In a modest way, Kosinski, Stillwell and Graepel are the contemporary equivalents of [Leo] Szilard and the theoretical physicists of the 1930s who were trying to understand subatomic behaviour. But whereas the physicists’ ideas revealed a way to blow up the planet, the Cambridge researchers had inadvertently discovered a way to blow up democracy.

Which makes one wonder about the programmers – or software engineers, to give them their posh title – who write the manipulative algorithms that determine what Facebook users see in their news feeds, or the “autocomplete” suggestions that Google searchers see as they begin to type, not to mention the extremist videos that are “recommended” after you’ve watched something on YouTube. At least the engineers who built the first atomic bombs were racing against the terrible possibility that Hitler would get there before them. But for what are the software wizards at Facebook or Google working 70-hour weeks? Do they genuinely believe they are making the world a better place? And does the hypocrisy of the business model of their employers bother them at all?

These thoughts were sparked by reading a remarkable essay by Yonatan Zunger in the Boston Globe, arguing that the Cambridge Analytica scandal suggests that computer science now faces an ethical reckoning analogous to those that other academic fields have had to confront…

Read on

How Facebook thinks

Revealing leak of an internal memo by one of the company’s senior executives, sent on June 18, 2016. Here’s an excerpt:

We connect people.

That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide.

So we connect more people.

That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.

And still we connect people.

The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.

That isn’t something we are doing for ourselves. Or for our stock price (ha!). It is literally just what we do. We connect people. Period.

That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.

The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don’t win. The ones everyone use win.

Says it all, really. Worth reading in full. Needless to say, Zuck ‘disagrees’ with it. Which brings the wonderful Mandy Rice-Davies to mind.

Will computer science have an ethical reckoning?

I came across an interesting series of tweets by Yonatan Zunger (@yonatanzunger), triggered by reading the revelations in the Observer and New York Times about the way stolen Facebook data had been weaponised for use in the US presidential election by a rogue data-scientist and an unscrupulous company. A key element in the story was a wide-ranging and astonishingly frank interview given to Observer journalist Carole Cadwalladr by Chris Wylie, the programmer who wrote the code used to exploit the data. I’ve assembled Zunger’s tweetstream in chronological order:

I didn’t come up in computer science; I used to be a physicist. That transition gives me a rather specific perspective on this situation: that computer science is a field which hasn’t yet encountered consequences.

Chemistry had two reckonings, in the late 19th and early 20th centuries: first with dynamite, and then with chemical weapons. Physics had its reckoning with the Bomb. These events completely changed the fields, and the way people come up in them.

Before then, both fields were dominated by hope: the ways that science could be used to make the world a fundamentally better place. New dyes, new materials, new sources of energy, new modes of transport; everyone could see the beauty.

Afterwards, everyone became painfully, continuously aware of how things could be turned against everything they ever dreamed of.

I don’t know the stories from chemistry as well. In physics, I can tell you that everyone, from their first days as an undergrad (or often before), encounters this and wrestles with it. They talk about it in the halls or late at night, they worry about it.

They occasionally even rap about it, like @acapellascience (a physicist, btw) did. (The lyrics are worth listening to carefully)

This isn’t to say that physicists are all pacifists. The rift between Edward Teller and J. R. Oppenheimer after the war was legendary, and both of them had very real reasons to believe what they did: Teller to love the Bomb, Oppenheimer to hate it.

(For those wondering: Teller was part of that generation of Central Europeans who saw exactly how bad things could get in so much detail. They were famous for their determination to make sure things were safe at all goddamned costs.)

They were infamously not messing around, even though they took a wide range of approaches to it; consider that Edward Teller, John von Neumann, Henry Kissinger, and George Soros were all part of that.

For a long time, it frightened me that biology hadn’t yet had this moment of reckoning — that there hadn’t yet been an incident which seared the importance of ethics and consequences into the hearts of every young scientist. Today, it frightens me more about computer scientists.

Young engineers treat ethics as a speciality, something you don’t really need to worry about; you just need to learn to code, change the world, disrupt something. They’re like kids in a toy shop full of loaded AK-47’s.

The hard lesson which other fields had to learn was this: you can never ignore that for a minute. You can never stop thinking about the uses your work might be put to, the consequences which might follow, because the worst case is so much worse than you can imagine.

Even what Chris Wylie did is only the beginning. You hand authoritarian regimes access to modern data science, and what happens? You create the tools of a real panopticon, and what happens?

Those of you in CS right now: if you don’t know if what I’m saying makes sense, pick up Richard Rhodes’ “The Making of the Atomic Bomb.” It’s an amazingly good book in its own right, and you’ll get to know both the people and what happened.

Think about this problem like SRE’s, like safety engineers. Scope your failure modes out to things involving bad actors using the systems you’re building. Come up with your disaster response exercises.

If you can do it without wanting to hide under a table, you’re not thinking hard enough. There are worse failure modes, and they’re coming for you. And you will be on deck to try to mitigate them. //

Short postscript: As several people have pointed out, many fields of biology have had these reckonings (thanks to eugenics and the like), and civil engineering did as well, with things like bridge collapses in the late 19th century.

LATER: Zunger wrote all this more elegantly in the Boston Globe.

Quote of the day

“Institutions take credit for producing people they have not managed to suppress.”

Leslie Stephen (father of Virginia Woolf and first Editor of the Dictionary of National Biography)

What can be done about the downsides of the app economy?

Snippet from an interesting interview with Daphne Keller, Director of Intermediary Liability at the Stanford Center for Internet and Society:

So how did Facebook user data get to Cambridge Analytica (CA)?

What happened here was a breach of the developer’s agreement with FB — not some kind of security breach or hacking. GSR [Global Science Research] did more with the data than the TOS permitted—both in terms of keeping it around and in terms of sharing it with CA. We have no way of knowing whether other developers did the same thing. FB presumably doesn’t know either, but they do (per reporting) have audit rights in their developer agreements, so they, more than anyone, could have identified the problem sooner. And the overall privacy design of FB apps has been an open invitation for developments like this from the beginning. This is a story about an ecosystem full of privacy risk, and the inevitable abuse that resulted. It’s not about a security breach.

Is this a widespread problem among app developers?

Before we rush to easy answers, there is a big picture here that will take a long time to sort through. The whole app economy, including Android and iPhone apps, depends on data sharing. That’s what makes many apps work—from constellation mapping apps that use your location, to chat apps that need your friends’ contact information. Ideally app developers will collect only the data they actually need—they should not get a data firehose. Platforms should have policies to this effect and should give users granular controls over data sharing.

User control is important in part because platform control can have real downsides. Different platforms take more or less aggressive stances in controlling apps. The more controlling a platform is, the more it acts as a chokepoint, preventing users from finding or using particular apps. That has competitive consequences (what if Android’s store didn’t offer non-Google maps apps?). It also has consequences for information access and censorship, as we have seen with Apple removing the NYT app and VPN apps from the app store in China.

For my personal policy preferences, and probably for most people’s, we would have wanted FB to be much more controlling, in terms of denying access to these broad swathes of information. At the same time, the rule can’t be that platforms can’t support apps or share data unless the platform takes full legal responsibility for what the app does. Then we’d have few apps, and incumbent powerful platforms would hold even more power. So, there is a long, complicated policy discussion to be had here. It’s frustrating that we didn’t start it years ago when these apps launched, but hopefully at least we will have it now.
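Keller’s point about “granular controls” is worth making concrete. Below is a minimal, hypothetical Python sketch of least-privilege data sharing; the app names, scopes, and functions are all invented for illustration and bear no relation to any real platform’s API:

```python
# All scope names, apps, and functions below are invented for illustration.

USER_DATA = {
    "location": (52.2, 0.12),
    "contacts": ["alice", "bob"],
    "likes": ["astronomy", "chess"],
}

# Each app declares, at registration time, the scopes it needs and no more.
APP_REGISTRY = {
    "star-mapper": {"location"},   # a constellation app needs location only
    "chat-helper": {"contacts"},   # a chat app needs contacts only
}

def grant(grants: dict, app: str, scope: str) -> None:
    """Record that the user has explicitly granted one scope to one app."""
    grants.setdefault(app, set()).add(scope)

def fetch(app: str, scope: str, grants: dict):
    """Release a field only if the app declared the scope AND the user
    granted it. There is deliberately no 'give me everything' call."""
    if scope not in APP_REGISTRY.get(app, set()):
        raise PermissionError(f"{app} never declared scope '{scope}'")
    if scope not in grants.get(app, set()):
        raise PermissionError(f"user has not granted '{scope}' to {app}")
    return USER_DATA[scope]

grants = {}
grant(grants, "star-mapper", "location")
print(fetch("star-mapper", "location", grants))   # allowed: (52.2, 0.12)

try:
    fetch("star-mapper", "contacts", grants)      # undeclared scope: denied
except PermissionError as err:
    print(err)
```

The design point is that there is no firehose endpoint: the platform mediates every field against a declared scope and an explicit user grant, which is roughly the control the early Facebook Graph API lacked when an app authorised by one user could also pull data about that user’s friends.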

To understand Trump, read Plato’s ‘Republic’

It’s as clear as day that Trump is getting ready to fire Robert Mueller, the Special Counsel who is remorselessly closing in on him. Liberals who gleefully believe that this will be the same kind of disastrous mistake that Richard Nixon made when he fired Archibald Cox are in for a sharp disappointment. Nixon’s action made impeachment a certainty (which is why he resigned before he was drummed out of office by an irate Congress). But the current Republican-controlled Congress will do nothing when Mueller is fired. Which means that Trump has to do it before the mid-term elections in November.

Trump’s critics see the remarkable level of personnel ‘churn’ in the White House as evidence of the president’s dysfunctional impulsiveness and narcissism, which make it impossible for him to get anything done. I don’t share that comforting thought. There is, I’m afraid, method in his madness.

Andrew Sullivan, the veteran observer of these things, thinks so too. Since Trump doesn’t read (indeed may have difficulty reading), he obviously hasn’t read Plato’s Republic, but Sullivan has, and he sees it as an operating manual for Trumpism because it describes “how a late-stage democracy, dripping with decadence and corruption, with elites dedicated primarily to enriching themselves, and a people well past any kind of civic virtue, morphs so easily into tyranny.”

When Plato’s tyrant first comes to power — on a wave of populist hatred of the existing elites — there is a period of relative calm when he just gives away stuff: at first he promises much “in private and public, and grant[s] freedom from debts and distribute[s] land to the people and those around himself” (or, say, a trillion-dollar unfunded tax cut). He aims to please. But then, as he accustoms himself to power, and feels more comfortable, “he suspects certain men of having free thoughts and not putting up with his ruling … Some of those who helped in setting him up and are in power — the manliest among them — speak frankly to him and to one another, criticizing what is happening … Then the tyrant must gradually do away with all of them, if he’s going to rule, until he has left neither friend nor enemy of any worth whatsoever.”

This is the second phase of tyranny, after the more benign settling-in: the purge. Any constraints that had been in place to moderate the tyrant’s whims are set aside; no advice that counters his own gut impulses can be tolerated. And so, over the last couple of weeks, we have seen the president fire Rex Tillerson and Andrew McCabe, two individuals who simply couldn’t capitulate to the demand that they obey only Trump, rather than the country as well.

And because of this small gesture of defiance, they deserved especial public humiliation. Tillerson was warned of his impending doom while on the toilet — a nice, sadistic touch. McCabe was fired hours before his retirement, a public execution also fraught with venom. What kind of man is this? We have become numb to it, but we should never forget how our president is a man who revels in his own cruelty. Revenge is not a dish best served cold for him. It’s the reddest and rawest of meats.

On this reading, the firing of Tillerson and his replacement by Pompeo (“by a fawning toady, Mike Pompeo, a man whose hatred of Islam is only matched by his sympathy for waterboarders”) makes perfect sense. So too does the fact that Pompeo has been replaced in turn by Gina Haspel (“a war criminal, who authorized brutal torture and illegally destroyed the evidence”).

And then there’s the replacement of H.R. McMaster by John Bolton, a nutter who has never seen a war that he didn’t like. And this too, says Sullivan, follows Plato’s playbook:

And this, of course, is also part of the second phase for Plato’s tyrant: war. “As his first step, he is always setting some war in motion, so that people will be in need of a leader,” Plato explains. In fact, “it’s necessary for a tyrant always to be stirring up war.”

This is simultaneously scary and persuasive because it suggests that we are indeed heading for war: first with North Korea, and then with Iran. Those mid-term elections have never been more important.