Facebook is just the tip of the iceberg

This morning’s Observer column:

If a picture is worth a thousand words, then a good metaphor must be worth a million. In an insightful blog post published on 23 March, Doc Searls, one of the elder statesmen of the web, managed to get both for the price of one. His post was headed by one of those illustrations of an iceberg showing that only the tip is visible, while the great bulk of the object lies underwater. In this case, the tip was adorned with the Facebook logo while the submerged mass represented “Every other website making money from tracking-based advertising”. The moral: “Facebook’s Cambridge Analytica problems are nothing compared to what’s coming for all of online publishing.”

The proximate cause of Searls’s essay was encountering a New York Times op-ed piece entitled Facebook’s Surveillance Machine by Zeynep Tufekci. It wasn’t the (unexceptional) content of the article that interested Searls, however, but what his ad-blocking software told him about the Times page in which the essay appeared. The software had detected no fewer than 13 hidden trackers on the page. (I’ve just checked and my Ghostery plug-in has detected 19.)

Read on

Why Zuckerberg is safe

From Nils Pratley in the Guardian:

Facebook’s board has heard the calls for the appointment of an independent chair, from New York City’s pension fund for example, and decided to ignore them.

In doing so, the board seems to have accepted Zuckerberg’s bizarrely loose version of accountability. Allowing the data of up to 87 million people to be “inappropriately shared” with Cambridge Analytica was “my responsibility”, he said in answer to a later question. It was also a “huge mistake” not to focus on abuse of data more generally. But, hey, “life is about learning from the mistakes and figuring out what you need to do to move forward”.

This breezy I-promise-to-do-better mantra would be understandable if offered by a school child who had fluffed an exam. But Zuckerberg is running the world’s eighth largest company and $50bn has just been removed from its stock market value in a scandal that, aside from raising deep questions about personal privacy and social media’s influence on democracy, may provoke a regulatory backlash.

In these circumstances, why wouldn’t a board ask whether it has the right governance structure? The motivation would be self-interest. First, there is a need to ensure that the company isn’t run entirely at the whim of a chief executive who is plainly a technological whizz but admits he failed to grasp Facebook’s responsibilities as the number of users exploded to 2 billion. Second, outsiders, including users, advertisers and politicians, want reassurance that Facebook has basic checks and balances in its boardroom.

The lack of interest in governance reform is explained, of course, by the fact that Zuckerberg has a stranglehold over Facebook’s voting shares. His economic interest is 16% but he has 60% of the votes and thus, for practical purposes, can’t easily be shifted from either of his roles…

QED.

This is the flip side of the determination of some tech founders to insulate themselves from the quarterly whims of Wall Street. The Google boys have the same arrangement. Given the malign short-termism of Wall St and the doctrine of maximising shareholder value, this might have seemed sensible or even enlightened at one time. Now it looks like bad corporate governance.

Trump vs Amazon

From The National Review:

Even the rich are underdogs against the government.

Libertarians can sometimes sound like Chicken Little screaming that the sky is falling whenever the government does anything. But President Donald Trump’s battle against Amazon CEO Jeff Bezos shows that they have a point.

Trump is furious with the Washington Post, which is owned by Bezos and has been highly critical of the president, and he wants to punish Bezos by going after Amazon. Trump is reportedly considering raising Amazon’s shipping costs through the U.S. Postal Service, canceling a pending Amazon contract with the Pentagon, pushing red states to investigate Amazon, and generally using antitrust and tax policy to punish Amazon.

The balance of power between Trump and Bezos shows that even if you’re skeptical of libertarians, you shouldn’t dismiss them altogether: Government power can be very dangerous.

Some point out that Jeff Bezos is the richest person in the world. But Bezos’s wealth does not exist in a vacuum; it exists because the government respects his property rights. It’s more relevant to compare Bezos’s power with that of the U.S. government, which Trump has at his disposal. No matter how rich he is, Bezos will always be the underdog.

Other countries show that rich people are no match for the government. In Russia, Vladimir Putin effectively eliminated the oligarchs who did not support him — seizing their wealth and driving them into hiding. In the process, he took control of the economy and the media through oligarchs who supported him. In China, President Xi Jinping used an anti-corruption campaign to drive out his enemies, which cleared the way to his becoming president for life. Just recently, Saudi Arabia’s Crown Prince Mohammed bin Salman imprisoned rich people in a hotel and reportedly had some of them tortured until he could extract their loyalty or their wealth. In all three countries, the story is the same: The only people who are rich and powerful are the people whom the government allows to be rich and powerful.

In America, everyone is supposed to be equal before the law. That is why Trump’s grudge against Bezos is so dangerous. When the government goes after political opponents, it undermines the rule of law.

Not that Trump is much interested in the rule of law.

“The business model of the Internet is surveillance” contd.

This useful graphic comes from a wonderful post by the redoubtable Doc Searls about the ultimate unsustainability of the business model currently dominating the Web. He starts with a quote from “Facebook’s Surveillance Machine” — a NYT OpEd column by the equally redoubtable Zeynep Tufekci:

“Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.”

Doc then points out the irony of his Privacy Badger software detecting 13 hidden trackers on the NYT page on which Zeynep’s column appears. (I’ve just checked and Ghostery currently detects 19 trackers on it.)

The point, Doc goes on to say, is that the Times is just doing what every other publication that lives off adtech does: tracking-based advertising. “These publications”,

don’t just open the kimonos of their readers. They bring people’s bare digital necks to vampires ravenous for the blood of personal data, all for the purpose of returning “interest-based” advertising to those same people.

With no control by readers (beyond tracking protection which relatively few know how to use, and for which there is no one approach or experience), and damn little care or control by the publishers who bare those readers’ necks, who knows what the hell actually happens to the data? No one entity, that’s for sure.

Doc points out that at reputable outfits like the New York Times, writers like Zeynep have nothing to do with this endemic tracking. In such publications there probably is a functioning “Chinese Wall” between editorial and advertising. Just to drive the point home he looks at Sue Halpern’s piece in the sainted New Yorker on “Cambridge Analytica, Facebook and the Revelations of Open Secrets”, where his RedMorph software finds 16 third-party trackers. (On my browser, Ghostery found 18.) The moral is, in a way, obvious: it’s a confirmation of Bruce Schneier’s original observation that “surveillance is the business model of the Internet”. Being a pedant, I would have said “of the Web”, but since many people can’t distinguish between the two, we’ll let Bruce’s formulation stand.

Ethics 101 for Facebook’s geeks

“Ask yourself whether your technology persuades users to do something you wouldn’t want to be persuaded to do yourself.”

“Toward an Ethics of Persuasive Technology”, by Daniel Berdichevsky and Erik Neuenschwander, Communications of the ACM, Vol. 42, No. 5 (May 1999), pp. 51–58. DOI: 10.1145/301353.301410

Macron on AI: he gets it

Very interesting interview given by President Macron to Wired Editor Nicholas Thompson. Here’s a key excerpt:

AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences. For instance, if you take healthcare: you can totally transform medical care making it much more predictive and personalized if you get access to a lot of data. We will open our data in France. I made this decision and announced it this afternoon. But the day you start dealing with privacy issues, the day you open this data and unveil personal information, you open a Pandora’s Box, with potential use cases that will not be increasing the common good and improving the way to treat you. In particular, it’s creating a potential for all the players to select you. This can be a very profitable business model: this data can be used to better treat people, it can be used to monitor patients, but it can also be sold to an insurer that will have intelligence on you and your medical risks, and could get a lot of money out of this information. The day we start to make such business out of this data is when a huge opportunity becomes a huge risk. It could totally dismantle our national cohesion and the way we live together. This leads me to the conclusion that this huge technological revolution is in fact a political revolution.

When you look at artificial intelligence today, the two leaders are the US and China. In the US, it is entirely driven by the private sector, large corporations, and some startups dealing with them. All the choices they will make are private choices that deal with collective values. That’s exactly the problem you have with Facebook and Cambridge Analytica or autonomous driving. On the other side, Chinese players collect a lot of data driven by a government whose principles and values are not ours. And Europe has not exactly the same collective preferences as US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale.

Even after discounting the presidential hubris, this is an interesting and revealing interview. Macron is probably the only major democratic leader who seems to have a grasp of this stuff. And a civilising view of it. As here:

The key driver should not only be technological progress, but human progress. This is a huge issue. I do believe that Europe is a place where we are able to assert collective preferences and articulate them with universal values. I mean, Europe is the place where the DNA of democracy was shaped, and therefore I think Europe has to get to grips with what could become a big challenge for democracies.

And this:

At a point of time–but I think it will be a US problem, not a European problem–at a point of time, your [American – ed] government, your people, may say, “Wake up. They are too big.” Not just too big to fail, but too big to be governed. Which is brand new. So at this point, you may choose to dismantle. That’s what happened at the very beginning of the oil sector when you had these big giants. That’s a competition issue.

But second, I have a territorial issue due to the fact that they are totally digital players. They disrupt traditional economic sectors. In some ways, this might be fine because they can also provide new solutions. But we have to retrain our people. These companies will not pay for that; the government will. Today the GAFA [an acronym for Google, Apple, Facebook, and Amazon] don’t pay all the taxes they should in Europe. So they don’t contribute to dealing with negative externalities they create. And they ask the sectors they disrupt to pay, because these guys, the old sectors pay VAT, corporate taxes and so on. That’s not sustainable.

Third, people should remain sovereign when it comes to privacy rules. France and Europe have their preferences in this regard. I want to protect privacy in this way or in that way. You don’t have the same rule in the US. And speaking about US players, how can I guarantee French people that US players will respect our regulation? So at a point of time, they will have to create actual legal bodies and incorporate it in Europe, being submitted to these rules. Which means in terms of processing information, organizing themselves, and so on, they will need, indeed, a much more European or national organization. Which in turn means that they will have to redesign themselves for a much more fragmented world. And that’s for sure because accountability and democracy happen at national or regional level but not at a global scale. If I don’t walk down this path, I cannot protect French citizens and guarantee their rights. If I don’t do that, I cannot guarantee French companies they are fairly treated. Because today, when I speak about GAFA, they are very much welcome; I want them to be part of my ecosystem, but they don’t play on the same level-playing field as the other players in the digital or traditional economy. And I cannot in the long run guarantee my citizens that their collective preferences or my rules can be totally implemented by these players because you don’t have the same regulation on the US side. All I know is that if I don’t, at a point of time, have this discussion and regulate them, I put myself in a situation not to be sovereign anymore.

Lots more in that vein. Well worth reading in full.

Will the GDPR make blockchains illegal in Europe?

Well, well. This is something I hadn’t anticipated:

Under the European Union’s General Data Protection Regulation, companies will be required to completely erase the personal data of any citizen who requests that they do so. For businesses that use blockchain, specifically applications with publicly available data trails such as Bitcoin and Ethereum, truly purging that information could be impossible. “Some blockchains, as currently designed, are incompatible with the GDPR,” says Michèle Finck, a lecturer in EU law at the University of Oxford. EU regulators, she says, will need to decide whether the technology must be barred from the region or reconfigure the new rules to permit an uneasy coexistence.
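The incompatibility is structural. In a blockchain, each block’s hash covers both its own contents and the previous block’s hash, so deleting one record invalidates every block that follows it. Here is a minimal Python sketch (a toy hash chain, not any real blockchain implementation, with invented record fields) of why a GDPR erasure request cannot simply be honoured:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Append each record to a chain, linking it to its predecessor's hash."""
    chain, prev = [], GENESIS
    for record in records:
        h = block_hash(record, prev)
        chain.append({"record": record, "prev": prev, "hash": h})
        prev = h
    return chain

def chain_valid(chain) -> bool:
    """Re-derive every hash; any tampered block breaks the link."""
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain([
    {"tx": 1, "user": "alice"},
    {"tx": 2, "user": "bob"},    # suppose Bob later invokes his right to erasure
    {"tx": 3, "user": "carol"},
])
print(chain_valid(chain))   # True

# "Erasing" Bob's personal data changes his block's hash, so every
# subsequent block's "prev" link no longer matches.
chain[1]["record"] = {"tx": 2, "user": "[erased]"}
print(chain_valid(chain))   # False
```

Honouring the erasure would require rewriting every later block, which on a public chain like Bitcoin or Ethereum would mean getting the whole network to accept a rewritten history — exactly what the design exists to prevent.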

The ethics of working for surveillance capitalists

This morning’s Observer column:

In a modest way, Kosinski, Stillwell and Graepel are the contemporary equivalents of [Leo] Szilard and the theoretical physicists of the 1930s who were trying to understand subatomic behaviour. But whereas the physicists’ ideas revealed a way to blow up the planet, the Cambridge researchers had inadvertently discovered a way to blow up democracy.

Which makes one wonder about the programmers – or software engineers, to give them their posh title – who write the manipulative algorithms that determine what Facebook users see in their news feeds, or the “autocomplete” suggestions that Google searchers see as they begin to type, not to mention the extremist videos that are “recommended” after you’ve watched something on YouTube. At least the engineers who built the first atomic bombs were racing against the terrible possibility that Hitler would get there before them. But for what are the software wizards at Facebook or Google working 70-hour weeks? Do they genuinely believe they are making the world a better place? And does the hypocrisy of the business model of their employers bother them at all?

These thoughts were sparked by reading a remarkable essay by Yonatan Zunger in the Boston Globe, arguing that the Cambridge Analytica scandal suggests that computer science now faces an ethical reckoning analogous to those that other academic fields have had to confront…

Read on