Cummings: long on ideas, short on strategy

My Observer OpEd piece about the world’s most senior technocrat:

When Dominic Cummings arrived in Downing Street, some of his new colleagues were puzzled by one of his mantras: “Get Brexit done, then Arpa”. Now, perhaps, they have some idea of what that meant. On 2 January, Cummings published on his blog the wackiest job proposals to emerge from a government since the emperor Caligula made his horse a consul.

The ad took the form of a long post under the heading “We’re hiring data scientists, project managers, policy experts, assorted weirdos…”, included a reading list of arcane academic papers that applicants were expected to read and digest and declared that applications from “super-talented weirdos” would be especially welcome. They should assemble a one-page letter, attach a CV and send it to ideasfornumber10@gmail.com. (Yes, that’s @gmail.com.)

It was clear that nobody from HR was involved in composing this call for clever young things. Alerting applicants to the riskiness of employment by him, Cummings wrote: “I’ll bin you within weeks if you don’t fit – don’t complain later because I made it clear now.”

The ad provoked predictable outrage and even the odd parody. The most interesting thing about it, though, is what it reveals about the preoccupations of the man who is now the world’s most senior technocrat. The “Arpa” in his mantra, for example, is classic Cummings, because the Pentagon’s Advanced Research Projects Agency (now Darpa) is one of his inspirational models…

Read on

Facebook’s strategic obfuscation

Facebook’s Carolyn Everson, vice president of global marketing solutions, was interviewed by Peter Kafka at the 2019 Code Media conference in Los Angeles yesterday. Vox had a nice report of the interview. This section is particularly interesting:

When pressed on Facebook’s refusal to fact-check political ads, Everson tried to defend the company’s stance by referencing the rules that govern how broadcasters must handle political advertisements. In the US, the Federal Communications Commission has extensive guidelines for television and radio broadcasters around political advertising that bar broadcasters from censoring ads or from taking down ones that make false claims. Those guidelines don’t apply to online platforms, including Facebook, but the company has consistently tried to hide behind them.

“We have no ability, legally, to tell a political candidate that they are not allowed to run their ad,” Everson said.

That’s complete baloney. Facebook is not bound by any regulations governing TV ads. It can shut down anyone or anything it likes or dislikes.

After the interview, a Facebook spokeswoman walked back the comments and said that Everson misspoke when she said Facebook was legally barred from refusing to run political ads.

An audience member also asked Everson why Facebook has decided to allow right-wing website Breitbart to be listed in its new News tab, which is ostensibly an indication that Breitbart offers trusted news, despite being a known source of propaganda. “We’re treating them as a news source; I wouldn’t use the term ‘trusted news,’” Everson said, pointing out that Facebook will also include “far-left” publications.

Which of course raises interesting questions about the standards by which Facebook determines the “integrity” of the news sources included in its tab, standards the company extolled when it launched the feature in October.

Kranzberg’s Law

As a critic of many of the ways that digital technology is currently being exploited by both corporations and governments, while also being a fervent believer in the positive affordances of the technology, I often find myself stuck in unproductive discussions in which I’m accused of being an incurable “pessimist”. I’m not: a better description would be “recovering utopian” or “worried optimist”.

Part of the problem is that the public discourse about this stuff tends to be Manichean: it lurches between evangelical enthusiasm and dystopian gloom. And eventually the discussion winds up with a consensus that “it all depends on how the technology is used” — which often leads to Melvin Kranzberg’s Six Laws of Technology — and particularly his First Law, which says that “Technology is neither good nor bad; nor is it neutral.” By which he meant that,

“technology’s interaction with the social ecology is such that technical developments frequently have environmental, social, and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.”

Many of the current discussions revolve around various manifestations of AI, which means machine learning plus Big Data. At the moment image recognition is the topic du jour. The enthusiastic refrain usually involves citing dramatic instances of the technology’s potential for social good. A paradigmatic example is the collaboration between Google’s DeepMind subsidiary and Moorfields Eye Hospital to use machine learning to greatly improve the speed of analysis of anonymized retinal scans and automatically flag ones which warrant specialist investigation. This is a good example of how to use the technology to improve the quality and speed of an important healthcare service. For tech evangelists it is an irrefutable argument for the beneficence of the technology.
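
To make the pattern concrete, here is a minimal, purely illustrative sketch of the “flag scans for specialist review” idea. Everything in it is an assumption for the sake of illustration: the saved model file, the preprocessing, and the threshold are all hypothetical, and it bears no relation to DeepMind’s actual system.

```python
# Illustrative sketch only: assumes a binary classifier has already been
# trained and saved as "referral_model.pt" (hypothetical filename).
import torch
from torchvision import transforms
from PIL import Image

# Basic preprocessing: resize the scan and convert it to a tensor.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = torch.load("referral_model.pt")  # hypothetical pre-trained model
model.eval()                             # inference mode

def flag_for_referral(scan_path: str, threshold: float = 0.5) -> bool:
    """Return True if a retinal scan warrants specialist investigation."""
    image = Image.open(scan_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)      # add a batch dimension
    with torch.no_grad():
        logit = model(batch)                    # assumes a single-logit head
        score = torch.sigmoid(logit).item()     # probability of pathology
    return score >= threshold

if __name__ == "__main__":
    if flag_for_referral("scan_0001.png"):      # hypothetical filename
        print("Flagged: refer to a specialist")
```

The point of the pattern is triage, not diagnosis: the model only decides which scans a human specialist sees first.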

On the other hand, critics often point to facial recognition as a powerful example of the perniciousness of machine-learning technology. One researcher has even likened it to plutonium. Criticisms tend to focus on its well-known weaknesses (false positives and racial or gender bias, for example), its hasty and ill-considered adoption by police forces and proprietors of shopping malls, the lack of effective legal regulation, and its use by authoritarian or totalitarian regimes, particularly China.

Yet it is likely that even facial recognition has socially beneficial applications. One dramatic illustration is a project by an Indian child labour activist, Bhuwan Ribhu, who works for the Indian NGO Bachpan Bachao Andolan. Fifteen months earlier, he had launched a pilot programme to match a police database containing photos of all of India’s missing children with another comprising shots of all the minors living in the country’s child care institutions.

The results were remarkable. “We were able to match 10,561 missing children with those living in institutions,” he told CNN. “They are currently in the process of being reunited with their families.” Most of them were victims of trafficking, forced to work in the fields, in garment factories or in brothels, according to Ribhu.

This was made possible by facial recognition technology provided by New Delhi’s police. “There are over 300,000 missing children in India and over 100,000 living in institutions,” he explained. “We couldn’t possibly have matched them all manually.”
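
For readers curious about the mechanics, here is a minimal sketch of how such database-against-database matching works in principle, using the open-source face_recognition Python library. The folder names and the matching threshold are illustrative assumptions; nothing here describes the actual system used by Delhi’s police.

```python
# Illustrative sketch of cross-matching two photo databases with face
# embeddings. Folder names are hypothetical; not the actual police system.
import os
import face_recognition  # open-source library built on dlib

def encode_folder(folder: str) -> dict:
    """Map each photo filename to a 128-dimensional face encoding."""
    encodings = {}
    for filename in os.listdir(folder):
        image = face_recognition.load_image_file(os.path.join(folder, filename))
        faces = face_recognition.face_encodings(image)
        if faces:                       # skip photos with no detectable face
            encodings[filename] = faces[0]
    return encodings

missing = encode_folder("missing_children_photos")   # hypothetical folder
in_care = encode_folder("institution_photos")        # hypothetical folder

names = list(missing)
known = [missing[n] for n in names]

for child, encoding in in_care.items():
    # Distance between embeddings: smaller means more similar faces.
    distances = face_recognition.face_distance(known, encoding)
    best = distances.argmin()
    if distances[best] < 0.6:           # the library's conventional threshold
        print(f"Possible match: {child} <-> {names[best]} "
              f"(distance {distances[best]:.2f})")
```

The crucial property is that each photo is compared against every candidate automatically, which is exactly why a task involving hundreds of thousands of images is feasible for software and hopeless for humans.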

This is clearly a good thing. But does it provide an overwhelming argument for India’s plan to construct one of the world’s largest facial-recognition systems with a unitary database accessible to police forces in 29 states and seven union territories?

I don’t think so. If one takes Kranzberg’s First Law seriously, then each proposed use of a powerful technology like this has to face serious scrutiny. The more important question to ask is the old Latin one: cui bono? Who benefits? And who benefits the most? And who loses? What possible unintended consequences could the deployment have? (Recognising that some will, by definition, be unforeseeable.) What are the business models of the corporations proposing to deploy it? And so on.

At the moment, however, what we mostly have are unasked questions, glib assurances and rash deployments.

How “Don’t Be Evil” panned out

My Observer review of Rana Foroohar’s new book about the tech giants and their implications for our world.

“Don’t be evil” was the mantra of the co-founders of Google, Sergey Brin and Larry Page, the graduate students who, in the late 1990s, had invented a groundbreaking way of searching the web. At the time, one of the things the duo believed to be evil was advertising. There’s no reason to doubt their initial sincerity on this matter, but when the slogan was included in the prospectus for their company’s flotation in 2004 one began to wonder what they were smoking. Were they really naive enough to believe that one could run a public company on a policy of ethical purity?

The problem was that purity requires a business model to support it and in 2000 the venture capitalists who had invested in Google pointed out to the boys that they didn’t have one. So they invented a model that involved harvesting users’ data to enable targeted advertising. And in the four years between that capitulation to reality and the flotation, Google’s revenues increased by nearly 3,590%. That kind of money talks.

Rana Foroohar has adopted the Google mantra as the title for her masterful critique of the tech giants that now dominate our world…

Read on

Zuckerberg’s ideology

Facebook’s announcement that it will include Breitbart in its select list of ‘curated’ news sources speaks volumes. Charlie Warzel has an intelligent take on it in the New York Times:

Because Mr. Zuckerberg is one of the most powerful people in politics right now — and because the stakes feel so high — there’s a desire to assign him a political label. That’s understandable but largely beside the point. Mark Zuckerberg may very well have political beliefs. And his every action does have political consequences. But he is not a Republican or a Democrat in how he wields his power. Mr. Zuckerberg’s only real political affiliation is that he’s the chief executive of Facebook. His only consistent ideology is that connectivity is a universal good. And his only consistent goal is advancing that ideology, at nearly any cost.

Yep. The only thing he really cares about is growth in the number of users of Facebook, and the engagement they have with the platform. And the collateral damage of that is someone else’s problem. This is sociopathy on steroids.

Fines don’t work. To control tech companies we have to hit them where it really hurts

Today’s Observer comment piece

If you want a measure of the problem society will have in controlling the tech giants, then ponder this: when it became clear that the US Federal Trade Commission was about to impose a fine of $5bn (£4bn) on Facebook for violating a decree governing privacy breaches, the company’s share price went up!

This is a landmark moment. It’s the biggest ever fine imposed by the FTC, the body set up to police American capitalism. And $5bn is a lot of money in anybody’s language. Anybody’s but Facebook’s. It represents just a month of revenues, and the stock market knew it. Facebook’s capitalisation went up $6bn with the news. This was a fine that actually increased Mark Zuckerberg’s personal wealth…
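
A quick back-of-the-envelope check on the “month of revenues” claim, using Facebook’s reported 2018 revenue of roughly $55.8bn (my figure, not one from the column):

$$ \frac{\$55.8\,\text{bn}}{12\ \text{months}} \approx \$4.65\,\text{bn per month} $$

so a $5bn fine corresponds to roughly four and a half weeks of revenue.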

Read on

Regulating the tech giants

This from Benedict Evans’s invaluable newsletter, written in response to Chris Hughes’s long NYT OpEd arguing that Facebook should be broken up…

I think there are two sets of issues to consider here. First, when we look at Google, Facebook, Amazon and perhaps Apple, there’s a tendency to conflate concerns about the absolute size and market power of these companies (all of which are of course debatable) with concerns about specific problems: privacy, radicalization and filter bubbles, spread of harmful content, law enforcement access to encrypted messages and so on, all the way down to very micro things like app store curation. Breaking up Facebook by splitting off Instagram and WhatsApp would reduce its market power, but would have no effect at all on rumors spreading on WhatsApp, school bullying on Instagram or abusive content in the newsfeed. In the same way, splitting Youtube apart from Google wouldn’t solve radicalization. So which problem are you trying to solve?

Second, anti-trust theory, on both the diagnosis side and the remedy side, seems to be flummoxed when faced by products that are free or as cheap as possible, and that do not rely on familiar kinds of restrictive practices (the tying of Standard Oil) for their market power. The US in particular has tended to focus exclusively on price, where the EU has looked much more at competition, but neither has a good account of what exactly is wrong with Amazon (if anything – and of course it is still less than half the size of Walmart in the USA), or indeed with Facebook. Neither is there a robust theory of what, specifically, to do about it. ‘Break them up’ seems to come more from familiarity than analysis: it’s not clear how much real effect splitting off IG and WA would have on the market power of the core newsfeed, and Amazon’s retail business doesn’t have anything to split off (and no, AWS isn’t subsidizing it). We saw the same thing in Elizabeth Warren’s idea that platform owners can’t be on their own platform – which would actually mean that Google would be banned from making Google Maps for Android. So, we’ve got to the point that a lot of people want to do something, but not really much further.

This is a good summary of why the regulation issue is so perplexing. Our difficulties include the fact that we don’t have an analytical framework yet for (i) analysing the kinds of power wielded by the platforms; (ii) categorising the societal harms which the tech giants might be inflicting; or (iii) understanding how our traditional toolset for dealing with corporate power (competition law, antitrust, etc.) needs to be updated for the contemporary challenges posed by the companies.

Just after I’d read the newsletter, the next item in my inbox contained a link to a Pew survey which revealed the colossal numbers of smartphone users across the world who think they are accessing the Internet when they’re actually just using Facebook or WhatsApp. Interestingly, it’s mostly those who have some experience of hooking up to the Internet via a desktop PC who know that there’s actually a real Internet out there. But if your first experience of Internet connectivity is via a smartphone running the Facebook app (which, thanks to zero-rating deals, may cost you nothing in data charges), then as far as you are concerned, Facebook is the Internet.

So Facebook has, effectively, blotted out the open Internet for a large segment of humanity. That’s also a new kind of power for which we don’t have — at the moment — a category, just as the so-called Right to be Forgotten* is really a recognition that Google has the power to render someone invisible. After all, in a networked world, if the dominant search engine doesn’t find you, then effectively you have ceased to exist.


* It’s not a right to be forgotten, merely a right not to be found by Google’s search engine. The complained-of information remains on the website where it was originally published.

Getting things into perspective

From Zeynep Tufekci:

We don’t have to be resigned to the status quo. Facebook is only 13 years old, Twitter 11, and even Google is but 19. At this moment in the evolution of the auto industry, there were still no seat belts, airbags, emission controls, or mandatory crumple zones. The rules and incentive structures underlying how attention and surveillance work on the internet need to change. But in fairness to Facebook and Google and Twitter, while there’s a lot they could do better, the public outcry demanding that they fix all these problems is fundamentally mistaken. There are few solutions to the problems of digital discourse that don’t involve huge trade-offs—and those are not choices for Mark Zuckerberg alone to make. These are deeply political decisions. In the 20th century, the US passed laws that outlawed lead in paint and gasoline, that defined how much privacy a landlord needs to give his tenants, and that determined how much a phone company can surveil its customers. We can decide how we want to handle digital surveillance, attention-channeling, harassment, data collection, and algorithmic decision-making. We just need to start the discussion. Now.

Toxic tech?

This morning’s Observer column:

The headline above an essay in a magazine published by the Association for Computing Machinery (ACM) caught my eye. “Facial recognition is the plutonium of AI”, it said. Since plutonium – a by-product of uranium-based nuclear power generation – is one of the most toxic materials known to humankind, this seemed like an alarmist metaphor, so I settled down to read.

The article, by a Microsoft researcher, Luke Stark, argues that facial-recognition technology – one of the current obsessions of the tech industry – is potentially so toxic for the health of human society that it should be treated like plutonium and restricted accordingly. You could spend a lot of time in Silicon Valley before you heard sentiments like these about a technology that enables computers to recognise faces in a photograph or from a camera…

Read on

Finally, a government takes on the tech companies

This morning’s Observer column:

On Monday last week, the government published its long-awaited white paper on online harms. It was launched at the British Library by the two cabinet ministers responsible for it – Jeremy Wright of the Department for Digital, Culture, Media and Sport (DCMS) and the home secretary, Sajid Javid. Wright was calm, modest and workmanlike in his introduction. Javid was, well, more macho. The social media companies had had their chances to put their houses in order. “They failed,” he declared. “I won’t let them fail again.” One couldn’t help feeling that he had one eye on the forthcoming hustings for the Tory leadership.

Nevertheless, this white paper is a significant document…

Read on