Regulating the tech giants

This from Benedict Evans's invaluable newsletter, written in response to Chris Hughes's long NYT OpEd arguing that Facebook should be broken up…

I think there are two sets of issues to consider here. First, when we look at Google, Facebook, Amazon and perhaps Apple, there’s a tendency to conflate concerns about the absolute size and market power of these companies (all of which are of course debatable) with concerns about specific problems: privacy, radicalization and filter bubbles, spread of harmful content, law enforcement access to encrypted messages and so on, all the way down to very micro things like app store curation. Breaking up Facebook by splitting off Instagram and WhatsApp would reduce its market power, but would have no effect at all on rumors spreading on WhatsApp, school bullying on Instagram or abusive content in the newsfeed. In the same way, splitting YouTube apart from Google wouldn’t solve radicalization. So which problem are you trying to solve?

Second, anti-trust theory, on both the diagnosis side and the remedy side, seems to be flummoxed when faced by products that are free or as cheap as possible, and that do not rely on familiar kinds of restrictive practices (the tying of Standard Oil) for their market power. The US in particular has tended to focus exclusively on price, where the EU has looked much more at competition, but neither has a good account of what exactly is wrong with Amazon (if anything – and of course it is still less than half the size of Walmart in the USA), or indeed with Facebook. Neither is there a robust theory of what, specifically, to do about it. ‘Break them up’ seems to come more from familiarity than analysis: it’s not clear how much real effect splitting off IG and WA would have on the market power of the core newsfeed, and Amazon’s retail business doesn’t have anything to split off (and no, AWS isn’t subsidizing it). We saw the same thing in Elizabeth Warren’s idea that platform owners can’t be on their own platform – which would actually mean that Google would be banned from making Google Maps for Android. So, we’ve got to the point that a lot of people want to do something, but not really, much further.

This is a good summary of why the regulation issue is so perplexing. Our difficulties include the fact that we don’t have an analytical framework yet for (i) analysing the kinds of power wielded by the platforms; (ii) categorising the societal harms which the tech giants might be inflicting; or (iii) understanding how our traditional toolset for dealing with corporate power (competition law, antitrust, etc.) needs to be updated for the contemporary challenges posed by the companies.

Just after I’d read the newsletter, the next item in my inbox contained a link to a Pew survey which revealed the colossal numbers of smartphone users across the world who think they are accessing the Internet when they’re actually just using Facebook or WhatsApp. Interestingly, it’s mostly those who have some experience of hooking up to the Internet via a desktop PC who know that there’s actually a real Internet out there. But if your first experience of Internet connectivity is via a smartphone running the Facebook app (which means that your data may be free), then as far as you are concerned, Facebook is the Internet.

So Facebook has, effectively, blotted out the open Internet for a large segment of humanity. That’s also a new kind of power for which we don’t have, as yet, a category. It’s analogous to the way the so-called Right to be Forgotten* recognises that Google has the power to render someone invisible. After all, in a networked world, if the dominant search engine doesn’t find you, then effectively you have ceased to exist.


  • It’s not a right to be forgotten, merely a right not to be found by Google’s search engine. The complained-of information remains on the website where it was originally published.

Toxic tech?

This morning’s Observer column:

The headline above an essay in a magazine published by the Association for Computing Machinery (ACM) caught my eye. “Facial recognition is the plutonium of AI”, it said. Since plutonium – a by-product of uranium-based nuclear power generation – is one of the most toxic materials known to humankind, this seemed like an alarmist metaphor, so I settled down to read.

The article, by a Microsoft researcher, Luke Stark, argues that facial-recognition technology – one of the current obsessions of the tech industry – is potentially so toxic for the health of human society that it should be treated like plutonium and restricted accordingly. You could spend a lot of time in Silicon Valley before you heard sentiments like these about a technology that enables computers to recognise faces in a photograph or from a camera…

Read on

Ben Evans on the DCMS White Paper on Online Harms

From Ben’s weekly newsletter:

The UK government has released a ‘White Paper’ (consultation prior to legislation) covering the management and take-down of harmful content on social platforms. The idea is to have a list of specific and clearly defined kinds of harmful content (child exploitation, promoting terrorism, etc), an obligation on anyone hosting content to have a reasonable and systematic process for finding and removing this, and a penalty regime that is proportionate to the kind of harm (child exploitation is worst), how hard they’d tried to deal with it (the ‘reasonableness’ test), and how big the company is (startups get more leeway on less harmful stuff), with a regulatory body to manage and adjudicate this. The UK attitude is “this is how everything else is regulated, so why should online be any different?” The broader point: FB and Google etc are not in China, but more and more economies where they are present and have to remain will start passing laws, and some of them will mean their global operations might have to change – there will be a lowest common denominator effect. This one tries not to be too prescriptive and tries not to harm startups, but GDPR was the opposite. And, of course, absolutely no-one in the UK (or anywhere else) cares what American lawyers think the American constitution says.

Finally, a government takes on the tech companies

This morning’s Observer column:

On Monday last week, the government published its long-awaited white paper on online harms. It was launched at the British Library by the two cabinet ministers responsible for it – Jeremy Wright of the Department for Digital, Culture, Media and Sport (DCMS) and the home secretary, Sajid Javid. Wright was calm, modest and workmanlike in his introduction. Javid was, well, more macho. The social media companies had had their chances to put their houses in order. “They failed,” he declared. “I won’t let them fail again.” One couldn’t help feeling that he had one eye on the forthcoming hustings for the Tory leadership.

Nevertheless, this white paper is a significant document…

Read on

The dark side of recommendation engines

This morning’s Observer column:

My eye was caught by a headline in Wired magazine: “When algorithms think you want to die”. Below it was an article by two academic researchers, Ysabel Gerrard and Tarleton Gillespie, about the “recommendation engines” that are a central feature of social media and e-commerce sites.

Everyone who uses the web is familiar with these engines. A recommendation algorithm is what prompts Amazon to tell me that since I’ve bought Custodians of the Internet, Gillespie’s excellent book on the moderation of online content, I might also be interested in Safiya Umoja Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism and a host of other books about algorithmic power and bias. In that particular case, the algorithm’s guess is accurate and helpful: it informs me about stuff that I should have known about but hadn’t.
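The “bought X, also bought Y” idea behind such engines can be illustrated in a few lines: count how often pairs of items appear in the same purchase basket, then recommend the most frequent co-purchases. The book titles and baskets below are hypothetical, and real recommendation systems are vastly more sophisticated; this is only a sketch of the underlying intuition.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: each inner list is one customer's books.
purchases = [
    ["Custodians of the Internet", "Algorithms of Oppression", "Weapons of Math Destruction"],
    ["Custodians of the Internet", "Algorithms of Oppression"],
    ["Custodians of the Internet", "The Age of Surveillance Capitalism"],
    ["Algorithms of Oppression", "Weapons of Math Destruction"],
]

# Count how often each ordered pair of items appears in the same basket.
co_counts = Counter()
for basket in purchases:
    for a, b in combinations(sorted(set(basket)), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, n=2):
    """Items most often bought alongside `item`, best first."""
    scores = {b: c for (a, b), c in co_counts.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(recommend("Custodians of the Internet"))
```

Run against these baskets, the top recommendation for Gillespie’s book is Noble’s, because they co-occur most often: exactly the kind of accurate, helpful guess described above.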

Recommendation engines are central to the “personalisation” of online content and were once seen as largely benign…

Read on

Facebook’s targeting engine: still running smoothly on all cylinders

Well, well. Months, indeed years, after the various experiments with Facebook’s targeting engine showed how good it was at recommending unsavoury audiences, this latest report by the Los Angeles Times shows that it has lost none of its imaginative acuity.

Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” the neo-Nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.

Experts say that this practice runs counter to the company’s stated principles and can help fuel radicalization online.

“What you’re describing, where a clear hateful idea or narrative can be amplified to reach more people, is exactly what they said they don’t want to do and what they need to be held accountable for,” said Oren Segal, director of the Anti-Defamation League’s center on extremism.

Note also that the formulaic Facebook response hasn’t changed either:

After being contacted by The Times, Facebook said that it would remove many of the audience groupings from its ad platform.

“Most of these targeting options are against our policies and should have been caught and removed sooner,” said Facebook spokesman Joe Osborne. “While we have an ongoing review of our targeting options, we clearly need to do more, so we’re taking a broader look at our policies and detection methods.”

Ah, yes. That ‘broader look’ again.

Facebook: the regulatory noose tightens

This is a big day. The DCMS Select Committee has published its scarifying report into Facebook’s sociopathic exploitation of its users’ data and its cavalier attitude towards both legislators and the law. As I write, it is reportedly negotiating with the Federal Trade Commission (FTC) — the US regulator — on the multi-billion-dollar fine the agency is likely to levy on the company for breaking its 2011 Consent Decree.

Couldn’t happen to nastier people.

In the meantime, for those who don’t have the time to read the 110-page DCMS report, TechCrunch has a rather impressive and helpful summary — provided you don’t mind the rather oppressive GDPR spiel that accompanies it.

Think that self-driving cars will eliminate traffic? Think again

Fascinating paper, “The autonomous vehicle parking problem” by Adam Millard-Ball. In it he:

  • identifies and analyzes the parking behavior of autonomous vehicles;
  • uses a traffic simulation model to demonstrate how autonomous vehicles can implicitly coordinate to reduce the cost of cruising for parking, through self-generated congestion;
  • discusses policy responses, including congestion pricing; and
  • argues that congestion pricing should include both a time-based charge for occupying the public right-of-way, and a distance- or energy-based charge to internalize other externalities.

The Abstract reads:

Autonomous vehicles (AVs) have no need to park close to their destination, or even to park at all. Instead, AVs can seek out free on-street parking, return home, or cruise (circle around). Because cruising is less costly at lower speeds, a game theoretic framework shows that AVs also have the incentive to implicitly coordinate with each other in order to generate congestion. Using a traffic microsimulation model and data from downtown San Francisco, this paper suggests that AVs could more than double vehicle travel to, from and within dense, urban cores. New vehicle trips are generated by a 90% reduction in effective parking costs, while existing trips become longer because of driving to more distant parking spaces and cruising. One potential policy response—subsidized peripheral parking—would likely exacerbate congestion through further reducing the cost of driving. Instead, this paper argues that the rise of AVs provides the opportunity and the imperative to implement congestion pricing in urban centers. Because the ability of AVs to cruise blurs the boundary between parking and travel, congestion pricing programs should include two complementary prices—a time-based charge for occupying the public right-of-way, whether parked or in motion, and a distance- or energy-based charge that internalizes other externalities from driving.

What this suggests is that society — in this case city authorities — should think of urban streets as analogous to radio spectrum. We auction rights to communications companies to operate on specific chunks of the radio spectrum. When autonomous vehicles arrive, those who operate them ought to be treated like radio spectrum users. The one tweak we’d need is that AV operators would be charged not only for the right to use a particular slice of the road ‘spectrum’ but also for the amount of use they make of it.
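The paper’s two complementary prices are easy to sketch. The function and rates below are illustrative assumptions, not figures from Millard-Ball’s paper: the point is simply that the time component falls equally on a parked car and a cruising one, while the distance component removes the incentive to circle around rather than park.

```python
def road_charge(minutes_on_street, km_driven,
                time_rate=0.10, distance_rate=0.25):
    """Two-part congestion charge of the kind the paper proposes:
    a time-based fee for occupying the public right-of-way (whether
    parked or in motion) plus a distance-based fee for the other
    externalities of driving. Rates here are purely illustrative."""
    return time_rate * minutes_on_street + distance_rate * km_driven

# A vehicle that cruises for 45 minutes over 6 km pays the same time
# charge as one parked at the kerb for 45 minutes, plus the distance
# component for the kilometres it drove while circling.
print(round(road_charge(45, 6.0), 2))   # 0.10*45 + 0.25*6 = 6.0
```

Under such a scheme, cruising is never cheaper than parking, which is exactly the incentive the paper argues current free on-street parking gets wrong.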

Microsoft President: It’s time to regulate face-recognition technology

Interesting post by Brad Smith on the company’s Issues blog:

In July, we shared our views about the need for government regulation and responsible industry measures to address advancing facial recognition technology. As we discussed, this technology brings important and even exciting societal benefits but also the potential for abuse. We noted the need for broader study and discussion of these issues. In the ensuing months, we’ve been pursuing these issues further, talking with technologists, companies, civil society groups, academics and public officials around the world. We’ve learned more and tested new ideas. Based on this work, we believe it’s important to move beyond study and discussion. The time for action has arrived.

We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law…

Coincidentally, the New Yorker has an interesting essay — “Should we be worried about computerized facial recognition?”

Tim Wu’s top ten antitrust targets

He writes:

If antitrust is due for a revival, just what should the antitrust law be doing? What are its most obvious targets? Compiled here (in alphabetical order), and based on discussions with other antitrust experts, is a collection of the law’s most wanted — the firms or industries that are ripe for investigation.

Amazon
Investigation questions: Does Amazon have buying power in the employee markets in some areas of the country? Does it have market power? Is it improperly favoring its own products over marketplace competitors?

AT&T/WarnerMedia
Investigation question: In light of this, was the trial court’s approval of the AT&T and Time Warner merger clearly in error?

Big Agriculture
Over the last five years, the agricultural seed, fertilizer, and chemical industry has consolidated into four global giants: BASF, Bayer, DowDuPont, and ChemChina. According to the U.S. Department of Agriculture, seed prices have tripled since the 1990s, and since the mergers, fertilizer prices are up as well.
Investigation question: Were these mergers wrongly approved in the United States and Europe?

Big Pharma
The pharmaceutical industry has a long track record of anticompetitive and extortionary practices, including the abuse of patent rights for anticompetitive purposes and various forms of price gouging.
Investigation and legislative questions: Are there abuses of the patent system that are still ripe for investigation? Can something be done about pharmaceutical price gouging on drugs that are out of patent or, perhaps more broadly, the extortionate increases in the prices of prescription drugs?

Facebook
Having acquired competitors Instagram and WhatsApp in the 2010s in mergers that were arguably illegal, it has repeatedly increased its advertising load, incurred repeat violations of privacy laws, and failed to secure its networks against foreign manipulation while also dealing suspicious blows to competitor Snapchat. No obvious inefficiencies attend its dissolution.
Investigation questions: Should the Instagram and WhatsApp mergers be retroactively dissolved (effectively breaking up the company)? Did Facebook use its market power and control of Instagram and Instagram Stories to illegally diminish Snapchat from 2016–2018?

Google
Investigation question: Has Google anticompetitively excluded its rivals?

Ticketmaster/Live Nation
Investigation questions: Has Live Nation used its power as a promoter to protect Ticketmaster’s monopoly on sales? Was Songkick the victim of an illegal exclusion campaign? Should the Ticketmaster/Live Nation union be dissolved?

T-Mobile/Sprint
Investigation question: Would the merger between T-Mobile and Sprint likely yield higher prices and easier coordination among the three remaining firms?

U.S. Airline Industry
The U.S. airline industry is the exemplar of failed merger review.
Investigation and regulatory questions: Should one or more of the major mergers be reconsidered in light of new evidence? Alternatively, given the return to previous levels of concentration, should firmer regulation be imposed, including baggage and change-fee caps, minimum seat sizes, and other measures?

U.S. Hospitals
Legislative question: Should Congress or the states impose higher levels of scrutiny for health care and hospital mergers?