Microsoft 2.0

One of the most remarkable aspects of the present is the way one tech giant has become a reformed character. Microsoft — the rapacious, bullying monster of Bill Gates’s heyday — has morphed into a good (or at least better) global citizen. It’s also insanely profitable again. In fact, just about the best thing one could have done with one’s pension fund would have been to put a sizeable chunk of it into Microsoft stock. (The company is now worth a trillion dollars.) And every week a copy of a memo from the company’s President and Chief Legal Officer, Brad Smith, drops into my inbox. Sometimes it contains useful and civilised ideas. No other corporate bigwig talks as much sense.

How has this transformation come about? This week the Economist has a go at identifying the things that made Gates’s creature a more tolerable behemoth. There are, it says, three lessons the other tech giants could learn from the Redmond experience under Satya Nadella’s leadership:

  1. “First, be prepared to look beyond the golden goose. Microsoft missed social networks and smartphones because of its obsession with Windows, the operating system that was its main moneyspinner. One of Mr Nadella’s most important acts after taking the helm was to deprioritise Windows. More important, he also bet big on the “cloud”—just as firms started getting comfortable with renting computing power. In the past quarter revenues at Azure, Microsoft’s cloud division, grew by 68% year on year, and it now has nearly half the market share of Amazon Web Services, the industry leader.”

  2. “Second, rapaciousness may not pay. Mr Nadella has changed Microsoft’s culture as well as its technological focus. The cult of Windows ordained that customers and partners be squeezed and rivals dispatched, often by questionable means, which led to the antitrust showdown. Mr Nadella’s predecessor called Linux and other open-source software a “cancer”. But today that rival operating system is more widely used on Azure than Windows. And many companies see Microsoft as a much less threatening technology partner than Amazon, which is always looking for new industries to enter and disrupt.”

  3. “Third, work with regulators rather than try to outwit or overwhelm them. From the start Microsoft designed Azure in such a way that it could accommodate local data-protection laws. Its president and chief legal officer, Brad Smith, has been the source of many policy proposals, such as a “Digital Geneva Convention” to protect people from cyber-attacks by nation-states. He is also behind Microsoft’s comparatively cautious use of artificial intelligence, and calls for oversight of facial recognition. The firm has been relatively untouched by the current backlash against tech firms, and is less vulnerable to new regulation.”

Fines don’t work. To control tech companies we have to hit them where it really hurts

Today’s Observer comment piece

If you want a measure of the problem society will have in controlling the tech giants, then ponder this: when it became clear that the US Federal Trade Commission was about to impose a fine of $5bn (£4bn) on Facebook for violating a decree governing privacy breaches, the company’s share price went up!

This is a landmark moment. It’s the biggest fine ever imposed by the FTC, the body set up to police American capitalism. And $5bn is a lot of money in anybody’s language. Anybody’s but Facebook’s. It represents just a month of revenues, and the stock market knew it. Facebook’s capitalisation rose by $6bn on the news. This was a fine that actually increased Mark Zuckerberg’s personal wealth…

Read on

Regulating the tech giants

This from Benedict Evans’s invaluable newsletter, written in response to Chris Hughes’s long NYT op-ed arguing that Facebook should be broken up…

I think there are two sets of issues to consider here. First, when we look at Google, Facebook, Amazon and perhaps Apple, there’s a tendency to conflate concerns about the absolute size and market power of these companies (all of which are of course debatable) with concerns about specific problems: privacy, radicalization and filter bubbles, spread of harmful content, law enforcement access to encrypted messages and so on, all the way down to very micro things like app store curation. Breaking up Facebook by splitting off Instagram and WhatsApp would reduce its market power, but would have no effect at all on rumors spreading on WhatsApp, school bullying on Instagram or abusive content in the newsfeed. In the same way, splitting Youtube apart from Google wouldn’t solve radicalization. So which problem are you trying to solve?

Second, anti-trust theory, on both the diagnosis side and the remedy side, seems to be flummoxed when faced by products that are free or as cheap as possible, and that do not rely on familiar kinds of restrictive practices (the tying of Standard Oil) for their market power. The US in particular has tended to focus exclusively on price, where the EU has looked much more at competition, but neither has a good account of what exactly is wrong with Amazon (if anything – and of course it is still less than half the size of Walmart in the USA), or indeed with Facebook. Neither is there a robust theory of what, specifically, to do about it. ‘Break them up’ seems to come more from familiarity than analysis: it’s not clear how much real effect splitting off IG and WA would have on the market power of the core newsfeed, and Amazon’s retail business doesn’t have anything to split off (and no, AWS isn’t subsidizing it). We saw the same thing in Elizabeth Warren’s idea that platform owners can’t be on their own platform – which would actually mean that Google would be banned from making Google Maps for Android. So, we’ve got to the point that a lot of people want to do something, but not really much further.

This is a good summary of why the regulation issue is so perplexing. Chief among our difficulties is that we don’t yet have an analytical framework for (i) analysing the kinds of power wielded by the platforms; (ii) categorising the societal harms the tech giants might be inflicting; or (iii) understanding how our traditional toolset for dealing with corporate power (competition law, antitrust, etc.) needs to be updated for the challenges these companies now pose.

Just after I’d read the newsletter, the next item in my inbox contained a link to a Pew survey revealing the colossal numbers of smartphone users across the world who think they are accessing the Internet when they’re actually just using Facebook or WhatsApp. Interestingly, it’s mostly those with some experience of reaching the Internet via a desktop PC who know that there’s a real Internet out there. But if your first experience of connectivity is via a smartphone running the Facebook app (which, thanks to zero-rating deals, may cost you nothing in data charges), then as far as you are concerned, Facebook is the Internet.

So Facebook has, effectively, blotted out the open Internet for a large segment of humanity. That’s also a new kind of power for which we don’t, at the moment, have a category, just as the so-called Right to be Forgotten* recognises that Google has the power to render someone invisible. After all, in a networked world, if the dominant search engine doesn’t find you, then effectively you have ceased to exist.


* It’s not a right to be forgotten, merely a right not to be found by Google’s search engine. The complained-of information remains on the website where it was originally published.

Toxic tech?

This morning’s Observer column:

The headline above an essay in a magazine published by the Association for Computing Machinery (ACM) caught my eye. “Facial recognition is the plutonium of AI”, it said. Since plutonium – a by-product of uranium-based nuclear power generation – is one of the most toxic materials known to humankind, this seemed like an alarmist metaphor, so I settled down to read.

The article, by a Microsoft researcher, Luke Stark, argues that facial-recognition technology – one of the current obsessions of the tech industry – is potentially so toxic for the health of human society that it should be treated like plutonium and restricted accordingly. You could spend a lot of time in Silicon Valley before you heard sentiments like these about a technology that enables computers to recognise faces in a photograph or from a camera…

Read on

Ben Evans on the DCMS White Paper on Online Harms

From Ben’s weekly newsletter:

The UK government has released a ‘White Paper’ (consultation prior to legislation) covering the management and take-down of harmful content on social platforms. The idea is to have a list of specific and clearly defined kinds of harmful content (child exploitation, promoting terrorism, etc), an obligation on anyone hosting content to have a reasonable and systematic process for finding and removing this, and a penalty regime that is proportionate to the kind of harm (child exploitation is worst), how hard they’d tried to deal with it (the ‘reasonableness’ test), and how big the company is (startups get more leeway on less harmful stuff), with a regulatory body to manage and adjudicate this. The UK attitude is “this is how everything else is regulated, so why should online be any different?” The broader point: FB and Google etc are not in China, but more and more economies where they are present and have to remain will start passing laws, and some of them will mean their global operations might have to change – there will be a lowest common denominator effect. This one tries not to be too prescriptive and tries not to harm startups, but GDPR was the opposite. And, of course, absolutely no-one in the UK (or anywhere else) cares what American lawyers think the American constitution says.
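
One way to see how the three dials Evans describes (severity of harm, reasonableness of the host’s process, size of the company) might interact is a toy proportionality function. To be clear, this is my own illustrative sketch; the formula, weights and scales are invented and appear nowhere in the White Paper:

```python
# Toy sketch of a proportionate penalty regime: severity of harm,
# how hard the host tried, and company size. All numbers are invented.
def penalty(harm_severity, process_reasonableness, company_scale, base=1_000.0):
    """harm_severity: 1 (minor) to 10 (e.g. child exploitation);
    process_reasonableness: 0 (no take-down process) to 1 (systematic);
    company_scale: 0 (startup) to 1 (global platform)."""
    return base * harm_severity * (1 - process_reasonableness) * (0.1 + 0.9 * company_scale)

# A careless global platform hosting the worst content pays the most:
print(penalty(10, 0.1, 1.0))  # 9000.0
# A startup with a reasonably systematic process gets far more leeway:
print(penalty(10, 0.5, 0.0))  # 500.0
```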

Finally, a government takes on the tech companies

This morning’s Observer column:

On Monday last week, the government published its long-awaited white paper on online harms. It was launched at the British Library by the two cabinet ministers responsible for it – Jeremy Wright of the Department for Digital, Culture, Media and Sport (DCMS) and the home secretary, Sajid Javid. Wright was calm, modest and workmanlike in his introduction. Javid was, well, more macho. The social media companies had had their chances to put their houses in order. “They failed,” he declared. “I won’t let them fail again.” One couldn’t help feeling that he had one eye on the forthcoming hustings for the Tory leadership.

Nevertheless, this white paper is a significant document…

Read on

The dark side of recommendation engines

This morning’s Observer column:

My eye was caught by a headline in Wired magazine: “When algorithms think you want to die”. Below it was an article by two academic researchers, Ysabel Gerrard and Tarleton Gillespie, about the “recommendation engines” that are a central feature of social media and e-commerce sites.

Everyone who uses the web is familiar with these engines. A recommendation algorithm is what prompts Amazon to tell me that since I’ve bought Custodians of the Internet, Gillespie’s excellent book on the moderation of online content, I might also be interested in Safiya Umoja Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism and a host of other books about algorithmic power and bias. In that particular case, the algorithm’s guess is accurate and helpful: it informs me about stuff that I should have known about but hadn’t.

Recommendation engines are central to the “personalisation” of online content and were once seen as largely benign…

Read on
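
To make the column’s “since you bought X, you might like Y” example concrete, here is a minimal sketch of item-based collaborative filtering, one common way such engines work. The data and helper names are invented for illustration; Amazon’s real system is, of course, vastly more elaborate:

```python
# Minimal item-based collaborative filtering: recommend items whose
# buyers overlap most with the buyers of what you already own.
# All purchase data here is invented for illustration.
from math import sqrt

purchases = {
    "alice": {"Custodians of the Internet", "Algorithms of Oppression"},
    "bob":   {"Custodians of the Internet", "Algorithms of Oppression",
              "Weapons of Math Destruction"},
    "carol": {"Custodians of the Internet", "Weapons of Math Destruction"},
    "dave":  {"A cookbook"},
}

def similarity(item_a, item_b):
    """Cosine similarity between two items, based on who bought them."""
    buyers_a = {u for u, items in purchases.items() if item_a in items}
    buyers_b = {u for u, items in purchases.items() if item_b in items}
    if not buyers_a or not buyers_b:
        return 0.0
    return len(buyers_a & buyers_b) / sqrt(len(buyers_a) * len(buyers_b))

def recommend(user, top_n=1):
    """Rank items the user hasn't bought by similarity to items they have."""
    owned = purchases[user]
    candidates = {i for items in purchases.values() for i in items} - owned
    scores = {c: sum(similarity(c, o) for o in owned) for c in candidates}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice"))  # ['Weapons of Math Destruction']
```

The same overlap logic that helpfully surfaces books on algorithmic bias will, given different data, just as cheerfully surface anything else that similar users cluster around, which is precisely the dark side the column goes on to discuss.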

Facebook’s targeting engine: still firing on all cylinders

Well, well. Months — years — after the various experiments that showed how good Facebook’s targeting engine was at recommending unsavoury audiences, this latest report by the Los Angeles Times shows that it has lost none of its imaginative acuity.

Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” the neo-Nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.

Experts say that this practice runs counter to the company’s stated principles and can help fuel radicalization online.

“What you’re describing, where a clear hateful idea or narrative can be amplified to reach more people, is exactly what they said they don’t want to do and what they need to be held accountable for,” said Oren Segal, director of the Anti-Defamation League’s center on extremism.

Note also that the formulaic Facebook response hasn’t changed either:

After being contacted by The Times, Facebook said that it would remove many of the audience groupings from its ad platform.

“Most of these targeting options are against our policies and should have been caught and removed sooner,” said Facebook spokesman Joe Osborne. “While we have an ongoing review of our targeting options, we clearly need to do more, so we’re taking a broader look at our policies and detection methods.”

Ah, yes. That ‘broader look’ again.
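
For what it’s worth, the first-pass screening Facebook says it needs is not conceptually difficult. Here is a minimal sketch of checking auto-generated targeting options against a denylist; the terms come from the Times report, but the function and data are my own illustrative invention, not Facebook’s actual tooling:

```python
# Minimal sketch: flag auto-generated ad-targeting interests that match
# a denylist of known extremist terms. Illustrative only.
DENYLIST = {"joseph goebbels", "josef mengele", "heinrich himmler",
            "skrewdriver", "national fascist party"}

def flag_targeting_options(interests):
    """Return interests containing a denylisted term, for review and removal."""
    return [i for i in interests
            if any(term in i.lower() for term in DENYLIST)]

interests = ["Gardening", "Joseph Goebbels", "Punk rock", "Skrewdriver (band)"]
print(flag_targeting_options(interests))
# ['Joseph Goebbels', 'Skrewdriver (band)']
```

The genuinely hard part is curating the list and adjudicating its edge cases, which is presumably what the ‘broader look’ is for.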

Facebook: the regulatory noose tightens

This is a big day. The DCMS Select Committee has published its scarifying report into Facebook’s sociopathic exploitation of its users’ data and its cavalier attitude towards both legislators and the law. As I write, Facebook is reportedly negotiating with the Federal Trade Commission (FTC) — the US regulator — over the multi-billion-dollar fine the agency is likely to levy on the company for breaking its 2011 consent decree.

Couldn’t happen to nastier people.

In the meantime, for those who don’t have the time to read the 110-page DCMS report, TechCrunch has an impressive and helpful summary — provided you don’t mind the oppressive GDPR spiel that accompanies it.

Think that self-driving cars will eliminate traffic? Think again

Fascinating paper, “The autonomous vehicle parking problem” by Adam Millard-Ball. In it he: identifies and analyzes the parking behavior of autonomous vehicles; uses a traffic simulation model to demonstrate how autonomous vehicles can implicitly coordinate to reduce the cost of cruising for parking through self-generated congestion; discusses policy responses, including congestion pricing; and argues that congestion pricing should include both a time-based charge for occupying the public right-of-way and a distance- or energy-based charge to internalize other externalities.

The abstract reads:

Autonomous vehicles (AVs) have no need to park close to their destination, or even to park at all. Instead, AVs can seek out free on-street parking, return home, or cruise (circle around). Because cruising is less costly at lower speeds, a game theoretic framework shows that AVs also have the incentive to implicitly coordinate with each other in order to generate congestion. Using a traffic microsimulation model and data from downtown San Francisco, this paper suggests that AVs could more than double vehicle travel to, from and within dense, urban cores. New vehicle trips are generated by a 90% reduction in effective parking costs, while existing trips become longer because of driving to more distant parking spaces and cruising. One potential policy response—subsidized peripheral parking—would likely exacerbate congestion through further reducing the cost of driving. Instead, this paper argues that the rise of AVs provides the opportunity and the imperative to implement congestion pricing in urban centers. Because the ability of AVs to cruise blurs the boundary between parking and travel, congestion pricing programs should include two complementary prices—a time-based charge for occupying the public right-of-way, whether parked or in motion, and a distance- or energy-based charge that internalizes other externalities from driving.
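
The incentive at the heart of the abstract is easy to verify with a back-of-the-envelope calculation. A minimal sketch, using assumed costs rather than the paper’s calibrated San Francisco numbers:

```python
# Why an AV would rather cruise than park: circling the block costs
# money per mile, so the slower the traffic, the cheaper the hour.
# The rates below are assumed for illustration.
def cruising_cost_per_hour(speed_mph, cost_per_mile=0.50):
    """Hourly cost of circling: miles covered in an hour times cost per mile."""
    return speed_mph * cost_per_mile

PARKING_COST_PER_HOUR = 4.00  # assumed downtown rate

for speed in (15, 10, 5, 2):
    cruise = cruising_cost_per_hour(speed)
    choice = "cruise" if cruise < PARKING_COST_PER_HOUR else "park"
    print(f"at {speed:>2} mph cruising costs ${cruise:.2f}/hr -> {choice}")
```

At 2 mph, cruising costs $1 an hour against $4 for parking, so every AV prefers to cruise; and the cruising itself creates the congestion that keeps speeds, and hence cruising costs, low. That is the implicit coordination the abstract describes.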

What this suggests is that society — in this case, city authorities — should think of urban streets as analogous to radio spectrum. We auction rights to communications companies to operate on specific chunks of the radio spectrum. When autonomous vehicles arrive, those who operate them ought to be treated like radio spectrum users. The one tweak we’d need is that AV operators would be charged not only for the right to use a particular slice of the road ‘spectrum’ but also for the amount of use they make of it.
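
A hedged sketch of what that two-part tariff might look like in code; the rates are hypothetical, and the two terms correspond to the paper’s time-based and distance-based charges:

```python
# Two-part road charge: a time-based fee for occupying the right-of-way
# (parked or moving) plus a distance-based fee for other externalities.
# Rates are hypothetical.
def road_charge(minutes_on_street, miles_driven,
                time_rate_per_min=0.05, distance_rate_per_mile=0.30):
    return minutes_on_street * time_rate_per_min + miles_driven * distance_rate_per_mile

# An AV parked at the kerb for an hour pays for the space it occupies...
print(road_charge(minutes_on_street=60, miles_driven=0))   # 3.0
# ...and one cruising for an hour pays the same time charge plus distance,
# so circling the block can no longer undercut parking.
print(road_charge(minutes_on_street=60, miles_driven=5))   # 4.5
```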