The real test of an AI machine? When it can admit to not knowing something

This morning’s Observer column on the EU’s plans for regulating AI and data:

Once you get beyond the mandatory euro-boosting rhetoric about how the EU’s “technological and industrial strengths”, “high-quality digital infrastructure” and “regulatory framework based on its fundamental values” will enable Europe to become “a global leader in innovation in the data economy and its applications”, the white paper seems quite sensible. But, like all documents on how actually to deal with AI, it falls back on the conventional bromides about human agency and oversight, privacy and governance, diversity, non-discrimination and fairness, societal wellbeing, accountability and that old favourite “transparency”. The only discernible omissions are motherhood and apple pie.

But this is par for the course with AI at the moment: the discourse is invariably three parts generalities, two parts virtue-signalling leavened with a smattering of pious hopes. It’s got to the point where one longs for some plain speaking and common sense.

And, as luck would have it, along it comes in the shape of Sir David Spiegelhalter, an eminent Cambridge statistician and former president of the Royal Statistical Society. He has spent his life trying to teach people how to understand statistical reasoning, and last month published a really helpful article in the Harvard Data Science Review on the question “Should we trust algorithms?”

Read on

Some historical perspective on the dominance of current tech giants

From this week’s Economist:

As big tech’s scope expands, more non-tech firms will find their profits dented and more workers will see their livelihoods disrupted, creating angry constituencies. One crude measure of scale is to look at global profits relative to American GDP. By this yardstick, Apple, which is expanding into services, is already roughly as big as Standard Oil and US Steel were in 1910, at the height of their powers. Alphabet, Amazon and Microsoft are set to reach the threshold within the next ten years.

Remember what happened to Standard Oil and US Steel?

If tech companies think they’re states, then they should accept the same responsibilities as states

It’s amazing to watch the deluded fantasies of tech bosses about their importance. In part, this is because their pretensions are taken seriously by political leaders who should know better. The daftest move thus far in this context was the Danish government’s decision in 2017 to appoint an ‘ambassador’ to the tech companies in Silicon Valley, but it’s clear that some other administrations share the same delusions.

Marietje Schaake, the former MEP who is now international policy director at Stanford’s Cyber Policy Center, has noted this too.

Last month, Microsoft announced it would open a “representation to the UN”, while at the same time recruiting a diplomat to run its European public affairs office. Alibaba has proposed a cross-border, online free trade platform. When Facebook’s suggestion of a “supreme court” to revisit controversial content moderation decisions was criticised, it relabelled the initiative an “oversight board”. It seems tech executives are literally trying to take seats at the table that has thus far been shared by heads of state.

At the annual security conference in Munich, presidents, prime ministers and politicians usually share the sought-after stage to engage in conversations about conflict in the Middle East, the future of the EU, or transatlantic relations. This year, executives of Alphabet, Facebook and Microsoft were added to the speakers list.

Facebook boss Mark Zuckerberg went on from Munich to Brussels to meet with EU commissioners about a package of regulatory initiatives on artificial intelligence, data and digital services. Commissioner Thierry Breton provided the apt reminder that companies must follow EU regulations — not the other way around.

In a brisk op-ed piece in yesterday’s Financial Times, Schaake reminds tech bosses that if they really want change, there is no need to wait for government regulation (their current mantra) to guide them in the right direction. They own and totally control their own platforms. They can start in their own “republics” today. Nothing stops them proactively aligning their terms of use with human rights, democratic principles and the rule of law. When they deploy authoritarian models of governing, they should be called out. “Instead of playing government”, she writes,

they should take responsibility for their own territories. This means anchoring terms of use and standards in the rule of law and democratic principles and allowing independent scrutiny from researchers, regulators and democratic representatives alike. Credible accountability is always independent. It is time to ensure such oversight is proportionate to the power of tech giants.

Companies seeking to democratise would also have to give their employees and customers more of a say, as prime “constituents”. If leaders are serious about their state-like powers, they must walk the walk and treat consumers as citizens. Until then, calls for regulations will be seen as opportunistic, and corporations unfit to lead.

Bravo! Couldn’t have put it better myself.

Regulatory puzzles

Interesting conundrum in Ben Evans’s weekly newsletter:

A German court has banned Uber for not complying with taxi regulations; conversely, AirBNB won in France: it can’t be forced to be regulated as an estate agent. The endless ‘software eats the world’ question: how far do we treat a new way of doing X in the same way as the old one? Uber is clearly a different way of doing what we previously called taxis and ‘limousines’ and should probably be subject to the same high-level policy objectives. (You might be able to achieve those objectives differently – you don’t need a physical meter to have a guaranteed fare because GPS can do that – but the objectives might not change.) On the other hand, AirBNB is not doing the same things that a conventional real estate agent (or hotel) does ‘but with an app and with GPS’ – it’s doing something different, and poses different questions (which might or might not require new regulation).

There’s no single regulatory bullet. It’s horses for courses.
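Evans’s aside about the meter can be made concrete: the regulatory objective (a verifiable, guaranteed fare) can be met from GPS data alone. Here is a minimal sketch in Python, with entirely made-up tariff figures, that computes a fare from a trace of GPS fixes:

```python
import math

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def fare_from_gps(trace, base=2.50, per_km=1.20):
    """Fare from a GPS trace (a list of (lat, lon) fixes) -- no physical meter.

    The tariff (base fee plus per-km rate) is illustrative, not any
    jurisdiction's actual schedule.
    """
    distance = sum(haversine_km(a, b) for a, b in zip(trace, trace[1:]))
    return round(base + per_km * distance, 2)
```

The point is the one Evans makes: the policy objective (the passenger can verify the fare) survives intact even though the mechanism that delivers it changes completely.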

What took governments so long to wake up to the tech giants’ power?

Interesting NYT column by Kara Swisher:

Here’s a little quiz. When was the last time a significant social media network was founded in the United States? And what about a competitive search engine company? An online ad network? And what about a truly wide-ranging e-commerce start-up?

Here are the depressing answers. The social network Snapchat, in 2011. For search, Microsoft’s Bing appeared in 2009, a replacement for its Live Search. I’m drawing a blank on an ad network. With e-commerce, the answer is probably Wayfair, which arrived in 2002, and still has only 1.3 percent of the market (most retail innovation has been in niche areas, like luggage (Away) or special fashion (The RealReal)).

To put this another way: Facebook and its Instagram unit have close to 50 percent of the social media market, dwarfing all the other companies in monthly active users tenfold. Google has about 90 percent of the search market, with Bing and Yahoo dwindling ever further behind by the month. Google and Facebook also suck up 60 percent of the digital ad spend, with only Amazon moving up aggressively in that fast-growing space. And speaking of Amazon, the retail giant has about 50 percent of total e-commerce sales in the United States, with eBay and Walmart at 7 percent and 4 percent, respectively.

Finally, it looks as though the US government is beginning to think that there might be something wrong here. Which prompts three questions:

  1. What took them so long? Was it just that they were still in thrall to Robert Bork’s The Antitrust Paradox?
  2. Have they left it too late?
  3. And how do you punish companies that can absorb a $5B fine without missing a beat?

(Interestingly, Amazon.co.uk is currently selling a paperback copy of Bork’s book for £207.02!)

Microsoft 2.0

One of the most remarkable aspects of the present is the way one tech giant has become a reformed character. Microsoft — the rapacious, bullying monster of Bill Gates’s heyday — has morphed into a good (or at least better) global citizen. It’s also insanely profitable again. In fact, just about the best thing one could have done with one’s pension fund would have been to have put a sizeable chunk of it into Microsoft stock. (The company is now worth a trillion dollars.) And every week a copy of a memo from the company’s president and chief legal officer, Brad Smith, drops into my inbox. Sometimes it contains useful and civilised ideas. No other corporate bigwig talks as much sense.

How has this transformation come about? This week the Economist has a go at identifying the things that made Gates’s creature a more tolerable behemoth. There are, it says, three lessons the other tech giants could learn from the Redmond experience under Satya Nadella’s leadership:

  1. “First, be prepared to look beyond the golden goose. Microsoft missed social networks and smartphones because of its obsession with Windows, the operating system that was its main moneyspinner. One of Mr Nadella’s most important acts after taking the helm was to deprioritise Windows. More important, he also bet big on the “cloud”—just as firms started getting comfortable with renting computing power. In the past quarter revenues at Azure, Microsoft’s cloud division, grew by 68% year on year, and it now has nearly half the market share of Amazon Web Services, the industry leader.”

  2. “Second, rapaciousness may not pay. Mr Nadella has changed Microsoft’s culture as well as its technological focus. The cult of Windows ordained that customers and partners be squeezed and rivals dispatched, often by questionable means, which led to the antitrust showdown. Mr Nadella’s predecessor called Linux and other open-source software a “cancer”. But today that rival operating system is more widely used on Azure than Windows. And many companies see Microsoft as a much less threatening technology partner than Amazon, which is always looking for new industries to enter and disrupt.”

  3. “Third, work with regulators rather than try to outwit or overwhelm them. From the start Microsoft designed Azure in such a way that it could accommodate local data-protection laws. Its president and chief legal officer, Brad Smith, has been the source of many policy proposals, such as a “Digital Geneva Convention” to protect people from cyber-attacks by nation-states. He is also behind Microsoft’s comparatively cautious use of artificial intelligence, and calls for oversight of facial recognition. The firm has been relatively untouched by the current backlash against tech firms, and is less vulnerable to new regulation.”

Fines don’t work. To control tech companies we have to hit them where it really hurts

Today’s Observer comment piece

If you want a measure of the problem society will have in controlling the tech giants, then ponder this: as it has become clear that the US Federal Trade Commission is about to impose a fine of $5bn (£4bn) on Facebook for violating a decree governing privacy breaches, the company’s share price went up!

This is a landmark moment. It’s the biggest ever fine imposed by the FTC, the body set up to police American capitalism. And $5bn is a lot of money in anybody’s language. Anybody’s but Facebook’s. It represents just a month of revenues and the stock market knew it. Facebook’s capitalisation went up $6bn with the news. This was a fine that actually increased Mark Zuckerberg’s personal wealth…

Read on

Regulating the tech giants

This from Benedict Evans’s invaluable newsletter, written in response to Chris Hughes’s long NYT op-ed arguing that Facebook should be broken up…

I think there are two sets of issues to consider here. First, when we look at Google, Facebook, Amazon and perhaps Apple, there’s a tendency to conflate concerns about the absolute size and market power of these companies (all of which are of course debatable) with concerns about specific problems: privacy, radicalization and filter bubbles, spread of harmful content, law enforcement access to encrypted messages and so on, all the way down to very micro things like app store curation. Breaking up Facebook by splitting off Instagram and WhatsApp would reduce its market power, but would have no effect at all on rumors spreading on WhatsApp, school bullying on Instagram or abusive content in the newsfeed. In the same way, splitting Youtube apart from Google wouldn’t solve radicalization. So which problem are you trying to solve?

Second, anti-trust theory, on both the diagnosis side and the remedy side, seems to be flummoxed when faced by products that are free or as cheap as possible, and that do not rely on familiar kinds of restrictive practices (the tying of Standard Oil) for their market power. The US in particular has tended to focus exclusively on price, where the EU has looked much more at competition, but neither has a good account of what exactly is wrong with Amazon (if anything – and of course it is still less than half the size of Walmart in the USA), or indeed with Facebook. Neither is there a robust theory of what, specifically, to do about it. ‘Break them up’ seems to come more from familiarity than analysis: it’s not clear how much real effect splitting off IG and WA would have on the market power of the core newsfeed, and Amazon’s retail business doesn’t have anything to split off (and no, AWS isn’t subsidizing it). We saw the same thing in Elizabeth Warren’s idea that platform owners can’t be on their own platform – which would actually mean that Google would be banned from making Google Maps for Android. So, we’ve got to the point that a lot of people want to do something, but not really much further.

This is a good summary of why the regulation issue is so perplexing. Our difficulties include the fact that we don’t have an analytical framework yet for (i) analysing the kinds of power wielded by the platforms; (ii) categorising the societal harms which the tech giants might be inflicting; or (iii) understanding how our traditional toolset for dealing with corporate power (competition law, antitrust, etc.) needs to be updated for the contemporary challenges posed by the companies.

Just after I’d read the newsletter, the next item in my inbox contained a link to a Pew survey which revealed the colossal numbers of smartphone users across the world who think they are accessing the Internet when they’re actually just using Facebook or WhatsApp. Interestingly, it’s mostly those who have some experience of hooking up to the Internet via a desktop PC who know that there’s actually a real Internet out there. But if your first experience of Internet connectivity is via a smartphone running the Facebook app (which means that your data may be free), then as far as you are concerned, Facebook is the Internet.

So Facebook has, effectively, blotted out the open Internet for a large segment of humanity. That’s also a new kind of power for which we don’t have — at the moment — a category. Just as the so-called Right to be Forgotten* recognises that Google has the power to render someone invisible. After all, in a networked world, if the dominant search engine doesn’t find you, then effectively you have ceased to exist.

  * It’s not a right to be forgotten, merely a right not to be found by Google’s search engine. The complained-of information remains on the website where it was originally published.

Toxic tech?

This morning’s Observer column:

The headline above an essay in a magazine published by the Association for Computing Machinery (ACM) caught my eye. “Facial recognition is the plutonium of AI”, it said. Since plutonium – a by-product of uranium-based nuclear power generation – is one of the most toxic materials known to humankind, this seemed like an alarmist metaphor, so I settled down to read.

The article, by a Microsoft researcher, Luke Stark, argues that facial-recognition technology – one of the current obsessions of the tech industry – is potentially so toxic for the health of human society that it should be treated like plutonium and restricted accordingly. You could spend a lot of time in Silicon Valley before you heard sentiments like these about a technology that enables computers to recognise faces in a photograph or from a camera…

Read on