Regulation of tech companies: lessons from history

Interesting essay by the economist Kenneth Rogoff. Commenting on the fierce pushback against Senator Elizabeth Warren for daring to suggest that something might still be wrong with Big Tech even when many of its services seem to be provided for free, he writes:

There was the same kind of pushback from the financial sector fifteen years ago, and from the railroads back in the late 1800s. Writing in the March 1881 issue of The Atlantic, the progressive activist Henry Demarest Lloyd warned that “Our treatment of ‘the railroad problem’ will show the quality and caliber of our political sense. It will go far in foreshadowing the future lines of our social and political growth. It may indicate whether the American democracy, like all the democratic experiments which have preceded it, is to become extinct because the people had not wit enough or virtue enough to make the common good supreme.” Lloyd’s words still ring true today.

At this point, ideas for regulating Big Tech are just sketches, and of course more serious analysis is warranted. An open, informed discussion that is not squelched by lobbying dollars is a national imperative. The debate that Warren has joined is not about whether to establish socialism. It is about making capitalist competition fairer and, ultimately, stronger.

Yep.

Google’s big move into ethics-theatre backfires

This morning’s Observer column:

Given that the tech giants, which have been ethics-free zones from their foundations, owe their spectacular growth partly to the fact that they have, to date, been entirely untroubled either by legal regulation or scruples about exploiting taxation loopholes, this Damascene conversion is surely something to be welcomed, is it not? Ethics, after all, is concerned with the moral principles that affect how individuals make decisions and how they lead their lives.

That charitable thought is unlikely to survive even a cursory inspection of what is actually going on here. In an admirable dissection of the fourth of Google’s “principles” (“Be accountable to people”), for example, Prof David Watts reveals that, like almost all of these principles, it has the epistemological status of pocket lint or those exhortations to be kind to others one finds on evangelical websites. Does it mean accountable to “people” in general? Or just to Google’s people? Or to someone else’s people (like an independent regulator)? Answer comes there none from the code.

Warming to his task, Prof Watts continues: “If Google’s AI algorithms mistakenly conclude I am a terrorist and then pass this information on to national security agencies who use the information to arrest me, hold me incommunicado and interrogate me, will Google be accountable for its negligence or for contributing to my false imprisonment? How will it be accountable? If I am unhappy with Google’s version of accountability, to whom do I appeal for justice?”

Quite so. But then Google goes and doubles down on absurdity with its prestigious “advisory council” that “will consider some of Google’s most complex challenges that arise under our AI Principles, such as facial recognition and fairness in machine learning, providing diverse perspectives to inform our work”…

Read on

After I’d written the column, Google announced that it was dissolving its ethics advisory council. So we had to add this:

Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity.

That still leaves the cynical absurdity of Google’s AI ‘principles’ to be addressed, though.