Quote of the Day

“Benedict’s Law of Headlines: If an opinion piece uses ‘Artificial Intelligence’ instead of ‘machine learning’, you know in advance that its arguments will be weak.”

Benedict Evans.

Shooting yourself in the behind

Now here is something you could not make up:

The Netherlands’ Defense Safety Inspection Agency (Inspectie Veiligheid Defensie) is investigating an incident during a January military exercise in which a Dutch Air Force F-16 was damaged by live fire from a 20-millimeter cannon—its own 20-millimeter cannon. At least one round fired from the aircraft’s M61A1 Vulcan Gatling gun struck the aircraft as it fired at targets on the Dutch military’s Vliehors range on the island of Vlieland, according to a report from the Netherlands’ NOS news service.

Two F-16s were conducting firing exercises on January 21. It appears that the damaged aircraft actually caught up with the 20mm rounds it fired as it pulled out of its firing run. At least one of them struck the side of the F-16’s fuselage, and parts of a round were ingested by the aircraft’s engine. The F-16’s pilot managed to land the aircraft safely at Leeuwarden Air Base.

The incident illustrates why a cannon is perhaps a less than ideal weapon for a high-performance jet. The Vulcan can fire over 6,000 rounds per minute, but its magazine holds only 511 rounds, just enough for five seconds of fury. The rounds leave the muzzle at 3,450 feet per second (1,050 meters per second), a speed initially boosted by the aircraft's own velocity, but atmospheric drag slows the shells steadily. And if a pilot accelerates and maneuvers in the wrong way after firing the cannon, the aircraft can be unexpectedly reunited with its recently departed rounds.
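For the curious, the physics can be sketched in a few lines of Python. This is a deliberately crude one-dimensional model with assumed parameters (shell mass, drag coefficient, sea-level air density, a jet holding a constant 200 m/s), not a reconstruction of the actual incident, which involved a diving firing run and pull-out. The point it illustrates is simply that drag bleeds the shell's speed quickly while the jet keeps going, so the jet eventually overtakes its own rounds:

```python
# Hypothetical parameters, for illustration only.
RHO = 1.2          # air density, kg/m^3 (sea level, assumed)
CD = 0.30          # drag coefficient of the shell (assumed)
DIAMETER = 0.020   # 20 mm round
MASS = 0.10        # shell mass, kg (assumed)
AREA = 3.14159 * (DIAMETER / 2) ** 2
K = RHO * CD * AREA / (2 * MASS)   # quadratic-drag constant, per metre

def time_to_catch(jet_speed=200.0, muzzle_velocity=1050.0, dt=0.01):
    """Euler-step the shell and the jet until the jet overtakes its own round.

    Returns (time in seconds, distance flown in metres) at the overtake.
    """
    v_round = jet_speed + muzzle_velocity   # ground-frame speed at firing
    x_round = x_jet = 0.0
    t = 0.0
    while x_jet < x_round or t == 0.0:
        v_round -= K * v_round ** 2 * dt    # drag decelerates the shell
        x_round += v_round * dt
        x_jet += jet_speed * dt             # jet flies on at constant speed
        t += dt
    return t, x_jet

t, d = time_to_catch()
print(f"jet overtakes its own round after ~{t:.0f} s, ~{d / 1000:.1f} km downrange")
```

With these made-up numbers the overtake takes a few tens of seconds of straight flight; in the real incident the geometry of the dive and pull-out closed the gap far faster.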

Lovely, isn’t it?

Quote of the Day

“If it’s your algorithm, it’s your responsibility. This is the only way that we can sort of sustain a world where we know who is responsible for what.”

Margrethe Vestager, EU Competition Commissioner

Lessons of Brexit #564

From Jonathan Freedland, pointing out two important things about the EU that Britons might not have appreciated in 2016.

The EU tends to get its way, as it will again next week when it once more dictates extension terms. It’s a big bloc with serious clout, an equal across the table when it faces the world’s other two economic superpowers, China and the US. When Britain comes to negotiate a trade deal with Donald Trump, we’ll get eaten for breakfast – with a side dish of chlorinated chicken. But in the EU, Washington or Beijing meet their match.

The same goes for tackling the other major forces shaping our lives. Last month, the EU fined Google $1.7bn for choking competition in the advertising market. Apple and Facebook are in Brussels’ sights too, as the EU looks to give individuals control over their own data and the money it generates. According to the Economist: “Europe is edging towards cracking the big-tech puzzle.”

If that’s what the EU can achieve as a group, look what it can do for an individual member state. The key obstacle to passage of May’s deal has been the Northern Ireland backstop. Why has that issue persisted? Because the EU has thrown its collective weight behind the border concerns of a single, small member – Ireland. For several centuries, an iron rule of any dispute between Ireland and Britain was that Britain, the bigger nation, would always win. Not any more. Because Ireland is now part of a bigger bloc. The backstop has made vivid what perhaps was abstract in the British imagination: that by pooling together with other nations, a country might give up a modicum of theoretical sovereignty, but it gains a whole lot of practical strength. Britain used to benefit from that obvious fact of geopolitics; now we are suffering from it. In an arm-wrestle with our once-weak neighbour, we are being outmuscled.

The beach… at last

I love the beach at Maghera in Co Donegal. Part of its charm derives from anticipation generated by the walk through sandy dunes before you reach the beach. And then you see this…

Regulation of tech companies: lessons from history

Interesting essay by the economist Kenneth Rogoff. Commenting on the fierce pushback against Senator Elizabeth Warren for daring to suggest that even if many services seem to be provided for free, there might still be something wrong, he writes:

There was the same kind of pushback from the financial sector fifteen years ago, and from the railroads back in the late 1800s. Writing in the March 1881 issue of The Atlantic, the progressive activist Henry Demarest Lloyd warned that, “Our treatment of ‘the railroad problem’ will show the quality and caliber of our political sense. It will go far in foreshadowing the future lines of our social and political growth. It may indicate whether the American democracy, like all the democratic experiments which have preceded it, is to become extinct because the people had not wit enough or virtue enough to make the common good supreme.” Lloyd’s words still ring true today.

At this point, ideas for regulating Big Tech are just sketches, and of course more serious analysis is warranted. An open, informed discussion that is not squelched by lobbying dollars is a national imperative. The debate that Warren has joined is not about whether to establish socialism. It is about making capitalist competition fairer and, ultimately, stronger.

Yep.

Google’s big move into ethics-theatre backfires

This morning’s Observer column:

Given that the tech giants, which have been ethics-free zones from their foundations, owe their spectacular growth partly to the fact that they have, to date, been entirely untroubled either by legal regulation or scruples about exploiting taxation loopholes, this Damascene conversion is surely something to be welcomed, is it not? Ethics, after all, is concerned with the moral principles that affect how individuals make decisions and how they lead their lives.

That charitable thought is unlikely to survive even a cursory inspection of what is actually going on here. In an admirable dissection of the fourth of Google’s “principles” (“Be accountable to people”), for example, Prof David Watts reveals that, like almost all of these principles, it has the epistemological status of pocket lint or those exhortations to be kind to others one finds on evangelical websites. Does it mean accountable to “people” in general? Or just to Google’s people? Or to someone else’s people (like an independent regulator)? Answer comes there none from the code.

Warming to his task, Prof Watts continues: “If Google’s AI algorithms mistakenly conclude I am a terrorist and then pass this information on to national security agencies who use the information to arrest me, hold me incommunicado and interrogate me, will Google be accountable for its negligence or for contributing to my false imprisonment? How will it be accountable? If I am unhappy with Google’s version of accountability, to whom do I appeal for justice?”

Quite so. But then Google goes and doubles down on absurdity with its prestigious “advisory council” that “will consider some of Google’s most complex challenges that arise under our AI Principles, such as facial recognition and fairness in machine learning, providing diverse perspectives to inform our work”…

Read on

After I’d written the column, Google announced that it was dissolving its ethics advisory council. So we had to add this:

Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity.

That still leaves the cynical absurdity of Google’s AI ‘principles’ to be addressed, though.