This morning’s Observer column on the EU’s plans for regulating AI and data:
Once you get beyond the mandatory euro-boosting rhetoric about how the EU’s “technological and industrial strengths”, “high-quality digital infrastructure” and “regulatory framework based on its fundamental values” will enable Europe to become “a global leader in innovation in the data economy and its applications”, the white paper seems quite sensible. But, as with all documents on how actually to deal with AI, it falls back on the conventional bromides about human agency and oversight, privacy and governance, diversity, non-discrimination and fairness, societal wellbeing, accountability and that old favourite, “transparency”. The only discernible omissions are motherhood and apple pie.
But this is par for the course with AI at the moment: the discourse is invariably three parts generalities and two parts virtue-signalling, leavened with a smattering of pious hopes. It’s got to the point where one longs for some plain speaking and common sense.
And, as luck would have it, along it comes in the shape of Sir David Spiegelhalter, an eminent Cambridge statistician and former president of the Royal Statistical Society. He has spent his life trying to teach people how to understand statistical reasoning, and last month published a really helpful article in the Harvard Data Science Review on the question “Should we trust algorithms?”