“To ask whether computers can think is like asking whether submarines can swim”
This morning’s Observer column:
Artificial intelligence (AI) is a term that is now widely used (and abused), loosely defined and mostly misunderstood. Much the same might be said of, say, quantum physics. But there is one important difference, for whereas quantum phenomena are not likely to have much of a direct impact on the lives of most people, one particular manifestation of AI – machine-learning – is already having a measurable impact on most of us.
The tech giants that own and control the technology have plans to exponentially increase that impact and to that end have crafted a distinctive narrative. Crudely summarised, it goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.”
Critical analysis of this narrative suggests that the formula for creating it involves mixing one part fact with three parts self-serving corporate cant and one part tech-fantasy emitted by geeks who regularly inhale their own exhaust…
The Reuters Institute in Oxford has just published a really valuable study of how AI is covered in mainstream media, based on an analysis of eight months of reporting on AI in six mainstream UK news outlets.
The study’s basic conclusion is that UK media coverage of artificial intelligence is dominated by industry products, announcements and research. Coverage frequently amplifies self-interested assertions of AI’s value and potential, while positioning the technology primarily as a private commercial concern and undercutting the role of public action in addressing AI.
Nearly 60% of articles focused on new industry products, announcements and initiatives that include AI, from smartphones or running shoes to sex robots or brain preservation. Outlets also regularly covered industry promotional events, start-ups, buyouts, investments and conferences.
One third (33%) of articles were based on industry sources – mostly CEOs or other senior executives – six times as many as those from government and nearly twice as many as those from academia.
12% of articles referenced the technology entrepreneur Elon Musk.
AI products are often portrayed as a relevant and competent solution to a range of public problems, from cancer and renewable energy to coffee delivery. Journalists or commentators rarely question whether AI-containing technologies are the best solutions to such problems, or acknowledge ongoing debates concerning AI’s potential effects.
Media coverage of AI is being politicised: right-leaning news outlets highlight issues of economics and geopolitics; left-leaning news outlets highlight issues of ethics, including discrimination, algorithmic bias and privacy.
The report’s lead author, J. Scott Brennen, observed that
“by amplifying industry’s self-interested claims about AI, media coverage presents AI as a solution to a range of problems that will disrupt nearly all areas of our lives, often without acknowledging ongoing debates concerning AI’s potential effects. In this way, coverage also positions AI mostly as a private commercial concern and undercuts the role and potential of public action in addressing this emerging public issue.”
That sounds just about right to me. This is a terrific piece of work.
From an O’Reilly newsletter:
In a recent O’Reilly survey, we found that the skills gap remains one of the key challenges holding back the adoption of machine learning. The demand for data skills (“the sexiest job of the 21st century”) hasn’t dissipated—LinkedIn recently found that demand for data scientists in the US is “off the charts,” and our survey indicated that the demand for data scientists and data engineers is strong not just in the US but globally.
With the average shelf life of a skill today at less than five years and the cost to replace an employee estimated at between six and nine months of the position’s salary, there’s increasing pressure on tech leaders to retain and upskill rather than replace their employees in order to keep data projects (such as machine learning implementations) on track. We’re also seeing more training programs aimed at executives and decision makers, who need to understand how these new ML technologies can impact their current operations and products.
Beyond investments in narrowing the skills gap, companies are beginning to put processes in place for their data science projects, for example creating analytics centers of excellence that centralize capabilities and share best practices. Some companies are also actively maintaining a portfolio of use cases and opportunities for ML.
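The retention arithmetic quoted above is easy to make concrete. A minimal sketch (the $120k salary is a hypothetical figure of my own, not from the newsletter; only the six-to-nine-months rule of thumb comes from the excerpt):

```python
# The O'Reilly excerpt puts the cost of replacing an employee at six to
# nine months of the position's salary. This turns that rule of thumb
# into a cost range for an illustrative annual salary.
def replacement_cost(annual_salary, months_low=6, months_high=9):
    """Estimated (low, high) cost range of replacing an employee."""
    monthly = annual_salary / 12
    return monthly * months_low, monthly * months_high

# Hypothetical example: a data scientist on $120,000 a year.
low, high = replacement_cost(120_000)
print(low, high)  # 60000.0 90000.0
```

On those assumptions, replacing one mid-level data scientist costs the better part of a year of salary, which is the pressure towards upskilling the newsletter is pointing at.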
Note the average shelf-life of a skill and then ponder why the UK government is not boosting the Open University.
Steven Strogatz in the New York Times:
All of that has changed with the rise of machine learning. By playing against itself and updating its neural network as it learned from experience, AlphaZero discovered the principles of chess on its own and quickly became the best player ever. Not only could it have easily defeated all the strongest human masters — it didn’t even bother to try — it crushed Stockfish, the reigning computer world champion of chess. In a hundred-game match against a truly formidable engine, AlphaZero scored twenty-eight wins and seventy-two draws. It didn’t lose a single game.
Most unnerving was that AlphaZero seemed to express insight. It played like no computer ever has, intuitively and beautifully, with a romantic, attacking style. It played gambits and took risks. In some games it paralyzed Stockfish and toyed with it. While conducting its attack in Game 10, AlphaZero retreated its queen back into the corner of the board on its own side, far from Stockfish’s king, not normally where an attacking queen should be placed.
Yet this peculiar retreat was venomous: No matter how Stockfish replied, it was doomed. It was almost as if AlphaZero was waiting for Stockfish to realize, after billions of brutish calculations, how hopeless its position truly was, so that the beast could relax and expire peacefully, like a vanquished bull before a matador. Grandmasters had never seen anything like it. AlphaZero had the finesse of a virtuoso and the power of a machine. It was humankind’s first glimpse of an awesome new kind of intelligence.
Hmmm… It’s important to remember that board games are a very narrow domain. In a way it’s not surprising that machines are good at playing them. But it’s undeniable that AlphaZero is remarkable.
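The match figures Strogatz quotes are worth spelling out. Under standard chess match scoring (a win counts one point, a draw half a point), 28 wins and 72 draws is not a narrow result:

```python
# Standard chess match scoring: win = 1 point, draw = 0.5, loss = 0.
# Figures from the 100-game AlphaZero vs Stockfish match quoted above.
alphazero_wins, draws, alphazero_losses = 28, 72, 0

alphazero_score = alphazero_wins * 1.0 + draws * 0.5
stockfish_score = alphazero_losses * 1.0 + draws * 0.5

print(f"AlphaZero {alphazero_score} – Stockfish {stockfish_score}")
# AlphaZero 64.0 – Stockfish 36.0
```

A 64–36 score against the reigning computer champion, without losing a game, is a rout by the standards of top-level chess.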
Interesting post by Brad Smith on Microsoft’s Issues blog:
In July, we shared our views about the need for government regulation and responsible industry measures to address advancing facial recognition technology. As we discussed, this technology brings important and even exciting societal benefits but also the potential for abuse. We noted the need for broader study and discussion of these issues. In the ensuing months, we’ve been pursuing these issues further, talking with technologists, companies, civil society groups, academics and public officials around the world. We’ve learned more and tested new ideas. Based on this work, we believe it’s important to move beyond study and discussion. The time for action has arrived.
We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.
In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law…
Coincidentally, the New Yorker has an interesting essay — “Should we be worried about computerized facial recognition?”
Nice Bloomberg profile of Geoff Hinton, who is — fittingly — the great-great-grandson of the logician George Boole whose work eventually became one of the foundations of modern computer science.
This morning’s Observer column:
In 1965, the mathematician I J “Jack” Good, one of Alan Turing’s code-breaking colleagues during the second world war, started to think about the implications of what he called an “ultra-intelligent” machine – ie “a machine that can surpass all the intellectual activities of any man, however clever”. If we were able to create such a machine, he mused, it would be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control”.
Note the proviso. Good’s speculation has lingered long in our collective subconscious, occasionally giving rise to outbreaks of fevered speculation. These generally focus on two questions. How long will it take us to create superintelligent machines? And what will it be like for humans to live with – or under – such machines? Will they rapidly conclude that people are a waste of space? Does the superintelligent machine pose an existential risk for humanity?
The answer to the first question can be summarised as “longer than you think”. And as for the second question, well, nobody really knows. How could they? Surely we’d need to build the machines first and then we’d find out. Actually, that’s not quite right. It just so happens that history has provided us with some useful insights into what it’s like to live with – and under – superintelligent machines.
They’re called corporations, and they’ve been around for a very long time – since about 1600, in fact…
John Thornhill had an interesting column in the Financial Times the other day (sadly, behind a paywall) about Moore’s Law and the struggles of the tech industry to overcome the physical barriers to its continuance.
This led me to brood on one of the under-discussed aspects of the Law, namely the way it has enabled the AI crowd to dodge really awkward questions for years. It works like this: if the standard-issue AI of a particular moment proves unable to perform a particular task or solve a particular problem, the strategy is to say (confidently): “Yes, but Moore’s Law will eventually provide the computing power to crack it.”
And sometimes that’s true. The difficulty, though, is that it assumes that all problems are tractable, i.e. ultimately computable. But some tasks and problems are almost certainly not computable. And so there are times when our psychic addiction to Moore’s Law leads us to pursue avenues that are, ultimately, dead ends. But few people dare admit that, especially when hype-storms are blowing furiously.
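The exponential assumption behind the “Moore’s Law will crack it” move is easy to sketch. Assuming the conventional formulation of one doubling roughly every two years (the exact period is an assumption here, and the real curve has been flattening), the available compute grows like this:

```python
# Moore's Law treated as a pure exponential: compute (proxied by
# transistor counts) doubles once every `doubling_period` years.
def moores_law_factor(years, doubling_period=2.0):
    """Growth factor in available compute after `years`."""
    return 2 ** (years / doubling_period)

print(round(moores_law_factor(10)))  # ~32x in a decade
print(round(moores_law_factor(20)))  # ~1024x in two decades
```

Which is exactly why the dodge is seductive: a 1,000-fold increase in compute every twenty years really does crack many merely-hard problems. But no growth factor, however large, makes a non-computable problem computable.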
This morning’s Observer column:
Ideology is what determines how you think when you don’t know you’re thinking. Neoliberalism is a prime example. Less well-known but equally insidious is technological determinism, which is a theory about how technology affects development. It comes in two flavours. One says that there is an inexorable internal logic in how technologies evolve. So, for example, when we got to the point where massive processing power and large quantities of data became easily available, machine-learning was an inevitable next step.
The second flavour of determinism – the most influential one – takes the form of an unshakable conviction that technology is what really drives history. And it turns out that most of us are infected with this version.
It manifests itself in many ways…