How companies are addressing machine learning

From an O’Reilly newsletter:

In a recent O’Reilly survey, we found that the skills gap remains one of the key challenges holding back the adoption of machine learning. The demand for data skills (“the sexiest job of the 21st century”) hasn’t dissipated—LinkedIn recently found that demand for data scientists in the US is “off the charts,” and our survey indicated that the demand for data scientists and data engineers is strong not just in the US but globally.

With the average shelf life of a skill today at less than five years and the cost to replace an employee estimated at between six and nine months of the position’s salary, there’s increasing pressure on tech leaders to retain and upskill rather than replace their employees in order to keep data projects (such as machine learning implementations) on track. We’re also seeing more training programs aimed at executives and decision makers, who need to understand how these new ML technologies can impact their current operations and products.

Beyond investments in narrowing the skills gap, companies are beginning to put processes in place for their data science projects, for example creating analytics centers of excellence that centralize capabilities and share best practices. Some companies are also actively maintaining a portfolio of use cases and opportunities for ML.

Note the average shelf-life of a skill and then ponder why the UK government is not boosting the Open University.

Reflections on AlphaZero

Steven Strogatz in the New York Times:

All of that has changed with the rise of machine learning. By playing against itself and updating its neural network as it learned from experience, AlphaZero discovered the principles of chess on its own and quickly became the best player ever. Not only could it have easily defeated all the strongest human masters — it didn’t even bother to try — it crushed Stockfish, the reigning computer world champion of chess. In a hundred-game match against a truly formidable engine, AlphaZero scored twenty-eight wins and seventy-two draws. It didn’t lose a single game.

Most unnerving was that AlphaZero seemed to express insight. It played like no computer ever has, intuitively and beautifully, with a romantic, attacking style. It played gambits and took risks. In some games it paralyzed Stockfish and toyed with it. While conducting its attack in Game 10, AlphaZero retreated its queen back into the corner of the board on its own side, far from Stockfish’s king, not normally where an attacking queen should be placed.

Yet this peculiar retreat was venomous: No matter how Stockfish replied, it was doomed. It was almost as if AlphaZero was waiting for Stockfish to realize, after billions of brutish calculations, how hopeless its position truly was, so that the beast could relax and expire peacefully, like a vanquished bull before a matador. Grandmasters had never seen anything like it. AlphaZero had the finesse of a virtuoso and the power of a machine. It was humankind’s first glimpse of an awesome new kind of intelligence.

Hmmm… It’s important to remember that board games are a very narrow domain. In a way it’s not surprising that machines are good at playing them. But it’s undeniable that AlphaZero is remarkable.

Microsoft President: It’s time to regulate face-recognition technology

Interesting post by Brad Smith on the company’s Issues blog:

In July, we shared our views about the need for government regulation and responsible industry measures to address advancing facial recognition technology. As we discussed, this technology brings important and even exciting societal benefits but also the potential for abuse. We noted the need for broader study and discussion of these issues. In the ensuing months, we’ve been pursuing these issues further, talking with technologists, companies, civil society groups, academics and public officials around the world. We’ve learned more and tested new ideas. Based on this work, we believe it’s important to move beyond study and discussion. The time for action has arrived.

We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law…

Coincidentally, the New Yorker has an interesting essay — “Should we be worried about computerized facial recognition?”

We already know what it’s like to live under Artificial Intelligences

This morning’s Observer column:

In 1965, the mathematician I J “Jack” Good, one of Alan Turing’s code-breaking colleagues during the second world war, started to think about the implications of what he called an “ultra-intelligent” machine – ie “a machine that can surpass all the intellectual activities of any man, however clever”. If we were able to create such a machine, he mused, it would be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control”.

Note the proviso. Good’s speculation has lingered long in our collective subconscious, occasionally giving rise to outbreaks of fevered speculation. These generally focus on two questions. How long will it take us to create superintelligent machines? And what will it be like for humans to live with – or under – such machines? Will they rapidly conclude that people are a waste of space? Does the superintelligent machine pose an existential risk for humanity?

The answer to the first question can be summarised as “longer than you think”. And as for the second question, well, nobody really knows. How could they? Surely we’d need to build the machines first and then we’d find out. Actually, that’s not quite right. It just so happens that history has provided us with some useful insights into what it’s like to live with – and under – superintelligent machines.

They’re called corporations, and they’ve been around for a very long time – since about 1600, in fact…

Read on

Blind faith in Moore’s Law sometimes leads to dead ends

John Thornhill had an interesting column in the Financial Times the other day (sadly, behind a paywall) about Moore’s Law and the struggles of the tech industry to overcome the physical barriers to its continuance.

This led me to brood on one of the under-discussed aspects of the Law, namely the way it has enabled the AI crowd to dodge really awkward questions for years. It works like this: if the standard-issue AI of the moment proves unable to perform a particular task or solve a particular problem, the strategy is to say (confidently): “yes, but Moore’s Law will eventually provide the computing power to crack it”.

And sometimes that’s true. The difficulty, though, is that it assumes all problems are tractable — i.e. ultimately computable. But some problems are almost certainly not computable, and so our psychic addiction to Moore’s Law sometimes leads us to pursue avenues that are, ultimately, dead ends. Few people dare admit that, especially when hype-storms are blowing furiously.
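To make the point concrete, here is a toy sketch (my own illustration, nothing to do with Thornhill’s column) of the best-known non-computable problem, the halting problem. The names are invented: candidate_halts stands in for any proposed decider, and the construction guarantees that whatever it answers about defeater run on itself will be wrong; it is the kind of dead end that no amount of extra silicon can fix.

```python
# Toy sketch of a non-computable problem: the halting problem.
# All names here are invented for illustration.

def candidate_halts(program, arg):
    """Stand-in for any proposed halting decider: claims to say whether
    program(arg) would halt. Every real candidate must be wrong somewhere."""
    return True  # placeholder guess

def defeater(program):
    """Built to do the opposite of whatever the decider predicts about
    running 'program' on itself."""
    if candidate_halts(program, program):
        while True:      # the decider said "halts", so loop forever
            pass
    return "halted"      # the decider said "loops", so halt immediately

# Whatever candidate_halts claims about defeater(defeater), it is wrong:
# here it predicts "halts", yet defeater(defeater) would loop forever.
# Extra computing power cannot repair that kind of impossibility.
print(candidate_halts(defeater, defeater))  # prints the doomed prediction: True
```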


Automation isn’t just about technology

This morning’s Observer column:

Ideology is what determines how you think when you don’t know you’re thinking. Neoliberalism is a prime example. Less well-known but equally insidious is technological determinism, which is a theory about how technology affects development. It comes in two flavours. One says that there is an inexorable internal logic in how technologies evolve. So, for example, when we got to the point where massive processing power and large quantities of data became easily available, machine-learning was an inevitable next step.

The second flavour of determinism – the most influential one – takes the form of an unshakable conviction that technology is what really drives history. And it turns out that most of us are infected with this version.

It manifests itself in many ways…

Read on

Managing the future that’s already here

This morning’s Observer column:

As the science fiction novelist William Gibson famously observed: “The future is already here – it’s just not very evenly distributed.” I wish people would pay more attention to that adage whenever the subject of artificial intelligence (AI) comes up. Public discourse about it invariably focuses on the threat (or promise, depending on your point of view) of “superintelligent” machines, ie ones that display human-level general intelligence, even though such devices have been 20 to 50 years away ever since we first started worrying about them. The likelihood (or mirage) of such machines still remains a distant prospect, a point made by the leading AI researcher Andrew Ng, who said that he worries about superintelligence in the same way that he frets about overpopulation on Mars.

That seems about right to me…

Read on

A different way of thinking about thinking

Fascinating interview on Edge.org with Tom Griffiths of Berkeley. For me, the most interesting passage is this:

One of the mysteries of human intelligence is that we’re able to do so much with so little. We’re able to act in ways that are so intelligent despite the fact that we have limited computational resources—basically just the stuff that we can carry around inside our heads. But we’re good at coming up with strategies for solving problems that make the best use of those limited computational resources. You can formulate that as another kind of computational problem in itself.

If you have certain computational resources and certain costs for using them, can you come up with the best algorithm for solving a problem, using those computational resources, trading off the errors you might make and solving the problem with the cost of using the resources you have or the limitations that are imposed upon those resources? That approach gives us a different way of thinking about what constitutes rational behavior.

The classic standard of rational behavior, which is used in economics and which motivated a lot of the human decision-making literature, focused on the idea of rationality in terms of finding the right answer without any thought as to the computational costs that might be involved.

This gives us a more nuanced and more realistic notion of rationality, a notion that is relevant to any organism or machine that faces physical constraints on the resources that are available to it. It says that you are being rational when you’re using the best algorithm to solve the problem, taking into account both your computational limitations and the kinds of errors that you might end up making.

This approach, which my colleague Stuart Russell calls “bounded optimality,” gives us a new way of understanding human cognition. We take examples of things that have been held up as evidence of irrationality, examples of things where people are solving a problem but not doing it in the best way, and we can try and make sense of those. More importantly, it sets up a way of asking questions about how people get to be so smart. How is it that we find those effective strategies? That’s a problem that we call “rational metareasoning.” How should a rational agent who has limitations on their computational resources find the best strategies for using those resources?

Worth reading (or watching or listening to) in full.
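As a way of fixing the idea, here is a toy sketch of my own (not Griffiths’s model; the strategy names and numbers are invented): each strategy carries an expected error and a computation cost, and the boundedly optimal choice is simply the one that minimises their weighted sum.

```python
# Toy version of "bounded optimality": pick the strategy that minimises
# expected error plus the (weighted) cost of the computation it needs.
# Strategy names and numbers are invented for illustration.

strategies = {
    "instant_guess":   {"expected_error": 0.30, "compute_cost": 0.01},
    "quick_heuristic": {"expected_error": 0.10, "compute_cost": 0.05},
    "exhaustive":      {"expected_error": 0.00, "compute_cost": 0.50},
}

def boundedly_optimal(strategies, cost_per_unit_compute):
    """Return the name of the strategy with the lowest total expected cost."""
    return min(
        strategies,
        key=lambda name: strategies[name]["expected_error"]
        + cost_per_unit_compute * strategies[name]["compute_cost"],
    )

# When thinking is cheap, careful deliberation is rational;
# as it gets dearer, a fast heuristic and then a plain guess win out.
print(boundedly_optimal(strategies, cost_per_unit_compute=0.1))   # exhaustive
print(boundedly_optimal(strategies, cost_per_unit_compute=2.0))   # quick_heuristic
print(boundedly_optimal(strategies, cost_per_unit_compute=20.0))  # instant_guess
```

Turn up the price of computation and the rational choice shifts from deliberation to a quick heuristic to a plain guess, which is roughly the point the interview is making.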

I can see the point of trying to understand why humans are so good at some things. The capacity to make rapid causal inferences was probably hardwired into our DNA by evolution — it’s ‘System 1’ in the categorisation proposed in Daniel Kahneman’s book, Thinking, Fast and Slow, i.e. a capacity for fast, instinctive and emotional thinking — the kind of thinking that was crucial for survival in primeval times. But the other — equally important — question is why humans seem to be so bad at Kahneman’s ‘System 2’ thinking — i.e. slower, more deliberative and more logical reasoning. Maybe it’s because our evolutionary inheritance was laid down in a simpler era, and we’re just not adapted to handle the complexity with which (as a result of our technological ingenuity) we are now confronted?

This has interesting contemporary resonances: climate change denial, fake news, populism and the tensions between populism and technocracy.


Despite the hype, AI is stuck

Interesting essay by Gary Marcus. I particularly like this bit:

Although the field of A.I. is exploding with microdiscoveries, progress toward the robustness and flexibility of human cognition remains elusive. Not long ago, for example, while sitting with me in a cafe, my 3-year-old daughter spontaneously realized that she could climb out of her chair in a new way: backward, by sliding through the gap between the back and the seat of the chair. My daughter had never seen anyone else disembark in quite this way; she invented it on her own — and without the benefit of trial and error, or the need for terabytes of labeled data.

Presumably, my daughter relied on an implicit theory of how her body moves, along with an implicit theory of physics — how one complex object travels through the aperture of another. I challenge any robot to do the same. A.I. systems tend to be passive vessels, dredging through data in search of statistical correlations; humans are active engines for discovering how things work.

Marcus thinks that a new paradigm is needed for AI, one that places “top-down” knowledge (cognitive models of the world and how it works) and “bottom-up” knowledge (the kind of raw information we get directly from our senses) on an equal footing. “Deep learning”, he writes,

“is very good at bottom-up knowledge, like discerning which patterns of pixels correspond to golden retrievers as opposed to Labradors. But it is no use when it comes to top-down knowledge. If my daughter sees her reflection in a bowl of water, she knows the image is illusory; she knows she is not actually in the bowl. To a deep-learning system, though, there is no difference between the reflection and the real thing, because the system lacks a theory of the world and how it works. Integrating that sort of knowledge of the world may be the next great hurdle in A.I., a prerequisite to grander projects like using A.I. to advance medicine and scientific understanding.”

Yep: ‘superintelligence’ is farther away than we think.
