How companies are addressing machine learning

From an O’Reilly newsletter:

In a recent O’Reilly survey, we found that the skills gap remains one of the key challenges holding back the adoption of machine learning. The demand for data skills (“the sexiest job of the 21st century”) hasn’t dissipated—LinkedIn recently found that demand for data scientists in the US is “off the charts,” and our survey indicated that the demand for data scientists and data engineers is strong not just in the US but globally.

With the average shelf life of a skill today at less than five years and the cost to replace an employee estimated at between six and nine months of the position’s salary, there’s increasing pressure on tech leaders to retain and upskill rather than replace their employees in order to keep data projects (such as machine learning implementations) on track. We’re also seeing more training programs aimed at executives and decision makers, who need to understand how these new ML technologies can impact their current operations and products.

Beyond investments in narrowing the skills gap, companies are beginning to put processes in place for their data science projects, for example creating analytics centers of excellence that centralize capabilities and share best practices. Some companies are also actively maintaining a portfolio of use cases and opportunities for ML.

Note the average shelf life of a skill and then ponder why the UK government is not boosting the Open University.

What the Internet tells us about human nature

This morning’s Observer column:

When the internet first entered public consciousness in the early 1990s, a prominent media entrepreneur described it as a “sit up” rather than a “lean back” medium. What she meant was that it was quite different from TV, which encouraged passive consumption by a species of human known universally as the couch potato. The internet, some of us fondly imagined, would be different; it would encourage/enable people to become creative generators of their own content.

Spool forward a couple of decades and we are sadder and wiser. On any given weekday evening in many parts of the world, more than half of the data traffic on the internet is accounted for by video streaming to couch potatoes worldwide. (Except that many of them may not be sitting on couches, but watching on their smartphones in a variety of locations and postures.) The internet has turned into billion-channel TV.

That explains, for example, why Netflix came from nowhere to be such a dominant company. But although it’s a huge player in the video world, Netflix may not be the biggest. That role falls to something that is rarely mentioned in polite company, namely pornography…

Read on

Reflections on AlphaZero

Steven Strogatz in the New York Times:

All of that has changed with the rise of machine learning. By playing against itself and updating its neural network as it learned from experience, AlphaZero discovered the principles of chess on its own and quickly became the best player ever. Not only could it have easily defeated all the strongest human masters — it didn’t even bother to try — it crushed Stockfish, the reigning computer world champion of chess. In a hundred-game match against a truly formidable engine, AlphaZero scored twenty-eight wins and seventy-two draws. It didn’t lose a single game.

Most unnerving was that AlphaZero seemed to express insight. It played like no computer ever has, intuitively and beautifully, with a romantic, attacking style. It played gambits and took risks. In some games it paralyzed Stockfish and toyed with it. While conducting its attack in Game 10, AlphaZero retreated its queen back into the corner of the board on its own side, far from Stockfish’s king, not normally where an attacking queen should be placed.

Yet this peculiar retreat was venomous: No matter how Stockfish replied, it was doomed. It was almost as if AlphaZero was waiting for Stockfish to realize, after billions of brutish calculations, how hopeless its position truly was, so that the beast could relax and expire peacefully, like a vanquished bull before a matador. Grandmasters had never seen anything like it. AlphaZero had the finesse of a virtuoso and the power of a machine. It was humankind’s first glimpse of an awesome new kind of intelligence.

Hmmm… It’s important to remember that board games are a very narrow domain. In a way it’s not surprising that machines are good at playing them. But it’s undeniable that AlphaZero is remarkable.
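AlphaZero’s real training pipeline combines deep neural networks with Monte Carlo tree search, but the core idea Strogatz describes — a program learning a game purely from the feedback of playing against itself — can be illustrated with a toy. The sketch below (entirely my own illustration, with made-up names and parameters; nothing here comes from DeepMind’s code) teaches itself one-pile Nim: take one or two stones, and whoever takes the last stone wins. After enough self-play games it discovers, unaided, that multiples of three are losing positions.

```python
import random

random.seed(0)

def train(pile_size=10, episodes=30000, alpha=0.05, eps=0.1):
    """Learn one-pile Nim by self-play.

    value[n] estimates the chance that the player to move wins
    with n stones remaining. Both 'players' share the same table
    and mostly play greedily, with occasional random exploration.
    """
    value = {n: 0.5 for n in range(1, pile_size + 1)}

    def opponent_value(remaining):
        # Taking the last stone wins, so leaving 0 stones is ideal:
        # the opponent's winning chance is 0.
        return 0.0 if remaining == 0 else value[remaining]

    for _ in range(episodes):
        n, player, visited = pile_size, 0, []
        while n > 0:
            visited.append((n, player))
            moves = [m for m in (1, 2) if m <= n]
            if random.random() < eps:
                move = random.choice(moves)          # explore
            else:
                # Greedy: leave the opponent in the worst position.
                move = min(moves, key=lambda m: opponent_value(n - m))
            n -= move
            player ^= 1
        winner = player ^ 1  # whoever just moved took the last stone
        # Nudge every visited state toward the observed outcome.
        for stones, p in visited:
            result = 1.0 if p == winner else 0.0
            value[stones] += alpha * (result - value[stones])
    return value

values = train()
```

With these settings the learned table rates positions 3 and 6 as bad for the player to move and positions 1 and 4 as good, which matches the known theory of the game — a faint, tabletop echo of what self-play reinforcement learning does at scale.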

The dream of augmentation

This morning’s Observer column:

Engelbart was a visionary who believed that the most effective way to solve problems was to augment human abilities and develop ways of building collective intelligence. Computers, in his view, were “power steering for the mind” – tools for augmenting human capabilities – and this idea of augmentation has been the backbone of the optimistic narrative of the tech industry ever since.

The dream has become a bit tarnished in the last few years, as we’ve learned how data vampires use the technology to exploit us at the same time as they provide free tools for our supposed “augmentation”…

Read on

Conspiracy theories, the Internet and democracy

My OpEd piece from yesterday’s Observer:

Conspiracy theories have generally had a bad press. They conjure up images of eccentrics in tinfoil hats who believe that aliens have landed and the government is hushing up the news. And maybe it’s statistically true that most conspiracy theories belong on the harmless fringe of the credibility spectrum.

On the other hand, the historical record contains some conspiracy theories that have had profound effects. Take the “stab in the back” myth, widely believed in Germany after 1918, which held that the German army did not lose the First World War on the battlefield but was betrayed by civilians on the home front. When the Nazis came to power in 1933 the theory was incorporated in their revisionist narrative of the 1920s: the Weimar Republic was the creation of the “November criminals” who stabbed the nation in the back to seize power while betraying it. So a conspiracy theory became the inspiration for the political changes that led to a second global conflict.

More recent examples relate to the alleged dangers of the MMR jab and other vaccinations and the various conspiracy theories fuelling denial of climate change.

For the last five years, my academic colleagues – historian Richard Evans and politics professor David Runciman – and I have been leading a team of researchers studying the history, nature and significance of conspiracy theories with a particular emphasis on their implications for democracy…

Read on

We already know what it’s like to live under Artificial Intelligences

This morning’s Observer column:

In 1965, the mathematician I J “Jack” Good, one of Alan Turing’s code-breaking colleagues during the second world war, started to think about the implications of what he called an “ultra-intelligent” machine – ie “a machine that can surpass all the intellectual activities of any man, however clever”. If we were able to create such a machine, he mused, it would be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control”.

Note the proviso. Good’s speculation has lingered long in our collective subconscious, occasionally giving rise to outbreaks of fevered speculation. These generally focus on two questions. How long will it take us to create superintelligent machines? And what will it be like for humans to live with – or under – such machines? Will they rapidly conclude that people are a waste of space? Does the superintelligent machine pose an existential risk for humanity?

The answer to the first question can be summarised as “longer than you think”. And as for the second question, well, nobody really knows. How could they? Surely we’d need to build the machines first and then we’d find out. Actually, that’s not quite right. It just so happens that history has provided us with some useful insights into what it’s like to live with – and under – superintelligent machines.

They’re called corporations, and they’ve been around for a very long time – since about 1600, in fact…

Read on

Are humans smarter than frogs?

This morning’s Observer column:

And then the penny dropped (I am slow on the uptake). I realised that what I had been doing was adding to a dataset for training the machine-learning software that guides self-driving cars – probably those designed and operated by Waymo, the autonomous vehicle project owned by Alphabet Inc (which also happens to own Google). So, to gain access to an automated service that will benefit financially from my input, I first have to do some unpaid labour to help improve the performance of Waymo’s vehicles (which, incidentally, will be publicly available for hire in Phoenix, Arizona, by the end of this year).

Neat, eh? But note also the delicious additional irony that the Captcha is described as an “automated Turing test”. The Turing test was conceived, you may recall, as a way of enabling humans to determine whether a machine could respond in such a way that one couldn’t tell whether it was a human or a robot. So we have wandered into a topsy-turvy world in which machines make us jump through hoops to prove that we are humans!

The strangest aspect of this epochal shift is how under-discussed it has been…

Read on

The perniciousness of online EULAs

(EULAs are the click-to-agree buttons that users of free services invariably accept.)

“In theory, contract law enables and ought to enable people, first, to exercise their will freely in pursuit of their own ends and, second, to relate to others freely in pursuit of cooperative ends. In practice, electronic contracting threatens autonomy and undermines the development of meaningful relationships built on trust. Optimised to minimise transaction costs, maximise efficiency, minimise deliberation, and engineer complacency, the electronic contracting architecture nudges people to click a button and behave like simple stimulus-response machines.”

Brett Frischmann, co-author of Re-engineering Humanity, in an interview with the Economist.

Our new bipolar world

This morning’s Observer column:

What the Chinese have discovered, in other words, is that digital technology – which we once naively believed would be a force for democratisation – is also a perfect tool for social control. It’s the operating system for networked authoritarianism. Last month, James O’Malley, a British journalist, was travelling on the Beijing-Shanghai bullet train when his reverie was interrupted by this announcement: “Dear passengers, people who travel without a ticket, or behave disorderly, or smoke in public areas, will be punished according to regulations and the behaviour will be recorded in individual credit information system. To avoid a negative record of personal credit please follow the relevant regulations and help with the orders on the train and at the station.” Makes you nostalgic for those announcements about “arriving at King’s Cross, where this train terminates”, doesn’t it?

Read on