Common sense on AI

Interesting responses from Stuart Russell in a World Economic Forum interview:

Are robots taking over the world?

SR: There are three timescales and three versions of this question, and the answers are “Not if I can help it”, “Quite possibly, but hopefully in a good way” and “We would be crazy to be complacent on this issue”. In the near term, autonomous weapons in the hands of unpleasant humans are a real threat, the UN is working (slowly) towards a treaty banning them, and our council has been active in building support for a treaty within the profession and in the media. In the medium term, will robots take away all of our jobs? Some experts say yes, and economists recommend more unemployment insurance as the solution. Better ideas wanted!

But the real world-changing questions are further off, when, after several intrinsically unpredictable breakthroughs, we have human-level or superhuman AI. See, for example, Elon Musk’s comment that superintelligent AI poses the greatest existential threat to the survival of the human race. His point was that regulatory oversight at a national and international level is needed to develop the technology responsibly. In my view it’s too soon to start designing regulations – on equations?? – but not too soon to start solving the technical questions of how to maintain absolute control over increasingly intelligent machines.