John Thornhill had an interesting column in the Financial Times the other day (sadly, behind a paywall) about Moore’s Law and the struggles of the tech industry to overcome the physical barriers to its continuance.
This led me to brood on one of the under-discussed aspects of the Law, namely the way it has enabled the AI crowd to dodge really awkward questions for years. It works like this: If the standard-issue AI of a particular moment in time proves unable to perform a particular task or solve a particular problem, then the strategy is to say (confidently): “yes but Moore’s Law will eventually provide the computing power to crack it”.
And sometimes that’s true. The difficulty, though, is that it assumes all such problems are tractable, i.e. that they are, in the end, computable given enough processing power. But some tasks/problems are almost certainly not computable at all. And so there are times when our psychic addiction to Moore’s Law leads us to pursue avenues which are, ultimately, dead ends. But few people dare admit that, especially when hype-storms are blowing furiously.
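The point can be made quantitative even before we get to outright uncomputability. Here is a back-of-envelope sketch (the numbers are illustrative, not drawn from any real benchmark): for a task whose cost grows exponentially with problem size, each Moore’s-Law doubling of hardware buys only one extra unit of problem size — so twenty years of doublings barely move the needle.

```python
# Back-of-envelope sketch: if a task costs 2**n operations to solve
# at size n, then doubling the available compute (one Moore's Law
# cycle, roughly every two years) raises the largest solvable n by
# exactly one. The figures below are illustrative assumptions.

def max_solvable(compute_budget):
    """Largest n such that 2**n operations fit within the budget."""
    n = 0
    while 2 ** (n + 1) <= compute_budget:
        n += 1
    return n

# Suppose today's budget just suffices for n = 20, then let
# Moore's Law run: one doubling every two years.
budget = 2 ** 20
for years in (0, 10, 20):
    doublings = years // 2
    print(years, max_solvable(budget * 2 ** doublings))
# Twenty years of exponential hardware growth take us only
# from n = 20 to n = 30.
```

And that is the optimistic case, where the problem is at least computable in principle; for a genuinely uncomputable problem, no amount of doubling helps at all.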