Serial Killers: Moore’s Law and the parallelisation bubble

Cory Doctorow had a thoughtful reaction to Sunday’s Observer column, where I cited Nathan Myhrvold’s Four Laws of Software. “Reading it”, he writes,

made me realize that we were living through a parallel computation bubble. The period in which Moore’s Law had declined also overlapped with the period in which computing came to be dominated by a handful of applications that are famously parallel — applications that have seemed overhyped even by the standards of the tech industry: VR, cryptocurrency mining, and machine learning.

Now, all of these have other reasons to be frothy: machine learning is the ideal tool for empiricism-washing, through which unfair policies are presented as “evidence-based”; cryptocurrencies are just the thing if you’re a grifty oligarch looking to launder your money; and VR is a new frontier for the moribund, hyper-concentrated entertainment industry to conquer.

“Parallelizable problems become hammers in search of nails,” Cory continued in an email:

“If your problem can be decomposed into steps that can be computed independent of one another, we’ve got JUST the thing for you — so, please, tell me about all the problems you have that fit the bill?”

This is arguably part of why we’re living through a cryptocurrency and ML bubble: even though these aren’t solving our most pressing problems, they are solving our most TRACTABLE ones. We’re looking for our keys under the readily computable lamppost, IOW.
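To make Cory's distinction concrete, here is a minimal sketch (mine, not his; the function names are invented for illustration) contrasting an "embarrassingly parallel" workload, of the kind cryptocurrency mining is built on, with an inherently serial one in which each step consumes the previous step's output:

```python
# Illustrative sketch, not from the original post: the gap between a
# workload that decomposes into independent steps and one that doesn't.
import hashlib
from multiprocessing import Pool


def hash_once(data: bytes) -> str:
    """One independent unit of work; no step depends on any other."""
    return hashlib.sha256(data).hexdigest()


def parallel_workload(inputs: list[bytes]) -> list[str]:
    # Every input can be hashed on a separate core, so adding cores
    # makes this roughly proportionally faster.
    with Pool() as pool:
        return pool.map(hash_once, inputs)


def serial_workload(seed: bytes, steps: int) -> str:
    # Each iteration needs the previous iteration's output, so the
    # chain cannot be split across cores: more cores buy you nothing.
    digest = seed
    for _ in range(steps):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()


if __name__ == "__main__":
    print(parallel_workload([b"vr", b"crypto", b"ml"]))
    print(serial_workload(b"seed", 1000))
```

Throwing more cores at `parallel_workload` speeds it up almost linearly; throwing more cores at `serial_workload` does nothing. That asymmetry is exactly the hammer in search of nails.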

Which leads Cory (@doctorow) to this “half-formed thought”: the bubbles in VR, machine learning and cryptocurrency are partly explained by the declining returns to Moore’s Law, which make parallelizable problems cheaper and easier to solve than serial ones.
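A standard way to formalise that intuition (my gloss, not Cory’s) is Amdahl’s law: if a fraction $p$ of a program’s work can be parallelised, the best speedup available from $N$ cores is

$$S(N) = \frac{1}{(1 - p) + \frac{p}{N}},$$

which is capped at $1/(1-p)$ no matter how many cores you add. So once single-core speeds stall, workloads with $p$ close to 1, such as mining, rendering and neural-network training, keep getting cheaper with every extra core, while anything with a meaningful serial fraction plateaus.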

He’s also left wondering what the counterfactual would have looked like: a computing industry that had found a way of extending Moore’s Law indefinitely.