The repeal of Moore’s Law?

Bob Cringely has an excellent and informative piece about our future in a parallel universe. Well, a parallelized universe, anyway.

In [2002], at Intel's developer conference, chief technology officer Pat Gelsinger said, "We're on track, by 2010, for 30-gigahertz devices, 10 nanometers or less, delivering a tera-instruction of performance." That's one trillion computer instructions per second.

But Gelsinger was wrong. Intel and its competitors are still making processors that top out at less than four gigahertz, and something around five gigahertz has come to be seen, at least for now, as the maximum feasible speed for silicon technology.

It's not as if Moore's Law–the idea that the number of transistors on a chip doubles every two years–has been repealed. Rather, unexpected problems with heat generation and power consumption have put a practical limit on processors' clock speeds, or the rate at which they can execute instructions. New technologies, such as spintronics (which uses the spin direction of a single electron to encode data) and quantum (or tunneling) transistors, may ultimately allow computers to run many times faster than they do now, while using much less power. But those technologies are at least a decade away from reaching the market, and they would require the replacement of semiconductor manufacturing lines that have cost many tens of billions of dollars to build.

So in order to make the most of the technologies at hand, chip makers are taking a different approach. The additional transistors predicted by Moore's Law are being used not to make individual processors run faster but to increase the number of processors inside a chip. Chips with two processors–or "cores"–are now the desktop standard, and four-core chips are increasingly common. In the long term, Intel envisions hundreds of cores per device.
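To get a feel for what that shift looks like from the software side, here is a minimal sketch (my own illustration in Python, not anything from Cringely's piece) that asks the operating system how many logical processors it sees and then runs an ordinary sequential loop, which can only ever keep one of them busy.

```python
# Minimal illustration: a multicore chip exposes several logical
# processors, but ordinary sequential code uses only one of them.
import os
import time

def busy_work(iterations):
    # A purely CPU-bound loop; nothing here can run on a second core.
    total = 0
    for i in range(iterations):
        total += i * i
    return total

if __name__ == "__main__":
    print("logical processors visible to this program:", os.cpu_count())

    start = time.perf_counter()
    busy_work(5_000_000)
    elapsed = time.perf_counter() - start
    # However many cores the chip has, this takes the same time:
    # the extra cores sit idle unless the program is written to use them.
    print(f"sequential loop took {elapsed:.2f} s on exactly one core")
```

The cores are there; the question is whether the program does anything with them.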

But here's the thing: while the hardware problem of overheating chips lends itself nicely to the hardware solution of multicore computing, that solution gives rise in turn to a tricky software problem. How do you program for multiple processors?

Answer: with great difficulty. The big limitation, in other words, may not come from physics but from our inability to write software that can make use of parallel architectures.
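To make the difficulty concrete, here is a small sketch (again my illustration, not from the article) of the classic trap in shared-memory parallel code: several threads updating one counter. Each increment in the unlocked version is a read, an add, and a write, and another thread can slip in between them, so updates may be silently lost; whether that happens on any given run depends on the runtime and on timing, which is precisely what makes such bugs hard to find. The locked version is correct, but it forces the threads to take turns, giving back much of the parallelism you hoped to gain.

```python
# Sketch of why parallel software is hard: shared state needs coordination.
import threading

counter = 0
lock = threading.Lock()

def add_unsafely(n):
    # Read-modify-write with no coordination: increments from other
    # threads can be overwritten and lost.
    global counter
    for _ in range(n):
        counter += 1

def add_safely(n):
    # Correct, but the lock serializes the updates, so this part of the
    # work gains nothing from extra cores.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, threads=4, n=100_000):
    global counter
    counter = 0
    pool = [threading.Thread(target=worker, args=(n,)) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    return counter

if __name__ == "__main__":
    print("expected :", 4 * 100_000)
    print("unlocked :", run(add_unsafely))  # may fall short: lost updates
    print("locked   :", run(add_safely))    # always 400000, but serialized
```

And that is the easy part. The harder part is deciding which pieces of a program can run in parallel at all; whatever fraction stays sequential caps the speedup no matter how many cores the chip provides.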