From the NYT Bits Blog.
The coming sensor innovations, said Bernard Meyerson, an I.B.M. scientist and vice president of innovation, are vital ingredients in what is called cognitive computing. The idea is that in the future computers will be increasingly able to sense, adapt and learn, in their way.
That vision, of course, has been around for a long time — a pursuit of artificial intelligence researchers for decades. But there seem to be two reasons that cognitive computing is something I.B.M., and others, are taking seriously these days. The first is that the vision is becoming increasingly possible to achieve, though formidable obstacles remain. I wrote an article in the Science section last year on I.B.M.’s cognitive computing project.
The other reason is a looming necessity. When I asked Dr. Meyerson why the five-year prediction exercise was a worthwhile use of researchers’ time, he replied that it helped focus thinking. Actually, his initial reply was a techie epigram. “In a nutshell,” he said, “seven nanometers.”
Dr. Meyerson, who has a Ph.D. in solid-state physics, was talking about the physical limit on how narrow semiconductor circuits can be made, the point beyond which they can't be shrunk any further. (The width of a human hair is roughly 80,000 nanometers.) Today, the most advanced chips have circuits 22 nanometers wide. Next comes 14 nanometers, then 10 and then 7, Dr. Meyerson said.
“We have three more cycles, and then the biggest knobs for improving performance in silicon are gone,” he said. “You have to change the architecture, use a different approach.”
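To make the arithmetic behind those cycles concrete, here is a back-of-the-envelope sketch (my own, beyond the node sizes Dr. Meyerson names): under the standard rule of thumb, each shrink cuts circuit width by roughly 0.7, which roughly doubles how many transistors fit in the same area.

```python
# Back-of-the-envelope look at the node progression Dr. Meyerson describes.
# Assumption (mine): transistor density scales with the inverse square of
# circuit width, the standard rule of thumb behind Moore's-law shrinks.

HAIR_WIDTH_NM = 80_000      # rough width of a human hair, per the article
NODES_NM = [22, 14, 10, 7]  # today's node, then the "three more cycles"

for prev, node in zip(NODES_NM, NODES_NM[1:]):
    shrink = node / prev               # linear shrink per cycle, ~0.7x
    density_gain = (prev / node) ** 2  # area scaling gives the density gain
    print(f"{prev} nm -> {node} nm: {shrink:.2f}x linear shrink, "
          f"~{density_gain:.1f}x density; a hair spans "
          f"~{HAIR_WIDTH_NM // node:,} circuit widths at {node} nm")
```

Three rough doublings amount to about a tenfold density gain in all; after that, as he says, the knob is gone.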
“With a cognitive computer, you train it rather than program it,” Dr. Meyerson said.
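What does training rather than programming mean in practice? Here is a deliberately tiny toy of my own, not I.B.M.'s system: a programmed machine has its rule written in by hand, while a trained one infers the rule from labeled examples.

```python
# Toy contrast between programming a rule and training one (my sketch,
# not I.B.M.'s system). Task: decide whether a temperature reading is "hot".

# Programmed: a human writes the rule directly into the code.
def programmed_is_hot(temp_c):
    return temp_c > 30.0  # threshold chosen by the programmer

# Trained: the machine infers the rule from labeled examples.
def trained_threshold(examples):
    """Learn a cutoff as the midpoint between the warmest example labeled
    'not hot' and the coolest example labeled 'hot'."""
    hot = [t for t, is_hot in examples if is_hot]
    cool = [t for t, is_hot in examples if not is_hot]
    return (max(cool) + min(hot)) / 2

data = [(10.0, False), (22.0, False), (28.0, False),
        (33.0, True), (38.0, True), (41.0, True)]

cutoff = trained_threshold(data)  # 30.5 here: learned, never hand-coded
print(programmed_is_hot(35.0))    # True, by the hand-written rule
print(35.0 > cutoff)              # True, by the learned rule
```

The learned cutoff was never typed in by anyone. Scale that idea up by many orders of magnitude and you get the kind of system Dr. Meyerson is describing.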
Hmmm…