Despite the hype, AI is stuck

Interesting essay by Gary Marcus. I particularly like this bit:

Although the field of A.I. is exploding with microdiscoveries, progress toward the robustness and flexibility of human cognition remains elusive. Not long ago, for example, while sitting with me in a cafe, my 3-year-old daughter spontaneously realized that she could climb out of her chair in a new way: backward, by sliding through the gap between the back and the seat of the chair. My daughter had never seen anyone else disembark in quite this way; she invented it on her own — and without the benefit of trial and error, or the need for terabytes of labeled data.

Presumably, my daughter relied on an implicit theory of how her body moves, along with an implicit theory of physics — how one complex object travels through the aperture of another. I challenge any robot to do the same. A.I. systems tend to be passive vessels, dredging through data in search of statistical correlations; humans are active engines for discovering how things work.

Marcus thinks that a new paradigm is needed for AI, one that places “top-down” knowledge (cognitive models of the world and how it works) and “bottom-up” knowledge (the kind of raw information we get directly from our senses) on an equal footing. “Deep learning”, he writes,

“is very good at bottom-up knowledge, like discerning which patterns of pixels correspond to golden retrievers as opposed to Labradors. But it is no use when it comes to top-down knowledge. If my daughter sees her reflection in a bowl of water, she knows the image is illusory; she knows she is not actually in the bowl. To a deep-learning system, though, there is no difference between the reflection and the real thing, because the system lacks a theory of the world and how it works. Integrating that sort of knowledge of the world may be the next great hurdle in A.I., a prerequisite to grander projects like using A.I. to advance medicine and scientific understanding.”
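To make the “bottom-up” point concrete, here is a minimal sketch of the kind of classifier Marcus is describing, assuming PyTorch and torchvision’s pretrained ResNet-18; the two image file names are hypothetical. The network maps pixel statistics to a label, and nothing in it distinguishes a dog from a picture, or a reflection, of one:

```python
# A "bottom-up" classifier in Marcus's sense: it maps pixel statistics
# to a label, with no model of the world behind the pixels.
# Assumptions: torchvision >= 0.13; "dog.jpg" and "reflection_of_dog.jpg"
# are hypothetical file names used purely for illustration.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

def top_label(path: str) -> str:
    """Return the model's most probable ImageNet class for an image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)
    return weights.meta["categories"][int(probs.argmax())]

# A photo of a dog and a photo of its reflection will typically get the
# same label ("golden retriever", say): the network has no concept that
# one is an object and the other merely an image of one.
print(top_label("dog.jpg"))
print(top_label("reflection_of_dog.jpg"))
```

That is all such a system does: correlate patterns of pixels with labels. The “top-down” knowledge Marcus wants, that a reflection is illusory, lives nowhere in those weights.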

Yep: ‘superintelligence’ is farther away than we think.