Are we really “evolution’s biggest mistake”?

To Corpus Christi for a CSaP lecture by Jaan Tallinn, Chief Engineer of Skype. Since he's the Estonian programmer behind Kazaa (formerly the scourge of the music industry) and was subsequently a lead architect of Skype, I expect him to be talking about VoIP or some such geeky topic. He's a big name in these circles and he plays to a packed house.

But it turns out that he doesn’t want to talk about geeky stuff and instead launches into a fascinating but wayward excursion into Kurzweil territory. He gets there via an unusual route, though: by arguing that, essentially, the human brain was evolution’s biggest mistake, because it has enabled us to divert the natural course of things with our infernal ingenuity — with potentially disastrous consequences. This is routine stuff for some audiences — for example those who share James Lovelock’s views about global warming. But it’s not CO2 emissions that bother Tallinn: it’s the ‘singularity’ that also obsesses Kurzweil. In other words, he extrapolates the increasing ‘intelligence’ and processing power of computers to the point where we will have created artificial intelligences that are smarter than us and which will have no further use for humans, save perhaps as pets. At which point I hear echoes of Bill Joy’s famous essay, “Why the Future Doesn’t Need Us”, and begin to wonder if this software wizard hasn’t, well, ventured into philosophical territory without even a rudimentary map.

But Tallinn is an entertaining speaker (and the only presenter I’ve ever seen who can actually use Prezi to good effect) so most of us temporarily suspend disbelief and stay connected. His central idea is of an “intelligence stairway” — a series of steps starting with self-replication leading to evolution leading to humans leading to tech progress leading to “artificial general intelligence” (AGI) and thence to an “intelligence explosion” which leads to the Kurzweil Singularity. Tallinn thinks (via reasoning that I can’t follow) that what follows next is “environmental catastrophe”. Is this because machines will be unconcerned about global warming, because they are capable of surviving it whereas organic life is not? Who knows? (See the footnote below.)

The audience is intrigued but unconvinced. One attendee is sceptical almost to the point of derision: he doesn’t buy Tallinn’s account of computational progress (which lays great stress on computers’ ability to play world-class chess), and he thinks that Tallinn’s citation of Apple’s Siri as an illustration of how far computers have come in understanding people is wildly overblown. Another sceptic (I think an economist) takes the line that it’s difficult to see computers ever being able to understand context, and so the only precaution we need to take in AI research is to make sure that they never do!

I am likewise entertained but unconvinced. But I am struck by one thought, namely that there are areas of scientific research where we do worry about a ‘stairway’ of the kind sketched by Tallinn: biotechnology and genetic engineering in particular, and perhaps nanotechnology too. Maybe we should give some thought to what the 300 or so researchers working on AGI are actually up to? And is the reason we don’t take the threat of AGI seriously the fact that, deep down, we simply can’t conceive of machines that are smarter than us? We have no problem envisaging scenarios in which, say, nanotechnology or genetically-modified organisms might run out of control and give rise to horrible unintended consequences. But computing machines…???

But it was an entertaining and thought-provoking lecture. On my way out through the throng of Cambridge academics and geeks engaging in the social activity quaintly known as “networking” I am suddenly struck by a vague memory from my past. I too once gave a lecture to a packed house. The audience appeared to love it and applauded loudly at the end. As I was leaving the theatre I noticed that one of my academic colleagues had been lounging at the back. “Very good lecture”, he said. “Just the right number of half-truths”.


My colleague Anil Madhavapeddy was also there and writes:

“He [Tallinn] is falling into his own trap: any sufficiently advanced AI would maintain itself until it can find a more algorithmically efficient source of resources than earth (i.e., gas giants! space!) and would not work on human timescales (what’s the rush?).

On the other hand, one can very easily imagine a computer virus such as a Stuxnet II wiping out life on earth by causing some cyberphysical system to go ballistic and trigger something off by mistake. Not advanced AI, just plain old insecure computer systems — and this does need fixing urgently; the AGI topic is an unfortunate distraction.”