Cryptonomiconomics

My friend Sean French has just finished Neal Stephenson’s Cryptonomicon and he’s (rightly) mightily impressed.

I’ve just finished reading Neal Stephenson’s extraordinary novel, Cryptonomicon, and it’s done my head in. For a start, it’s about half as long again as David Copperfield. I’ve written novels in a shorter time than it took me to read Stephenson’s book. And then, as I read it, I kept asking myself: how does he know all this? It’s obvious that he’s a serious expert on computer technology and the science and history of codes and code-breaking, especially in the second world war. But he knows everything else as well, about wartime Britain, about the wartime Philippines, about submarines, about the technology of tunnelling, about just lots and lots of things.

More important, he deploys all this knowledge in a multi-stranded, multi-charactered, pan-global story of the kind that hasn’t been done much since the Victorians, and the different narratives and characters converge with the most amazing virtuosity. It’s got the wild imagination of Gravity’s Rainbow, with the added attraction – for me, at least – that I almost always understood what was going on.

Lovely post. Wonder if I should point Sean at “In the Beginning Was the Command Line”? Or is that really just for geeks?

We love your work… now show us your workings

This morning’s Observer column.

The growth in computing power, networking and sensor technology now means that even routine scientific research requires practitioners to make sense of a torrent of data. Take, for example, what goes on in particle physics. Experiments in Cern’s Large Hadron Collider produce around 23 petabytes of data a year. Just to get that in context, a petabyte is a million gigabytes, which is the equivalent of 13.3 years of HDTV content. In molecular biology, a single DNA-sequencing machine can spew out 9,000 gigabytes of data annually, which a librarian friend of mine equates to 20 Libraries of Congress in a year.
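Those conversions are easy to sanity-check. Here is a back-of-the-envelope sketch in Python; the HDTV bitrate of roughly 19 Mbit/s is my assumption (typical of MPEG-2 HD broadcasts), not a figure from the column.

    # Back-of-the-envelope check of the column's unit conversions.
    # Assumption (not from the column): an HDTV stream at ~19 Mbit/s.
    PETABYTE_GB = 1_000_000               # a petabyte is a million gigabytes
    HDTV_MBIT_PER_S = 19                  # assumed HDTV bitrate
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    petabyte_bits = PETABYTE_GB * 1e9 * 8                      # bits in one petabyte
    viewing_seconds = petabyte_bits / (HDTV_MBIT_PER_S * 1e6)  # seconds of HDTV per petabyte
    print(f"1 PB of HDTV ≈ {viewing_seconds / SECONDS_PER_YEAR:.1f} years of viewing")
    # prints roughly 13.3 years, matching the figure quoted above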

In an increasing number of fields, research involves analysing these torrents of data, looking for patterns or unique events that may be significant. This kind of analysis lies way beyond the capacity of humans, so it has to be done by software, much of which has to be written by the researchers themselves. But when scientists in these fields come to publish their results, both the data and the programs on which they are based are generally hidden from view, which means that a fundamental principle of scientific research – that findings should be independently replicable – is being breached. If you can’t access the data and check the analytical software for bugs, how can you be sure that a particular result is valid?
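By way of illustration only (nothing below comes from the column, and the file name and analysis are hypothetical placeholders), here is a minimal sketch in Python of what publishing one’s workings might look like: an analysis script that records a fingerprint of the data it consumed and the environment it ran in, so that anyone with the published data and code can re-run it and compare results.

    # Illustrative sketch only: one way a researcher might make an analysis
    # independently re-runnable. File name and analysis are hypothetical.
    import hashlib
    import json
    import platform
    import statistics
    import sys

    DATA_FILE = "observations.csv"   # hypothetical input data, published with the paper

    def sha256(path: str) -> str:
        """Fingerprint the input data so readers can verify they have the same file."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def analyse(path: str) -> dict:
        """A stand-in analysis: mean and standard deviation of one numeric column."""
        values = []
        with open(path) as f:
            next(f)                              # skip the header line
            for line in f:
                if line.strip():
                    values.append(float(line.split(",")[1]))
        return {"mean": statistics.mean(values), "stdev": statistics.stdev(values)}

    if __name__ == "__main__":
        record = {
            "data_sha256": sha256(DATA_FILE),    # which data produced this result
            "python": sys.version,               # which interpreter ran the code
            "platform": platform.platform(),     # on what system
            "findings": analyse(DATA_FILE),
        }
        # Publishing this record alongside the paper lets others replicate the run.
        print(json.dumps(record, indent=2))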

Flame, Stuxnet and cyberwar

From Good Morning Silicon Valley, citing the Washington Post.

There have been persistent whispers that the United States and Israel collaborated on the Stuxnet worm, which hit the computer systems of a nuclear enrichment plant in Iran a few years ago and was discovered in 2010. Earlier this month, spyware dubbed Flame was found on computers in Iran and elsewhere in the Middle East. Security experts have said Stuxnet and Flame have the same creators. Now the Washington Post reports, citing anonymous “Western officials,” that the U.S. and Israel were those creators; that Flame was created first; and that Flame and Stuxnet are part of a broader cyber-sabotage campaign against Iran. That campaign started under President George W. Bush and is continuing under President Barack Obama, according to a New York Times report earlier this month. (See Burning questions about Flame and cyberwar.) The Washington Post report describes Flame as “among the most sophisticated and subversive pieces of malware to be exposed to date” – a fake Microsoft software update that allows a computer to be watched and controlled from afar.