The uses of error

When the King’s printer Robert Barker produced a new edition of the King James Bible in 1631, he overlooked three letters from the seventh commandment, producing the startling injunction: ‘Thou shalt commit adultery.’ Barker was fined £300, and spent the rest of his life in debtors’ prison, even while his name remained on imprints. ‘I knew the tyme when great care was had about printing,’ the Archbishop of Canterbury lamented, ‘but now the paper is nought, the composers boyes, and the correctors unlearned.’ Most copies of what became known as the Wicked, Adulterous or Sinners’ Bible were promptly burned, but a few survive as collectors’ items, their value raised immeasurably by Barker’s error: one featured in an exhibition at the Bodleian Library last year about the making of the King James Bible.

Adam Smyth, reviewing Anthony Grafton’s Panizzi Lectures on “The Culture of Correction in Renaissance Europe” in the current issue of The London Review of Books.

Cryptonomiconomics

My friend Sean French has just finished Neal Stephenson’s Cryptonomicon and he’s mightily (and rightly) impressed.

I’ve just finished reading Neal Stephenson’s extraordinary novel, Cryptonomicon, and it’s done my head in. For a start, it’s about half as long again as David Copperfield. I’ve written novels in a shorter time than it took me to read Stephenson’s book. And then, as I read it, I kept asking myself: how does he know all this? It’s obvious that he’s a serious expert on computer technology and the science and history of codes and code-breaking, especially in the second world war. But he knows everything else as well, about wartime Britain, about the wartime Philippines, about submarines, about the technology of tunnelling, about just lots and lots of things.

More important, he deploys all this knowledge in a multi-stranded, multi-charactered, pan-global story of the kind that hasn’t been done much since the Victorians, and the different narratives and characters converge with the most amazing virtuosity. It’s got the wild imagination of Gravity’s Rainbow, with the added attraction – for me, at least – that I almost always understood what was going on.

Lovely post. Wonder if I should point Sean at “In the Beginning Was the Command Line”? Or is that really just for geeks?

We love your work… now show us your workings

This morning’s Observer column:

The growth in computing power, networking and sensor technology now means that even routine scientific research requires practitioners to make sense of a torrent of data. Take, for example, what goes on in particle physics. Experiments at Cern’s Large Hadron Collider regularly produce 23 petabytes of data a year. To put that in context, a petabyte is a million gigabytes, the equivalent of about 13.3 years of HDTV content. In molecular biology, a single DNA-sequencing machine can spew out 9,000 gigabytes of data annually, which a librarian friend of mine equates to 20 Libraries of Congress in a year.
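As a back-of-envelope check on that HDTV comparison (not part of the column), a few lines of Python reproduce the figure; the only assumption of mine is a broadcast HD stream of roughly 19 Mbit/s:

```python
# Back-of-envelope check: how many years of HDTV fit in a petabyte?
# Assumption: HDTV broadcast stream at roughly 19.4 Mbit/s (ATSC-style rate).

PETABYTE_BYTES = 1_000_000 * 1e9          # a petabyte is a million gigabytes
HDTV_BITRATE_BPS = 19.4e6                 # assumed bitrate, bits per second

bytes_per_hour = HDTV_BITRATE_BPS / 8 * 3600
hours = PETABYTE_BYTES / bytes_per_hour
years = hours / (24 * 365.25)

print(f"{years:.1f} years of HDTV per petabyte")   # prints roughly 13 years
```

Running it gives a little over 13 years, which squares with the figure quoted above.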

In an increasing number of fields, research involves analysing these torrents of data, looking for patterns or unique events that may be significant. This kind of analysis lies way beyond the capacity of humans, so it has to be done by software, much of which has to be written by the researchers themselves. But when scientists in these fields come to publish their results, both the data and the programs on which they are based are generally hidden from view, which means that a fundamental principle of scientific research – that findings should be independently replicable – is being breached. If you can’t access the data and check the analytical software for bugs, how can you be sure that a particular result is valid?
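To make the replication point concrete, here is a tiny, entirely made-up illustration (nothing to do with Cern’s actual analysis pipelines): two superficially similar analyses of the same numbers reach opposite conclusions, depending on an undocumented data-cleaning step. The data and the threshold are invented for the sketch.

```python
# Illustrative sketch only: why readers need the analysis code, not just the result.
# Hypothetical measurements with one outlier, and an arbitrary "significance" threshold.

data = [2.1, 2.3, 1.9, 2.2, 2.4, 2.0, 5.8]
threshold = 2.5

def mean(xs):
    return sum(xs) / len(xs)

# Analysis A: use every measurement as recorded.
signal_a = mean(data)

# Analysis B: silently drop "anomalous" readings before averaging.
cleaned = [x for x in data if x < 5.0]
signal_b = mean(cleaned)

print(f"Analysis A: {signal_a:.2f} -> {'significant' if signal_a > threshold else 'not significant'}")
print(f"Analysis B: {signal_b:.2f} -> {'significant' if signal_b > threshold else 'not significant'}")
```

Without the code, a reader sees only one of those two headline results and has no way of knowing which choices produced it.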