Enlightenment, what enlightenment?

Sam Moyn is not impressed by Steven Pinker’s new book – Enlightenment Now:

In laying out his vision of betterment in Enlightenment Now, Pinker confronts alternative trends and looming threats for progress only in order to brush them off. He does not take seriously the risk of major catastrophes, such as the collapse of a recent era of peace or the outbreak of a global pandemic, which he believes is easy to magnify beyond reason. As for environmental degradation, humanity will surely find a way to counteract this in time. “As the world has gotten richer,” Pinker explains, “nature has begun to rebound”—as if the failure of a few prophecies of ecological disaster to come to pass on schedule means the planet is infinitely resilient. Once he gets around to acknowledging that climate change is an actual problem, Pinker spends much of his time attacking “climate justice warriors” for their anti-capitalist hysteria.

Lots more in that sceptical vein. Worth reading in full.

Stephen Hawking RIP

I didn’t know Stephen Hawking personally, though I often saw him around. In my early years in Cambridge (the late 1960s, long before he was famous) my lab was in the same complex of buildings in Mill Lane as the department where he worked. The buildings had no ramps for wheelchair access at that time, so sometimes I or my fellow students would help his wife to lift his (non-motorised) wheelchair up the steps into the Department of Applied Mathematics and Theoretical Physics (DAMTP).

At the time, of course, we had no idea of how important he was destined to be. The only clue came when one went into the DAMTP tea-room at 10.30am or 3.30pm (scientific departments have a tradition of gathering for morning coffee and afternoon tea): his wheelchair was always surrounded by a group of devoted graduate students, some of whom acted as his interpreters and wrote the equations on a blackboard when he was giving a lecture. It was clear even then that — at least in the rarefied world of cosmologists — he was already a real celebrity.

One of those students was Nathan Myhrvold, who went on to become the Chief Technology Officer of Microsoft and a close colleague of Bill Gates. My hunch is that Nathan was the link who persuaded Gates to endow the Gates Scholars scheme (Cambridge’s version of Oxford’s Rhodes Scholarships).

For me, the most striking moment in Hawking’s career was when he was elected to the Lucasian Professorship of Mathematics. This was the professorial chair that had once been occupied by Isaac Newton, and it seemed an appropriate recognition of the significance of Hawking’s work.

Indeed, watching Hawking in public and marvelling at his astonishing and (to me) inaccessible brilliance, I was always reminded of Newton, and of the statue of him in the chapel of Trinity College, of which Wordsworth wrote in The Prelude:

And from my pillow, looking forth by light
Of moon or favouring stars, I could behold
The Antechapel where the Statue stood
Of Newton with his prism and silent face,
The marble index of a Mind for ever
Voyaging through strange seas of Thought, alone.

Fixing the future?

My Observer review of Andrew Keen’s How to Fix the Future: Staying Human in the Digital Age:

Many years ago the cultural critic Neil Postman predicted that the future of humanity lay somewhere in the area between the dystopian nightmares of two English writers – George Orwell and Aldous Huxley. Orwell believed that we would be destroyed by the things we fear – surveillance and thought-control; Huxley thought that our undoing would be the things that delight us – that our rulers would twig that entertainment is more efficient than coercion as a means of social control.

Then we invented the internet, a technology that – it turned out – gave us both nightmares at once: comprehensive surveillance by states and corporations on the one hand; and, on the other, a strange kind of passive addiction to devices, apps and services which, like the drug soma in Huxley’s Brave New World, possess “all the advantages of Christianity and alcohol and none of their defects”.

The great irony, of course, is that not all of this was inevitable…

Read on

That ‘tulip-mania’ meme…

Historians are such spoilsports: they undermine stories that are too good to check. Consider this distressing piece by Anne Goldgar:

Tulip mania was irrational, the story goes. Tulip mania was a frenzy. Everyone in the Netherlands was involved, from chimney-sweeps to aristocrats. The same tulip bulb, or rather tulip future, was traded sometimes 10 times a day. No one wanted the bulbs, only the profits – it was a phenomenon of pure greed. Tulips were sold for crazy prices – the price of houses – and fortunes were won and lost. It was the foolishness of newcomers to the market that set off the crash in February 1637. Desperate bankrupts threw themselves in canals. The government finally stepped in and ceased the trade, but not before the economy of Holland was ruined.

Trouble is, the story is mostly bunkum. Detailed excavations in Dutch archives for her book — Tulipmania: Money, Honor, and Knowledge in the Dutch Golden Age — failed to find much evidence for the ‘mania’ beloved of us commentators.

Tulip mania wasn’t irrational. Tulips were a newish luxury product in a country rapidly expanding its wealth and trade networks. Many more people could afford luxuries – and tulips were seen as beautiful, exotic, and redolent of the good taste and learning displayed by well-educated members of the merchant class. Many of those who bought tulips also bought paintings or collected rarities like shells.

Prices rose, because tulips were hard to cultivate in a way that brought out the popular striped or speckled petals, and they were still rare. But it wasn’t irrational to pay a high price for something that was generally considered valuable, and for which the next person might pay even more.

And it wasn’t a ‘frenzy’ either.

Tulip mania wasn’t a frenzy, either. In fact, for much of the period trading was relatively calm, located in taverns and neighbourhoods rather than on the stock exchange. It also became increasingly organised, with companies set up in various towns to grow, buy, and sell, and committees of experts emerged to oversee the trade. Far from bulbs being traded hundreds of times, I never found a chain of buyers longer than five, and most were far shorter.

Oh – and she found no records of anyone throwing themselves into canals.

Sigh. The slaughter of a beautiful meme by ugly facts.

Lifetime achievement

My friend Quentin has — deservedly — been given a Lifetime Achievement Award (called a Lovie after Ada Lovelace) for inventing the webcam. Here’s the presentation speech by Sophie Wilson (who designed the instruction set for the ARM processor and so also helped to shape our networked world):

And here is Quentin’s acceptance speech. He must have been moved by the award, because he briefly blanks as he’s getting into his stride. Normally, he’s the most fluent speaker I know. But note his graceful and witty recovery, once he’s found his notes.

This is IMHO long-overdue recognition for a technology pioneer.

The education of Mark Zuckerberg

This morning’s Observer column:

One of my favourite books is The Education of Henry Adams (published in 1918). It’s an extended meditation, written in old age by a scion of one of Boston’s elite families, on how the world had changed in his lifetime, and how his formal education had not prepared him for the events through which he had lived. This education had been grounded in the classics, history and literature, and had rendered him incapable, he said, of dealing with the impact of science and technology.

Re-reading Adams recently left me with the thought that there is now an opening for a similar book, The Education of Mark Zuckerberg. It would have an analogous theme, namely how the hero’s education rendered him incapable of understanding the world into which he was born. For although he was supposed to be majoring in psychology at Harvard, the young Zuckerberg mostly took computer science classes until he started Facebook and dropped out. And it turns out that this half-baked education has left him bewildered and rudderless in a culturally complex and politically polarised world…

Read on

Sixty years on

Today is the 60th anniversary of the day the Soviet Union announced that it had launched a satellite — Sputnik — into earth orbit. The conventional historical narrative (as recounted, for example, in my book and in Katie Hafner and Matthew Lyon’s history) is that this event really alarmed the American public, not least because it suggested that the Soviet Union might be superior to the US in important fields like rocketry and ballistic missiles. The narrative goes on to recount that the shock resulted in a major shake-up in the US government, which — among other things — led to the setting up of ARPA — the Advanced Research Projects Agency — in the Pentagon. This was the organisation that funded the development of ARPANET, the packet-switched network that was the precursor of the Internet.

The narrative is accurate in that Sputnik clearly provided the impetus for a massive increase in US capability in science, aerospace technology and computing. But the declassification of a trove of hitherto-secret CIA documents (for example, this one) to mark the anniversary suggests that the CIA was pretty well informed about Soviet capabilities and intentions, and that the launch of a satellite was expected, though nobody could guess at the timing. So President Eisenhower and the US government were not as shocked as the public, and they clearly worked on the principle that one should never waste a good crisis.

Lessons of history?

This morning’s Observer column:

The abiding problem with writing about digital technology is how to avoid what the sociologist Michael Mann calls “the sociology of the last five minutes”. There’s something about the technology that reduces our collective attention span to that of newts. This is how we wind up obsessing over the next iPhone, the travails of Uber, Facebook being weaponised by Russia, Samsung’s new non-combustible smartphone and so on. It’s mostly a breathless search for what Michael Lewis once called “the new new thing”.

We have become mesmerised by digital technology and by the companies that control and exploit it. Accordingly, we find it genuinely difficult to judge whether a particular development is really something new and unprecedented or just a contemporary variant on something that is much older…

Read on

Citizen Jane

Last night we went to see Matt Tyrnauer’s documentary about the legendary battle between the journalist, author and activist Jane Jacobs and the famous (or infamous, depending on your point of view) New York City planner Robert Moses.

In the scale of his ambition to impose modernist order on the chaos of the city, Moses’s only historical peer is Baron Haussmann, the planner appointed by Napoléon III to transform Paris from the warren of narrow cobbled streets in which radicals could foment revolution, into the city of wide boulevards we know today. (Among the advantages of said boulevards was that they afforded military units a clear line of fire in the event of trouble.)

Behind the film lie two great books, only one of which is mentioned in the script. The first — and obvious one — is Jacobs’s The Death and Life of Great American Cities, a scarifying attack on the modernist ideology of Le Corbusier and his disciples and a brilliant defence of the organic complexity of urban living, with all its chaos, untidiness and diversity. The other book is Robert Caro’s magnificent biography of Moses — The Power Broker: Robert Moses and the Fall of New York. This is a warts-and-all portrayal of a bureaucratic monster, but it also explains how Moses, like many a monster before him, started out as an earnest, idealistic, principled reformer — very much a Progressive.

He was educated at Yale, Oxford (Wadham) and Columbia, where he did a PhD. At Oxford he acquired an admiration for the British civil service, with its political neutrality and meritocratic ethos — but also a rigid distinction between an ‘Administrative’ class (those engaged in policy-making) and the bureaucratic drones (whose role was merely to implement policy). When he returned to the US, Moses found that the political machines of American cities had zero interest in meritocracy, and eventually realised that his path to success lay in hitching his wagon to a machine politician — New York governor Al Smith, a brilliant political operator with an eighth-grade education. Moses started by building public parks, but quickly acquired such a wide range of powers and authority that he became, effectively, the ‘Master Builder’ who remodelled the city of New York.

Because Tyrnauer’s film focusses more on Jacobs, Moses’s Progressive origins are understandably ignored and what comes over is only his monstrous, bullying side — exemplified in his contemptuous aside that “Those who can, build; those who can’t, criticize”. In that sense, Moses is very like that other monster, Lyndon Johnson, to whose biography Caro has devoted most of his career.

My interest in Moses was first sparked by something written by a friend, Larry Lessig. In an essay that was a prelude to Code: And Other Laws of Cyberspace, his first major book, Lessig discussed the role of architecture in regulating behaviour and wrote:

“Robert Moses built highway bridges along the roads to the beaches in Long Island so that busses could not pass under the bridges, thereby assuring that only those with cars (mainly white people) would use certain public beaches, and that those without cars (largely African Americans) would be driven to use other beaches, so that social relations would be properly regulated.”

In later years, this assertion — about the effectiveness of the low underpasses as racial filters — has been challenged, but Lessig’s central proposition (that architecture constrains and determines behaviour) is amply demonstrated in the film. The squalid, slum-like conditions that Moses sought to demolish did indeed enable and determine behaviour: they fostered the rich, organic, chaotic, vibrant life that Jacobs observed and celebrated. And when those slums were replaced by the ‘projects’ beloved of Moses, Le Corbusier et al — high-rise apartment blocks set in rational configurations — they too determined behaviour, mostly by eliminating vibrancy and life and creating urban wastelands of crime, social exclusion and desperation.

A second reflection sparked by the film is its evocation of the way the automobile destroyed cities. Moses believed that what was good for General Motors was good for America, and he was determined to make New York automobile-friendly by building expressways that gouged their way through neighbourhoods and rendered them uninhabitable. His first defeat at Jacobs’s hands came when his plan to extend Fifth Avenue by running a road through Washington Square was overturned by inspired campaigning by women he derided as mere “housewives”. But in the end the defeat that really broke him was the failure of his plan to run an expressway through lower Manhattan.

A third reflection is, in a way, the inverse of that. The film provides a vivid illustration of the extent to which the automobile changed the shape not just of New York but of all American cities (and also positions in intercourse, as some wag once put it; American moralists in the 1920s used to fulminate that cars were “brothels on wheels”). But those vehicles were all driven by humans. If autonomous cars become the norm in urban areas, then the changes they will bring in due course could be equally revolutionary. After all, if mobility is what we really require, then car ownership will decline — as will demand for on- and off-street parking. Pavements (or sidewalks, as they call them in the US) can be made wider, enabling more of the street life that Jacobs so prized.

The final reflection on the film is gloomier. Towards the end, the camera began to pan, zoom and linger on the monstrous cities now being built in China, India and the rest of Asia — the areas of the world that are going to be more dominant in the coming centuries. And as one surveys the resulting forests of soulless high-rise towers, it seems as though the ghosts of Le Corbusier and Moses have come back to earth and are gleefully setting about creating the dysfunctional slums of the future. Which reminds me of a passage in a book I’m currently reading — Edward Luce’s The Retreat of Western Liberalism. “History does not end,” Luce writes. “It is a timeless repetition of human folly and correction.”

Amen to that.

Bob Taylor RIP

Bob Taylor, the man who funded the Arpanet (the military precursor of the Internet), has died at the age of 85. He also funded much of Doug Engelbart’s ‘augmentation’ research at SRI. After Arpanet was up and running, Bob left to found the Computer Science Lab at Xerox PARC. His ambition for CSL, he said, was to hire the 50 best computer scientists and engineers in the US and let them do their stuff. He didn’t get 50, but he did get some real stars — including Bob Metcalfe, Chuck Thacker, David Boggs, Butler Lampson, Alan Kay and Charles Simonyi — who, in three magical years, invented much of the technology we use today: bitmapped windowing interfaces, Ethernet, the laser printer, networked workstations and collaborative working, to name just a few. They were, in the words of one chronicler, “dealers of lightning”. Bob’s management style was inspired. His philosophy was to hire the best and give them their heads. His job, he told his geeks, was to protect them from The Management. And he was as good as his word.

Xerox, needless to say, fumbled the future the company could have owned. Steve Jobs saw what Bob’s team were doing and realised its significance. He went back to Apple and started the Macintosh project to bring it to the masses.

Bob and I had a friend in common — Roger Needham, the great computer scientist, who worked with Bob after Bob had left PARC to run the DEC Systems Research Center in California. When Roger was diagnosed with terminal cancer, his Cambridge colleagues organised a symposium and a festschrift in his honour. Bob and I co-wrote one of the essays in that collection. Its title — “Zen and the Art of Research Management” — captured both Bob’s and Roger’s management styles.

The NYT obit is properly respectful of Bob’s great contribution to our world. One of the comments below it comes from Alan Kay, who was one of the CSL stars. He writes:

Bob fully embraced the deeply romantic “ARPA Dream” of personal computing and pervasive networking. His true genius was in being able to “lead by getting others to lead and cooperate” via total commitment, enormous confidence in his highly selected researchers expressed in all directions, impish humor, and tenacious protection of the research. He was certainly the greatest “research manager” in his field, and because of this had the largest influence in a time of the greatest funding for computing research. It is impossible to overpraise his impact and to describe just how he used his considerable personality to catalyze actions.

The key idea was to have a great vision yet not try to drive it from the funders on down, but instead “fund people not projects” by getting the best scientists in the world to “find the problems to solve” that they thought would help realize the vision. An important part of how this funding was carried out was not just to find the best scientists, but to create them. Many of the most important researchers at Xerox PARC were young researchers in ARPA funded projects. Bob was one of the creators of this process and carried it out at ARPA, Xerox PARC, and DEC. He was one of those unique people who was a central factor in a deep revolution of ideas.

Yep: unique is the word. May he rest in peace.

