The education of Mark Zuckerberg

This morning’s Observer column:

One of my favourite books is The Education of Henry Adams (published in 1918). It’s an extended meditation, written in old age by a scion of one of Boston’s elite families, on how the world had changed in his lifetime, and how his formal education had not prepared him for the events through which he had lived. This education had been grounded in the classics, history and literature, and had rendered him incapable, he said, of dealing with the impact of science and technology.

Re-reading Adams recently left me with the thought that there is now an opening for a similar book, The Education of Mark Zuckerberg. It would have an analogous theme, namely how the hero’s education rendered him incapable of understanding the world into which he was born. For although he was supposed to be majoring in psychology at Harvard, the young Zuckerberg mostly took computer science classes until he started Facebook and dropped out. And it turns out that this half-baked education has left him bewildered and rudderless in a culturally complex and politically polarised world…

Read on

Sixty years on

Today is the 60th anniversary of the day that the Soviet Union announced that it had launched a satellite — Sputnik — into earth orbit. The conventional historical narrative (as recounted, for example, in my book and in Katie Hafner and Matthew Lyon’s history) is that this event really alarmed the American public, not least because it suggested that the Soviet Union might be superior to the US in important fields like rocketry and ballistic missiles. The narrative goes on to recount that the shock resulted in a major shake-up in the US government which — among other things — led to the setting up of ARPA — the Advanced Research Projects Agency — in the Pentagon. This was the organisation which funded the development of ARPANET, the packet-switched network that was the precursor of the Internet.

The narrative is accurate in that Sputnik clearly provided the impetus for a drive to produce a massive increase in US capability in science, aerospace technology and computing. But the declassification of a trove of hitherto-secret CIA documents (for example, this one) to mark the anniversary suggests that the CIA was pretty well-informed about Soviet capabilities and intentions and that the launch of a satellite was expected, though nobody could guess at the timing. So President Eisenhower and the US government were not as shocked as the public, and they clearly worked on the principle that one should never waste a good crisis.

Lessons of history?

This morning’s Observer column:

The abiding problem with writing about digital technology is how to avoid what the sociologist Michael Mann calls “the sociology of the last five minutes”. There’s something about the technology that reduces our collective attention span to that of newts. This is how we wind up obsessing over the next iPhone, the travails of Uber, Facebook being weaponised by Russia, Samsung’s new non-combustible smartphone and so on. It’s mostly a breathless search for what Michael Lewis once called “the new new thing”.

We have become mesmerised by digital technology and by the companies that control and exploit it. Accordingly, we find it genuinely difficult to judge whether a particular development is really something new and unprecedented or just a contemporary variant on something that is much older…

Read on

Citizen Jane

Last night we went to see Matt Tyrnauer’s documentary about the legendary battle between the journalist, author and activist Jane Jacobs and the famous (or infamous, depending on your point of view) New York city planner, Robert Moses.

In the scale of his ambition to impose modernist order on the chaos of the city, Moses’s only historical peer is Baron Haussmann, the planner appointed by Napoléon III to transform Paris from the warren of narrow cobbled streets in which radicals could foment revolution, into the city of wide boulevards we know today. (Among the advantages of said boulevards was that they afforded military units a clear line of fire in the event of trouble.)

Behind the film lie two great books, only one of which is mentioned in the script. The first — and obvious one — is Jacobs’s The Death and Life of Great American Cities, a scarifying attack on the modernist ideology of Le Corbusier and his disciples and a brilliant defence of the organic complexity of urban living, with all its chaos, untidiness and diversity. The other book is Robert Caro’s magnificent biography of Moses — The Power Broker: Robert Moses and the Fall of New York. This is a warts-and-all portrayal of a bureaucratic monster, but it also explains how Moses, like many a monster before him, started out as an earnest, idealistic, principled reformer — very much a Progressive.

He was educated at Yale, Oxford (Wadham) and Columbia, where he did a PhD. At Oxford he acquired an admiration for the British civil service, with its political neutrality and meritocratic ethos — but also a rigid distinction between an ‘Administrative’ class (those engaged in policy-making) and the bureaucratic drones (whose role was merely to implement policy). When he returned to the US, Moses found that the political machines of American cities had zero interest in meritocracy, and eventually realised that his path to success lay in hitching his wagon to a machine politician — New York governor Al Smith, a brilliant political operator with an eighth-grade education. Moses started by building public parks, but quickly acquired such a wide range of powers and authority that he became, effectively, the ‘Master Builder’ who remodelled the city of New York.

Because Tyrnauer’s film focusses more on Jacobs, Moses’s Progressive origins are understandably ignored, and what comes over is only his monstrous, bullying side — exemplified in his contemptuous aside that “Those who can, build; those who can’t, criticize”. In that sense, Moses is very like that other monster, Lyndon Johnson, to whose biography Caro has devoted most of his career.

My interest in Moses was first sparked by something written by a friend, Larry Lessig. In an essay that was a prelude to Code: And Other Laws of Cyberspace, his first major book, Lessig discussed the role of architecture in regulating behaviour and wrote:

“Robert Moses built highway bridges along the roads to the beaches in Long Island so that busses could not pass under the bridges, thereby assuring that only those with cars (mainly white people) would use certain public beaches, and that those without cars (largely African Americans) would be driven to use other beaches, so that social relations would be properly regulated.”

In later years, this assertion — about the effectiveness of the low underpasses as racial filters — has been challenged, but Lessig’s central proposition (that architecture constrains and determines behaviour) is amply demonstrated in the film. The squalid, slum-like conditions that Moses sought to demolish did indeed enable and determine behaviour: what they enabled was the rich, organic, chaotic, vibrant life that Jacobs observed and celebrated. And when those slums were replaced by the ‘projects’ beloved of Moses, Le Corbusier et al — high-rise apartment blocks set in rational configurations — they also determined behaviour, mostly by eliminating vibrancy and life and creating urban wastelands of crime, social exclusion and desperation.

A second reflection sparked by the film is its evocation of the way the automobile destroyed cities. Moses believed that what was good for General Motors was good for America, and he was determined to make New York automobile-friendly by building expressways that gouged their way through neighbourhoods and rendered them uninhabitable. His first defeat at Jacobs’s hands came when his plan to extend Fifth Avenue by running a road through Washington Square was overturned by the inspired campaigning of women he derided as mere “housewives”. But in the end the defeat that really broke him was the failure of his plan to run a motorway through lower Manhattan.

A third reflection is, in a way, the inverse of that. The film provides a vivid illustration of the extent to which the automobile changed the shape not just of New York but of all American cities (and also positions in intercourse, as some wag once put it; American moralists in the 1920s used to fulminate that cars were “brothels on wheels”). But those vehicles were all driven by humans. If autonomous cars become the norm in urban areas, then the changes they will bring in due course could be equally revolutionary. After all, if mobility is what we really require, then car ownership will decline — as will demand for on- and off-street parking. Pavements (or sidewalks, as they call them in the US) could be made wider, enabling more of the street life that Jacobs so prized.

The final reflection on the film is gloomier. Towards the end, the camera began to pan, zoom and linger on the monstrous cities that are now being built in China, India and elsewhere in Asia — the areas of the world that are going to be more dominant in coming centuries. And as one looks at the resulting forests of soulless high-rise towers, it seems as though the ghosts of Le Corbusier and Moses have come back to earth and are gleefully setting about creating the dysfunctional slums of the future. Which reminds me of a passage in a book I’m currently reading — Edward Luce’s The Retreat of Western Liberalism. “History does not end,” Luce writes. “It is a timeless repetition of human folly and correction.”

Amen to that.

Bob Taylor RIP

Bob Taylor, the man who funded the Arpanet (the military precursor of the Internet), has died at the age of 85. He also funded much of Doug Engelbart’s ‘augmentation’ research at SRI. After Arpanet was up and running, Bob left to found the Computer Science Lab at Xerox PARC. His ambition for CSL, he said, was to hire the 50 best computer scientists and engineers in the US and let them do their stuff. He didn’t get 50, but he did get some real stars — including Bob Metcalfe, Chuck Thacker, David Boggs, Butler Lampson, Alan Kay and Charles Simonyi, who — in three magical years — invented much of the technology we use today: bitmapped windowing interfaces, Ethernet and the laser printer, networked workstations, collaborative working, to name just a few. They were, in the words of one chronicler, “dealers of lightning”. Bob’s management style was inspired. His philosophy was to hire the best and give them their heads. His job, he told his geeks, was to protect them from The Management. And he was as good as his word.

Xerox, needless to say, fumbled the future the company could have owned. Steve Jobs saw what Bob’s team were doing and realised its significance. He went back to Apple and started the Macintosh project to bring it to the masses.

Bob and I had a friend in common — Roger Needham, the great computer scientist, who worked with Bob after he had left PARC to run the DEC Systems Research Center in California. When Roger was diagnosed with terminal cancer his Cambridge colleagues organised a symposium and a festschrift in his honour. Bob and I co-wrote one of the essays in that collection. Its title — “Zen and the Art of Research Management” — captured both Bob’s and Roger’s management style.

The NYT obit is properly respectful of Bob’s great contribution to our world. One of the comments below it comes from Alan Kay, who was one of the CSL stars. He writes:

Bob fully embraced the deeply romantic “ARPA Dream” of personal computing and pervasive networking. His true genius was in being able to “lead by getting others to lead and cooperate” via total commitment, enormous confidence in his highly selected researchers expressed in all directions, impish humor, and tenacious protection of the research. He was certainly the greatest “research manager” in his field, and because of this had the largest influence in a time of the greatest funding for computing research. It is impossible to overpraise his impact and to describe just how he used his considerable personality to catalyze actions.

The key idea was to have a great vision yet not try to drive it from the funders on down, but instead “fund people not projects” by getting the best scientists in the world to “find the problems to solve” that they thought would help realize the vision. An important part of how this funding was carried out was not just to find the best scientists, but to create them. Many of the most important researchers at Xerox PARC were young researchers in ARPA funded projects. Bob was one of the creators of this process and carried it out at ARPA, Xerox PARC, and DEC. He was one of those unique people who was a central factor in a deep revolution of ideas.

Yep: unique is the word. May he rest in peace.


Image courtesy of Palo Alto Research Center

On the sociology of the last five minutes…

… or what Adam Gopnik calls “presentism” in his review article on books by Pankaj Mishra, Joel Mokyr and Yuval Noah Harari.

Of all the prejudices of pundits, presentism is the strongest. It is the assumption that what is happening now is going to keep on happening, without anything happening to stop it. If the West has broken down the Berlin Wall and McDonald’s opens in St. Petersburg, then history is over and Thomas Friedman is content. If, by a margin so small that in a voice vote you would have no idea who won, Brexit happens; or if, by a trick of an antique electoral system designed to give country people more power than city people, a Donald Trump is elected, then pluralist constitutional democracy is finished. The liberal millennium was upon us as the year 2000 dawned; fifteen years later, the autocratic apocalypse is at hand. Thomas Friedman is concerned.

You would think that people who think for a living would pause and reflect that whatever is happening usually does stop happening, and something else happens in its place; a baby who is crying now will stop crying sooner or later. Exhaustion, or a change of mood, or a passing sound, or a bright light, something, always happens next. But for the parents the wait can feel the same as forever, and for many pundits, too, now is the only time worth knowing, for now is when the baby is crying and now is when they’re selling your books.

And so the death-of-liberalism tomes and eulogies are having their day, with the publishers who bet on apocalypse rubbing their hands with pleasure and the ones who gambled on more of the same weeping like, well, babies.

Characteristically good piece by one of the New Yorker’s best writers. He’s not overly impressed by Mishra or Harari, and prefers Mokyr’s less grandiose interest in people who make and do things as the real movers of history.

The real word-processor

[Image: McWilliams word processor]

Matthew Kirschenbaum has written a fascinating book — Track Changes: A Literary History of Word Processing — and in following a link to interviews with him I came upon this lovely image, which made me laugh out loud.

Also: word processor seemed such a strange term for a tool designed (presumably) to aid composition. I always thought of it alongside the food processor that became a staple in so many modern kitchens (though never in ours), the whole point of which was to reduce everything to an undifferentiated pulp. (Or so I thought, anyway, never having used one.)

Kirschenbaum clears up the mystery: it seems that the ‘processor’ term came from IBM, who were marketing an office document-processing system which envisaged a process taking the document from initial outline to finished printed version to filed-away copy.

Remember, remember the eleventh of September

[Image: Scripting.com, 11 September 2001]

For personal reasons I have vivid memories of 9/11, so today is always a sombre day in my calendar. But I was suddenly reminded this morning of how some of my Internet buddies rose magnificently to the challenge of the day. This is Dave Winer’s Scripting.com blog, for example. And here are Jeff Jarvis’s audio reports, as unforgettable now as they were then.

And then this memoir by the WSJ’s John Bussey.

The long history of ‘cyber’

My Observer piece on Thomas Rid’s alternative history of computing, Rise of the Machines: The Lost History of Cybernetics:

Where did the “cyber” in “cyberspace” come from? Most people, when asked, will probably credit William Gibson, who famously introduced the term in his celebrated 1984 novel, Neuromancer. It came to him while he was watching some kids play early video games. Searching for a name for the virtual space in which they seemed immersed, he wrote “cyberspace” in his notepad. “As I stared at it in red Sharpie on a yellow legal pad,” he later recalled, “my whole delight was that it meant absolutely nothing.”

How wrong can you be? Cyberspace turned out to be the space that somehow morphed into the networked world we now inhabit, and which might ultimately prove our undoing by making us totally dependent on a system that is both unfathomably complex and fundamentally insecure. But the cyber- prefix actually goes back a long way before Gibson – to the late 1940s and Norbert Wiener’s book, Cybernetics, Or Control and Communication in the Animal and the Machine, which was published in 1948.

Cybernetics was the term Wiener, an MIT mathematician and polymath, coined for the scientific study of feedback control and communication in animals and machines. As a “transdiscipline” that cuts across traditional fields such as physics, chemistry and biology, cybernetics had a brief and largely unsuccessful existence: few of the world’s universities now have departments of cybernetics. But as Thomas Rid’s absorbing new book, Rise of the Machines: The Lost History of Cybernetics, shows, it has had a long afterglow as a source of mythic inspiration that endures to the present day…

Read on

Why the arrival, not the journey, matters

I have an article on the evolution of the Internet in a new journal — the Journal of Cyber Policy. I was asked to give a talk at the launch last week in Chatham House, home of the Royal Institute of International Affairs in London. Here’s my text.


One of my favourite autobiographies is that of Leonard Woolf, the saintly husband of Virginia. It’s a multi-volume work, but my favourite is the one covering the years 1939-1969. It’s entitled The Journey, Not the Arrival, Matters, and it came to mind when I was pondering this talk because, in the case of the evolution of digital technology, I think it’s the other way round: the arrival, not the journey, matters. And I’d like to explain why.

In 1999, Andy Grove, then the Chief Executive of the chip manufacturer Intel, said something interesting. “In five years’ time”, he declared, “companies that aren’t Internet companies won’t be companies at all”. He was speaking at the peak of the first Internet boom, when irrational exuberance ruled the world, but even so, many people thought he was nuts. Was the CEO of Intel really saying that all companies needed to be selling information goods by 2004?

In fact, Grove was being characteristically perceptive. What he understood — way back in 1999 — was that the Internet was on its way to becoming a General Purpose Technology or GPT, like mains electricity, and that every organisation in the world would have to adapt to that reality. So on the big story, Andy was right; he was just a bit optimistic on the timing front.

My article in the first issue of the new journal is entitled “The evolution of the Internet”, but the real meat is in the subtitle: “From military experiment to General Purpose Technology”. I say that because as the network has been evolving we have focussed too much on one aspect of its development and impact — namely the production, consumption and exchange of information goods — and too little on the direction of travel, which — as my subtitle implies — is towards becoming a GPT.

Arthur C Clarke is famous for saying that any sufficiently advanced technology is indistinguishable from magic, and for most of its users the Internet already meets that requirement. As Eric Schmidt, Google’s Chairman, once observed, it is the first technology that humans have built that humans do not understand. But while a General Purpose Technology may or may not be incomprehensible to humans, it has impacts which are visible to everyone.

This is because GPTs have an impact on the world way beyond the domain in which they first appeared. They are technologies that can affect an entire economy and “have the potential to drastically alter societies through their impact on pre-existing economic and social structures”. Think steam engine, electricity, electronics, the automobile. GPTs have “the potential to reshape the economy and boost productivity across all sectors and industries, like electricity or the automobile”. And these transformations are about far more than simple technical innovation, because they often require the wholesale remaking of infrastructure, business models, and cultural norms. GPTs are the motive forces behind Joseph Schumpeter’s waves of ‘creative destruction’ and in that sense leave almost nothing untouched.

But if, as now seems obvious, the Internet is a GPT, then our societies are only at the beginning of a journey of adaptation, not the end. And this may surprise some people because the Internet is actually rather old technology. How you compute its age really depends on where you place its origins. But if you think — as I do — that it starts with Paul Baran’s concept of a packet-switched mesh in the early 1960s, then it’s now in its mid-fifties.

So you’d have thought that our society would have figured out the significance of the network by now. Sadly, not. And that’s not because we’re short of information and data about it. On the contrary, we are awash with the stuff. Our problem is that we don’t, as a culture, seem to understand it. We remain in that blissful state that Manuel Castells calls “informed bewilderment”. So a powerful force is loose in our societies and we don’t really understand it. Why is that?

One good reason is that digital technology is incomprehensible to ordinary human beings. In that sense, it’s very different from some GPTs of the past. You didn’t have to be a rocket scientist to understand steam power, for example. You might not know much about Boyle’s Law, but you could readily appreciate that steam could powerfully augment animal muscle power and dramatically speed up travel. But most people have very little idea of what digital technology can — and potentially could — do. And this is getting worse, not better, as encryption, machine-learning and other arcane technologies become commonplace.

Another reason for our bewilderment is that digital technology has some distinctive properties — the posh term for them is ‘affordances’ — that make it very different from the GPTs of the past. Among these affordances are:

  • Zero (or near-zero) marginal costs;
  • Very powerful network effects;
  • The dominance of Power Law statistical distributions (which tend towards winner-takes-all outcomes; a toy sketch after this list illustrates the tendency);
  • Technological lock-in (where a proprietary technology becomes the de-facto technical standard for an entire industry);
  • Intrinsic facilitation of exceedingly fine-grained surveillance;
  • Low entry thresholds (which facilitate what some scholars call “permissionless innovation”);
  • A development process characterised by ‘combinatorial innovation’ which can lead to sudden and unexpected new capabilities, and an acceleration in the pace of change and development;
  • And the fact that the ‘material’ that is processed by the technology is information — which is, among other things, the lifeblood of social and cultural life, not to mention of democracy itself.
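
To make the network-effect and power-law items above a little more concrete, here is a minimal, purely illustrative Python sketch (the function name and all the parameters are invented for the purpose; it is not a model of any real market). It shows how a ‘rich get richer’ rule, in which each new user joins a platform with probability proportional to its current size raised to some power, tends to hand almost all users to one platform once the returns to size are even slightly super-linear.

    import random

    def simulate_platform_shares(num_platforms=20, num_users=100_000,
                                 exponent=1.2, seed=42):
        """Toy 'rich get richer' model of network effects.

        Each new user joins a platform with probability proportional to
        (current user count) ** exponent. With exponent > 1 the returns to
        size are super-linear and one platform ends up with nearly all
        users: a caricature of a winner-takes-all outcome.
        """
        random.seed(seed)
        users = [1] * num_platforms  # seed each platform with one user
        for _ in range(num_users):
            # weight the draw by current platform size (the 'network effect')
            weights = [u ** exponent for u in users]
            chosen = random.choices(range(num_platforms), weights=weights, k=1)[0]
            users[chosen] += 1
        return sorted(users, reverse=True)

    if __name__ == "__main__":
        sizes = simulate_platform_shares()
        total = sum(sizes)
        print("Largest platform's share: {:.0%}".format(sizes[0] / total))
        print("Top five platforms by users:", sizes[:5])

With the exponent set to 1.0 the final spread is still uneven, though far less extreme; the point of the sketch is simply that small early advantages compound once the value of a service depends on how many people already use it.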

These affordances make digital technology very different from the GPTs of the past. They’re what led me once, when seeking a pithy summary of the Internet for a lay audience, to describe it as “a global machine for springing surprises”. Many of these surprises have been relatively pleasant — for example the World Wide Web; VoIP (internet telephony); powerful search engines; Wikipedia; social networking services; digital maps. Others have been controversial — for example the file-sharing technologies that overwhelmed the music industry; or the belated discovery (courtesy of Edward Snowden) of the pervasive surveillance enabled by the technology and exploited by governments and corporations. And some surprises — particularly the capabilities for cybercrime, espionage, IP and identity theft, malware, blackmail, harassment, and information warfare — have been worrying and, in some cases, terrifying.

But maybe another reason why we are taken aback by the rise of the Internet is that we have been so dazzled by the technology that we have been infected by the technological determinism that is the prevailing ideology in the reality distortion field known as Silicon Valley. The folks there really do believe that technology drives history, which is why their totemic figures like Marc Andreessen — the guy who co-authored Mosaic, the first proper web browser, and now heads a leading venture capital firm — can utter infantile mantras like “software is eating the world” and not get laughed off the stage.

But technology is only one of the forces that drives history because it doesn’t exist — or come into being — in a vacuum. It exists in a social, cultural, political, economic and ideological context, and it is the resultant of these multifarious forces that determines the direction of travel. So in trying to understand the evolution of the Internet, we need to take these other forces into account.

As far as the Internet is concerned, for example, the things to remember are that, first of all, it was a child of the Cold War; that in its early manifestations it was influenced by a social ethos which had distinct counter-cultural overtones; and that it was only relatively late in its development that it was taken over by the corporate interests and intelligence concerns which now dominate it.

Oh — and I almost forgot — there is that enormous elephant in the room, namely that it was almost entirely an American creation, which perhaps explains why all the world’s major Internet companies — outside of China — are US corporations and thus powerful projectors of American ‘soft power’, a fact which — coincidentally — might help to explain current European fears about these companies.

Just for the avoidance of doubt, though, this is not a rant about American dominance. My personal opinion is that US stewardship of the Internet was largely benign for much of the network’s early history. But such stewardship was only acceptable for as long as the Internet was essentially confined to Western industrialised nations. Once the network became truly global, US dominance was always likely to be challenged. And so it has proved.

Another problem with focussing on the evolution of the network only in terms of technology is that it leads, inevitably, to a Whig Interpretation of its history — that is to say, a record of inexorable progress. And yet anyone who has ever been involved in such things knows that it’s never like that.

With hindsight, for example, we see packet-switching — the fundamental technology of the network — as an obvious and necessary concept. But, as Janet Abbate has pointed out in her illuminating history, it wasn’t like that at all. In 1960 packet-switching was an experimental, even controversial, idea; it was very difficult to implement initially and some communications experts (mostly working for AT&T) argued that it would never work at all. With the 20/20 vision of hindsight, these sceptics look foolish. But that’s always the problem with hindsight. At the time, the scepticism of these engineers was so vehement that it led Paul Baran to withdraw his proposal to build an experimental prototype of a packet-switched network, thereby delaying the start of the project by the best part of a decade.

Focussing exclusively on the technology creates other blind spots too. For example, it renders us insensitive to the extent to which the Internet — like all major technologies — was socially constructed. This is how, for example, surveillance became “the business model of the Internet” — as the security expert Bruce Schneier once put it. In this case the root cause was the interaction between a key affordance of the technology — the power of network effects — and Internet users’ pathological reluctance to pay for online services. Since the way to succeed commercially was to “get big fast” and since the quickest way to do that was to offer ‘free’ services, the business model that emerged was one in which users’ personal data and their data-trails were harvested and auctioned to advertisers and ad-brokers.

Thus was born a completely new kind of industrial activity — dubbed “surveillance capitalism” by the Harvard scholar Shoshana Zuboff — in which extractive corporations like Google and Facebook mine user data which can then be ‘refined’ (i.e. analysed) and sold to others for targeted advertising and other purposes. Although this kind of spying is technologically easy to implement, it could not have become the basis of huge industrial empires without user consent, or without legal arrangements which discourage the assignment of ownership of distributed personal data.

One of the most noticeable things about our public discourse on the Internet is how ahistorical it is. This is partly a reflection of the way the tech media work — most journalists who cover the industry are perpetually engaged in “the sociology of the last five minutes,” chasing what Michael Lewis memorably described as The New New Thing. As a result, the underlying seismic shifts caused by the technology are largely unnoticed or misunderstood by the public. Yet when we look back at the story so far, we can spot significant discontinuities.

One such discontinuity was the appearance of Craigslist in 1996. It was a website providing free, localised classified advertising which started in San Francisco and gradually spread to cover cities in 70 countries. For a surprisingly long time, the newspaper industry remained blissfully unaware of its significance. But if journalists had understood their industry better, they would have seen the threat clearly.

For newspapers are value chains which link an expensive and loss-making activity called journalism with a profitable activity called classified advertising. But one of the affordances of the Internet is that it dissolves value chains, picking off the profitable bits that it can do better than conventional operations. And classified advertising turned out to be one of the things that the Internet could do very well: instead of having to wade through acres of small print looking for that used car of your dreams, you simply typed your requirements into a search engine and Bingo! — there were the results. The end result was that newspapers were left holding only the unprofitable, loss-making part of their value chains.

“The peace of God,” says the Bible, “passeth all understanding”. So too do the valuations of Internet companies. We saw that in the first Internet boom of 1995-2000 — that extraordinary outbreak of what the economist Robert Shiller dubbed “Irrational Exuberance”, later christened the “dot-com bubble”. What fuelled the mania was speculative interest in the stock-market valuations of the multitude of Web-based companies (‘dot-coms’) which materialised following Netscape’s IPO in 1995, interest amplified by the fantasies of fund managers, stock analysts, journalists and pundits. As one sceptical observer put it, what really happened is that “Wall Street moved West”.

The core business model of these fledgling companies was the idea of harnessing the network effects implicit in the rapid growth of consumer interest in the Internet to obtain a dominant market share in a range of sectors. At the height of the frenzy, dot-com companies with few customers, few (sometimes no) revenues and handfuls of employees briefly enjoyed stock-market valuations greater than those of huge companies like General Motors.

The boom followed the traditional pattern of speculative manias through the centuries, and eventually, in March 2000, it burst. In just over a month the total market capitalisation of companies on the NASDAQ exchange fell from $6.71 trillion to $5.78 trillion. In other words, nearly a trillion dollars in value had been obliterated. And less than half of the dot-coms founded in the boom survived the crash.

But here’s the strange thing: the bubble created much of the technological infrastructure necessary to hasten the maturing of the network. When the mania began, some canny observers quoted the old maxim of the Californian gold rush of the 1850s – that the people who made most money in California were not the miners and prospectors, but the merchants who sold them pickaxes and shovels. The modern embodiments of those merchants were the telecommunications companies which in the 1990s invested heavily in building large fibre-optic cable networks and server farms to service the ‘new’ economy that was apparently coming into being. When the bubble burst, these companies were left with apparently unwanted assets, and some went bankrupt. But the infrastructure that they had built remained, and turned out to be critical for enabling what came next.

The interesting thing is that — to those who know their economic history — this is an old story. Brad DeLong points out, for example, that the ‘railway mania’ of the 19th century lost investors a lot of money, but the extensiveness of the railway network that was the product of the frenzy enabled completely new industries to be built. It was the completion of the railway network, for example, that enabled the rise of the mail-order industry — which for two generations was a licence to print money in the United States.

Similarly with the Internet. While the bubble caused a financial crash, it also resulted in a massive expansion in the communications infrastructure needed to turn the network into a ubiquitous public utility — a General Purpose Technology — much as happened with railway networks in the late 19th century. So now the Internet is mature and extensive enough to serve as a foundation on which new kinds of innovation – much of it in areas apparently unrelated to information goods – can be built. In that context, it’s conceivable that enterprises like the cab-hailing application Uber or the room-hiring service Airbnb may turn out to be the contemporary equivalent of the mail-order services of the 19th century: unthinkable before the technology and unremarkable afterwards.

We’ve taken a long time to get here, but we’ve made it. Now all we have to do is figure out how to deal with it. Which is why I say that the arrival, not the journey, matters.

Thank you.