Wednesday 24 June, 2020

Facebook runs into a German wall

From the FT — probably paywalled.

Facebook suffered a setback in Germany on Tuesday after the country’s highest civil court ruled that it must comply with an order from the German antitrust watchdog and fundamentally change the way it handles users’ data.

The ruling by the federal court of justice in Karlsruhe takes aim at the way Facebook merges data from the group’s own services, such as WhatsApp and Instagram, with other data collected on third-party internet sites via its business tools.

In 2019, Germany’s cartel office blocked Facebook from pooling such data without user consent. Facebook later won a suspension of that decision from a court in Düsseldorf and wanted the pause to continue until a ruling on its appeal.

But on Tuesday the Karlsruhe court set aside the Düsseldorf ruling and backed the antitrust authorities, saying Facebook in future had to offer its users a choice when it collects and merges data from websites outside of its own ecosystem.

Interesting. Andreas Mundt, head of the German cartel office, is a determined and imaginative official. In a statement, he welcomed the decision. He said data was a decisive factor for economic power and for judging market power on the internet. “Today’s ruling gives us important clues as to how we should deal with the issues of data and competition,” he said, in comments quoted by DPA agency.

Progress, at last.

Mark Zuckerberg Believes Only in Mark Zuckerberg

Why is he abetting Trump while civil rights leaders and his own employees rebuke him? It’s about dominance.

At last, people are beginning to suss what it is about Zuckerberg that’s so weird. I’ve thought for years — on the basis of reading his public posts and watching his occasional (rare) public appearances — that he is fundamentally an autocratic sociopath. But because he’s so rich, the usual aphrodisiac effect of great wealth kicks in, and journalists (and others) who should know better succumb to the idea that if he is so rich then he must be so smart. Well, he is smart. But he ain’t interested in other people, or capable of empathising with them.

The autocratic bit is easy to document btw. You only have to look at the relevant paragraph in Facebook’s SEC filings.

It’s on page 25 of the filing.

Siva Vaidhyanathan has also been thinking about Zuckerberg for a long time and has now written an interesting essay on what he has finally concluded. He used to think of Zuckerberg, he says, as an idealist brought up in a bubble, and so was puzzled by some of the things he allowed to happen (because, remember, he has absolute power over that company of his). A key factor in Siva’s change of mind seems to have been Steven Levy’s book, Facebook: The Inside Story.

I expected that Zuckerberg was experiencing cognitive dissonance while watching his dear company be exploited to empower genocidal forces in Myanmar, religious terrorists in Sri Lanka, or vaccine deniers around the world.

I was wrong. I misjudged Zuckerberg. Another thing I learned from Levy’s book is that along with an idealistic and naive account of human communication, Zuckerberg seems to love power more than he loves money or the potential to do good in the world.

Having studied just enough Latin in prep school to get him in trouble, Zuckerberg was known to quote Cato, shouting “Carthago delenda est” (Carthage must be destroyed) when referring to Google. Emperor Augustus was a particular inspiration, Levy reports, and Zuckerberg named his child after Augustus, the adopted son of the tyrant Julius Caesar who ruled over the greatest and most peaceful span of the Roman Empire as its first emperor.

It was not Zuckerberg suffering from cognitive dissonance. I was. As I watched him coolly face questions from congressional representatives about the Cambridge Analytica debacle, he never seemed thoughtful, just disciplined.

That Facebook could serve people well—and it does—and that it could be abused to contribute to massive harm, pain, and death, didn’t seem to generate that one troublesome phenomenon that challenges the thoughtful: Contradiction.

Zuckerberg continued and continues to believe in the positive power of Facebook, but that’s because he believes in the raw power of Facebook. “Domination!” he used to yell at staff meetings, indicating that everything is a game. Games can be won. He must win. If a few million bones get broken along the way, his game plan would still serve the greatest good for the greatest number.

He believes in himself so completely, his vision of how the world works so completely, that he is immune to cognitive dissonance. He is immune to new evidence or argument. It turns out megalomaniacs don’t suffer from cognitive dissonance.

Like the notorious architect Philip Johnson, or Robert Moses, the tyrannical planner of New York, Zuckerberg, says Siva,

is a social engineer. He knows what’s best for us. And he believes that what’s best for Facebook is best for us. In the long run, he believes, Facebook’s domination will redeem him by making our lives better. We just have to surrender and let it all work out. Zuckerberg can entertain local magistrates like Trump because Zuckerberg remains emperor.

Nice, perceptive essay by a formidable scholar.

Are Universities Going the Way of CDs and Cable TV?

Although it probably seems inconceivable to those of us who work in universities, the shock of the pandemic will lead to radical changes in the way most of these institutions work. This essay is interesting because it’s by Michael Smith, who is Professor of Information Technology and Marketing at Carnegie Mellon and the co-author of Streaming, Sharing, Stealing: Big Data and the Future of Entertainment.

He starts with a question the Wall Street Journal asked in April:

Do students think their pricey degrees are worth the cost when delivered remotely? One student responded with this zinger, Smith writes:

“Would you pay $75,000 for front-row seats to a Beyoncé concert and be satisfied with a livestream instead?” Another compared higher education to premium cable—an annoyingly expensive bundle with more options than most people need. “Give me the basic package,” he said.

“As a parent of a college-age child”, Smith continues, “I’m sympathetic to these concerns. But as a college professor, I find them terrifying. And invigorating”.

Why terrifying?

Because I study how new technologies cause power shifts in industries, and I fear that the changes in store for higher education are going to look a lot like the painful changes we’ve seen in retail, travel, news, and entertainment.

Consider the entertainment industry.

Throughout the 20th century, the industry remained remarkably stable, despite technological innovations that regularly altered the ways movies, television, music, and books were created, distributed, and consumed. That stability, however, bred overconfidence, overpricing, and an overreliance on business models tailored to a physical world.

Trouble arrived early in the 21st century, when upstart companies powered by new digital technologies began to challenge the status quo. Entertainment executives reflexively dismissed the threat. Netflix was “a channel, not an alternative.” Amazon Studios was “in way over their heads.” YouTube? No self-respecting artist would ever use a DIY platform to start a career. In 1997, after one music executive heard songs compressed into the MP3 format, he refused to believe anybody would give up the sound quality of CDs for the portability of MP3s. “No one is going to listen to that shit,” he insisted. In 2013, the COO of Fox expressed similar skepticism about the impact of technological change on his business. “People will give up food and a roof over their head,” he told investors, “before they give up TV.”

We all know how that worked out: From 1999 to 2009, the music industry lost 50 percent of its sales. From 2014 to 2019, roughly 16 million American households canceled their cable subscriptions.

I remember this in the broadcasting business. In the mid- to late-1990s I was a consultant to a firm in the radio business. I spent many fruitless hours trying to explain to them the significance of streaming media, but they couldn’t get it. Where would all those servers come from? And what about the absence of broadband connections? And so on. The iPlayer and Video on Demand — and podcasting — were unimaginable then, even though they were emerging in embryonic form. (Anyone remember RealAudio?)

Similar dynamics are at play in higher education today, says Smith. Universities have long been remarkably stable institutions. But,

That stability has again bred overconfidence, overpricing, and an overreliance on business models tailored to a physical world. Like those entertainment executives, many of us in higher education dismiss the threats that digital technologies pose to the way we work. We diminish online-learning and credentialing platforms such as Khan Academy, Kaggle, and edX as poor substitutes for the “real thing.” We can’t imagine that “our” students would ever want to take a DIY approach to their education instead of paying us for the privilege of learning in our hallowed halls. We can’t imagine “our” employers hiring someone who doesn’t have one of our respected degrees.

But we’re going to have to start thinking differently…

Good essay. Worth reading in full if you work in Higher Ed. And the funniest thing of all is that Eli Noam published his amazingly far-sighted essay, “Electronics and the Dim Future of the University” in 1995! But it seems that no Vice-Chancellors or university Presidents read it! I did, though, because I was then teaching at the Open University, and of course we got it — but I guess that was probably because the OU was emphatically NOT a traditional university. We had no stake in the old system.

Oh, and if you haven’t been keeping up with how MOOCs have evolved, here’s a good example from Princeton.

Quarantine diary — Day 95


This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!

Tuesday 28 January, 2020

Remembering Seamus Heaney

We were in Dublin last weekend and were accordingly able to visit the celebration of Seamus Heaney organised by the National Library of Ireland, to which he had donated his papers before he died. (He piled them all into his car and drove them to the Library.) It’s in a wing of the wonderful old Irish Parliament building opposite Trinity College, and is an inspired piece of curation by Geraldine Higgins. It has four themes: Excavations, covering Heaney’s early work; Creativity, looking at how he worked; Conscience, on how he wrestled with what it meant to be a poet from Northern Ireland during the thirty years of violent conflict now known as ‘the Troubles’; and Marvels, about his later work exploring some of the ways in which he circled back to the relationships of his childhood and youth.

It’s a wonderful exhibition which brings out the imaginative genius of a great poet. What was striking about Heaney (or ‘Famous Seamus’ as my countrymen dubbed him in an unnecessary attempt to stop him getting a big head) was the way he managed to be both earthed and sublime. His poetry is accessible to everyone and moving; and yet he was also a real scholarly heavyweight — a translator of Virgil and of Beowulf, for example, as well as a holder of professorial chairs at Harvard and Oxford. (Evidence: his Nobel lecture or his Oxford lectures, The Redress of Poetry.)

He’s my favourite poet, and I spent some of the exhibition close to tears while at the same time marvelling at the arc of his life. It was lovely to see marked-up drafts of his poems at various stages in their composition. And he had such nice handwriting — light and crystal clear. Also (something dear to my heart) he always wrote with a fountain pen.

The nicest things came at the end: readings of his unconventional love poem to his wife Marie after they were married. His final text message to her just before he died: noli timere (don’t be afraid). And a recording of him reading my favourite poem, Postscript, celebrating the magical coastline of the Burren in Co Clare.

And some time make the time to drive out west
Into County Clare, along the Flaggy Shore,
In September or October, when the wind
And the light are working off each other
So that the ocean on one side is wild
With foam and glitter, and inland among stones
The surface of a slate-grey lake is lit
By the earthed lightning of a flock of swans,
Their feathers roughed and ruffling, white on white,
Their fully grown headstrong-looking heads
Tucked or cresting or busy underwater.
Useless to think you’ll park and capture it
More thoroughly. You are neither here nor there,
A hurry through which known and strange things pass
As big soft buffetings come at the car sideways
And catch the heart off guard and blow it open.

Unmissable. And it’ll run for three years.

Quote of the Day

“Don’t attribute to stupidity what can be explained by incentives”

  • Mike Elias

RIP Clayton Christensen, who coined the term ‘disruptive innovation’.

Kim Lyons wrote a nice obituary in The Verge.

Scores of notable tech leaders have for years cited Christensen’s 1997 book The Innovator’s Dilemma as a major influence. It’s the only business book on the late Steve Jobs’ must-read list; Netflix CEO Reed Hastings read it with his executive team when he was developing the idea for his company; and the late Andy Grove, CEO of Intel, said the book and Christensen’s theory were responsible for that company’s turnaround. After summoning Christensen to his office to explain why he thought Intel was going to get killed, Grove was able to grok what to do, Christensen recalled:

“They made the Celeron Processor. They blew Cyrix and AMD out of the water, and the Celeron became the highest-volume product in the company. The book came out in 1997, and the next year Grove gave the keynote at the annual conference for the Academy of Management. He holds up my book and basically says, “I don’t mean to be rude, but there’s nothing any of you have published that’s of use to me except this.”

Personally, I don’t think he ever recovered from Jill Lepore’s devastating critique of his theories in the New Yorker.

Facebook moves into global banking

This morning’s Observer column:

We’ve known for ages that somewhere in the bowels of Facebook people were beavering away designing a cryptocurrency. Various names were bandied about, including GlobalCoin and Facebook Coin. The latter led some people to conclude that it must be a joke. I mean to say, who would trust Facebook, of Cambridge Analytica fame, with their money?

Now it turns out that the rumours were true. Last week, Facebook unveiled its crypto plans in a white paper. It’s called Libra and it is a cryptocurrency, that is to say, “a digital asset designed to work as a medium of exchange that uses strong cryptography to secure financial transactions, control the creation of additional units and verify the transfer of assets”.

Like bitcoin, then? Er, not exactly…

Read on

LATER Merryn Somerset Webb of the Financial Times had a really good column ($) about the Facebook venture. Among the points she raises are:

  • Real cryptocurrencies are about privacy and freedom. They are decentralised and permissionless — no one runs them, no one can be prevented from using them and the system never needs reference to a central authority. (This last assertion is dubious — see Vili Lehdonvirta’s Turing Institute talk — but let that pass for now.) Libra is to be none of these wonderful things. It is to be run by a single organisation based in Switzerland. It is centralised and permissioned — and its value will not depend on anything intrinsic to it but on a basket of fiat currencies.

  • The interest from the deposits and government bonds that back Libra will not go to the people holding the currency. It will be used to pay for the system’s operating costs and, once those are covered, to the founding members as dividends.

  • There are real privacy concerns raised by Libra, especially in relation to Facebook’s role in it in relation to the metadata that Libra will throw up. “If you are worried about the way financial apps might use data on your spending patterns, you should be really worried about how a vast social network morphing into a financial network might use it. Anyone with your social media data can guess what you might buy. Anyone with your financial data knows already.”

  • If Libra really is based on a basket of fiat currencies and is stable as a result, it might not take long for us to refer to the value of things in Libras. A Libra could just be a Libra. That, says Webb, “is a sovereignty game-changer”.

  • If Libra succeeds, it won’t because it’s a real cryptocurrency. It’ll be because it isn’t.

How to think about electric — and autonomous — cars

Lovely, perceptive essay by Benedict Evans. Here’s how it opens…

When Nokia people looked at the first iPhone, they saw a not-great phone with some cool features that they were going to build too, being produced at a small fraction of the volumes they were selling. They shrugged. “No 3G, and just look at the camera!”

When many car company people look at a Tesla, they see a not-great car with some cool features that they’re going to build too, being produced at a small fraction of the volumes they’re selling. “Look at the fit and finish, and the panel gaps, and the tent!”

The Nokia people were terribly, terribly wrong. Are the car people wrong? We hear that a Tesla is ‘the new iPhone’ – what would that mean?

This is partly a question about Tesla, but it’s more interesting as a way to think about what happens when ‘software eats the world’ in general, and when tech moves into new industries. How do we think about whether something is disruptive? If it is, who exactly gets disrupted? And does that disruption mean one company wins in the new world? Which one?

Well worth reading in full.

Why the arrival, not the journey, matters

I have an article on the evolution of the Internet in a new journal — the Journal of Cyber Policy. I was asked to give a talk at the launch last week in Chatham House, home of the Royal Institute of International Affairs in London. Here’s my text.

One of my favourite autobiographies is that of Leonard Woolf, the saintly husband of Virginia. It’s a multi-volume work, but my favourite one is the volume covering the years 1939-1969. It’s entitled The Journey, Not the Arrival, Matters and it came to mind when I was pondering this talk, because in the case of the evolution of digital technology I think it’s the other way round: the arrival, not the journey, matters. And I’d like to explain why.

In 1999, Andy Grove, then the Chief Executive of the chip manufacturer Intel, said something interesting. “In five years’ time”, he declared, “companies that aren’t Internet companies won’t be companies at all”. He was speaking at the peak of the first Internet boom, when irrational exuberance ruled the world, but even so many people thought he was nuts. Was the CEO of Intel really saying that all companies needed to be selling information goods by 2004?

In fact, Grove was being characteristically perceptive. What he understood — way back in 1999 — was that the Internet was on its way to becoming a General Purpose Technology or GPT, like mains electricity, and that every organisation in the world would have to adapt to that reality. So on the big story, Andy was right; he was just a bit optimistic on the timing front.

My article in the first issue of the new journal is entitled “The evolution of the Internet”, but the real meat is in the subtitle: “From military experiment to General Purpose Technology”. I say that because as the network has been evolving we have focussed too much on one aspect of its development and impact — namely the production, consumption and exchange of information goods — and too little on the direction of travel, which — as my subtitle implies — is towards becoming a GPT.

Arthur C Clarke is famous for saying that any sufficiently advanced technology is indistinguishable from magic, and for most of its users the Internet already meets that requirement. As Eric Schmidt, Google’s Chairman, once observed, it is the first technology that humans have built that humans do not understand. But while a General Purpose Technology may or may not be incomprehensible to humans, it has impacts which are visible to everyone.

This is because GPTs have an impact on the world way beyond the domain in which they first appeared. They are technologies that can affect an entire economy and “have the potential to drastically alter societies through their impact on pre-existing economic and social structures” — to reshape the economy and boost productivity across all sectors and industries. Think steam engine, electricity, electronics, the automobile. And these transformations are about far more than simple technical innovation, because they often require the wholesale remaking of infrastructure, business models, and cultural norms. GPTs are the motive forces behind Joseph Schumpeter’s waves of ‘creative destruction’ and in that sense leave almost nothing untouched.

But if, as now seems obvious, the Internet is a GPT, then our societies are only at the beginning of a journey of adaptation, not the end. And this may surprise some people because the Internet is actually rather old technology. How you compute its age depends really on where you define its origins. But if you think — as I do — that it starts with Paul Baran’s concept of a packet-switched mesh in the early 1960s, then it’s now in its mid-fifties.

So you’d have thought that our society would have figured out the significance of the network by now. Sadly, not. And that’s not because we’re short of information and data about it. On the contrary, we are awash with the stuff. Our problem is that we don’t, as a culture, seem to understand it. We remain in that blissful state that Manuel Castells calls “informed bewilderment”. So a powerful force is loose in our societies and we don’t really understand it. Why is that?

One good reason is that digital technology is incomprehensible to ordinary human beings. In that sense, it’s very different from some GPTs of the past. You didn’t have to be a rocket scientist to understand steam power, for example. You might not know much about Boyle’s Law, but you could readily appreciate that steam could powerfully augment animal muscle power and dramatically speed up travel. But most people have very little idea of what digital technology can — and potentially could — do. And this is getting worse, not better, as encryption, machine-learning and other arcane technologies become commonplace.

Another reason for our bewilderment is that digital technology has some distinctive properties — the posh term for them is ‘affordances’ — that make it very different from the GPTs of the past. Among these affordances are:

  • Zero (or near-zero) marginal costs;
  • Very powerful network effects;
  • The dominance of Power Law statistical distributions (which tend towards winner-takes-all outcomes);
  • Technological lock-in (where a proprietary technology becomes the de-facto technical standard for an entire industry);
  • Intrinsic facilitation of exceedingly fine-grained surveillance;
  • Low entry thresholds (which facilitate what some scholars call “permissionless innovation”);
  • A development process characterised by ‘combinatorial innovation’ which can lead to sudden and unexpected new capabilities, and an acceleration in the pace of change and development;
  • And the fact that the ‘material’ that is processed by the technology is information — which is, among other things, the lifeblood of social and cultural life, not to mention of democracy itself.
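The interaction between network effects and winner-takes-all outcomes in that list can be made concrete with a toy simulation. The sketch below is my own illustration, not anything from the essay: it models users joining one of ten hypothetical platforms, with each new user drawn to a platform in proportion to its existing size (a crude stand-in for network effects). The skewed result — one platform ending up far larger than the rest — is the kind of distribution the Power Law bullet is pointing at.

```python
import random

def simulate_adoption(n_platforms=10, n_users=100_000, seed=42):
    """Each new user joins a platform with probability proportional to its
    current user count — a crude model of network effects (preferential
    attachment). Every platform starts with one seed user so each has a
    nonzero chance of being picked."""
    rng = random.Random(seed)
    sizes = [1] * n_platforms
    for _ in range(n_users):
        # Weighted choice: bigger platforms are more attractive to join.
        choice = rng.choices(range(n_platforms), weights=sizes)[0]
        sizes[choice] += 1
    return sorted(sizes, reverse=True)

sizes = simulate_adoption()
total = sum(sizes)
# Market shares, largest first: the leader typically ends up several
# times the size of the average platform, despite identical starts.
print([round(s / total, 3) for s in sizes])
```

The point of the toy model is that nothing distinguishes the platforms except early random luck; the feedback loop does the rest. That is winner-takes-all dynamics in miniature, with no quality difference required.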

These affordances make digital technology very different from the GPTs of the past. They’re what led me once, when seeking a pithy summary of the Internet for a lay audience, to describe it as “a global machine for springing surprises”. Many of these surprises have been relatively pleasant — for example the World Wide Web; VoIP (internet telephony); powerful search engines; Wikipedia; social networking services; digital maps. Others have been controversial — for example the file-sharing technologies that overwhelmed the music industry; or the belated discovery (courtesy of Edward Snowden) of the pervasive surveillance enabled by the technology and exploited by governments and corporations. And some surprises — particularly the capabilities for cybercrime, espionage, IP and identity theft, malware, blackmail, harassment, and information warfare — have been worrying and, in some cases, terrifying.

But maybe another reason why we are taken aback by the rise of the Internet is because we have been so dazzled by the technology that we have been infected by the technological determinism that is the prevailing ideology in the reality distortion field known as Silicon Valley. The folks there really do believe that technology drives history, which is why their totemic figures like Marc Andreessen — the guy who co-authored Mosaic, the first proper web browser, and now heads a leading venture capital firm — can utter infantile mantras like “software is eating the world” and not get laughed off the stage.

But technology is only one of the forces that drives history because it doesn’t exist — or come into being — in a vacuum. It exists in a social, cultural, political, economic and ideological context, and it is the resultant of these multifarious forces that determines the direction of travel. So in trying to understand the evolution of the Internet, we need to take these other forces into account.

As far as the Internet is concerned, for example, the things to remember are that, first of all, it was a child of the Cold War; that in its early manifestations it was influenced by a social ethos which had distinct counter-cultural overtones; and that it was only relatively late in its development that it was taken over by the corporate interests and intelligence concerns which now dominate it.

Oh — and I almost forgot — there is that enormous elephant in the room, namely that it was almost entirely an American creation, which perhaps explains why all the world’s major Internet companies — outside of China — are US corporations and thus powerful projectors of American ‘soft power’, a fact which — coincidentally — might help to explain current European fears about these companies.

Just for the avoidance of doubt, though, this is not a rant about American dominance. My personal opinion is that US stewardship of the Internet was largely benign for much of the network’s early history. But such stewardship was only acceptable for as long as the Internet was essentially confined to Western industrialised nations. Once the network became truly global, US dominance was always likely to be challenged. And so it has proved.

Another problem with focussing on the evolution of the network only in terms of technology is that it leads, inevitably, to a Whig Interpretation of its history — that is to say, a record of inexorable progress. And yet anyone who has ever been involved in such things knows that it’s never like that.

With hindsight, for example, we see packet-switching — the fundamental technology of the network — as an obvious and necessary concept. But, as Janet Abbate has pointed out in her illuminating history, it wasn’t like that at all. In 1960 packet-switching was an experimental, even controversial, idea; it was very difficult to implement initially and some communications experts (mostly working for AT&T) argued that it would never work at all. With the 20/20 vision of hindsight, these sceptics look foolish. But that’s always the problem with hindsight. At the time, the scepticism of these engineers was so vehement that it led Paul Baran to withdraw his proposal to build an experimental prototype of a packet-switched network, thereby delaying the start of the project by the best part of a decade.

Focussing exclusively on the technology creates other blind spots too. For example, it renders us insensitive to the extent to which the Internet — like all major technologies — was socially constructed. This is how, for example, surveillance became “the business model of the Internet” — as the security expert Bruce Schneier once put it. In this case the root cause was the interaction between a key affordance of the technology — the power of network effects — and Internet users’ pathological reluctance to pay for online services. Since the way to succeed commercially was to “get big fast” and since the quickest way to do that was to offer ‘free’ services, the business model that emerged was one in which users’ personal data and their data-trails were harvested and auctioned to advertisers and ad-brokers.

Thus was born a completely new kind of industrial activity — dubbed “surveillance capitalism” by the Harvard scholar Shoshana Zuboff — in which extractive corporations like Google and Facebook mine user data which can then be ‘refined’ (i.e. analysed) and sold to others for targeted advertising and other purposes. Although this kind of spying is technologically easy to implement, it could not have become the basis of huge industrial empires without user consent, or without legal arrangements which discourage assignment of ownership of distributed personal data.

One of the most noticeable things about our public discourse on the Internet is how a-historical it is. This is partly a reflection of the way the tech media work — most journalists who cover the industry are essentially perpetually engaged in “the sociology of the last five minutes,” chasing what Michael Lewis memorably described as The New New Thing. As a result, the underlying seismic shifts caused by the technology seem to go largely unnoticed or misunderstood by the public. Yet when we look back at the story so far, we can spot significant discontinuities.

One such, for example, was the appearance of Craigslist in 1996. It was a website providing free, localised classified advertising which started first in San Francisco and gradually spread to cover cities in 70 countries. For a surprisingly long time, the newspaper industry remained blissfully unaware of its significance. But if journalists had understood their industry better they would have seen the threat clearly.

For newspapers are value chains which link an expensive and loss-making activity called journalism with a profitable activity called classified advertising. But one of the affordances of the Internet is that it dissolves value chains, picking off the profitable bits that it can do better than conventional operations. And classified advertising turned out to be one of the things that the internet could do very well: instead of having to wade through acres of small print looking for that used car of your dreams, you simply typed your requirements into a search engine and Bingo! — there were the results. The end result was that newspapers were left holding only the unprofitable, loss-making, part of their value chains.

“The peace of God,” says the Bible, “passeth all understanding”. So too do the valuations of Internet companies. We saw that in the first Internet boom of 1995-2000 — that extraordinary outbreak of what the economist Robert Shiller dubbed “Irrational Exuberance” and which was later christened the “dot-com bubble”. What fuelled the mania was speculative interest in the stock-market valuation of the multitude of Web-based companies (‘dot-coms’) which materialised following Netscape’s IPO in 1995 and which was amplified by the fantasies of fund managers, stock analysts, journalists and pundits. As one sceptical observer put it, what really happened is that “Wall Street moved West”.

The core business model of these fledgling companies was the idea of harnessing the network effects implicit in the rapid growth of consumer interest in the Internet to obtain a dominant market share in a range of sectors. At the height of the frenzy, dot-com companies with few customers, few (sometimes no) revenues and handfuls of employees briefly enjoyed stock-market valuations greater than those of huge companies like General Motors.

The boom followed the traditional pattern of speculative manias through the centuries, and eventually, in March 2000, it burst. In just over a month the total market capitalisation of companies on the NASDAQ exchange fell from $6.71 trillion to $5.78 trillion. In other words, nearly a trillion dollars in value had been obliterated. And less than half of the dot-coms founded in the boom survived the crash.

But here’s the strange thing: the bubble created much of the technological infrastructure necessary to hasten the maturing of the network. When the mania began, some canny observers quoted the old maxim of the Californian gold rush of the 1850s – that the people who made most money in California were not the miners and prospectors, but the merchants who sold them pickaxes and shovels. The modern embodiments of those merchants were the telecommunications companies which in the 1990s invested heavily in building large fibre-optic cable networks and server farms to service the ‘new’ economy that was apparently coming into being. When the bubble burst, these companies were left with apparently unwanted assets, and some went bankrupt. But the infrastructure that they had built remained, and turned out to be critical for enabling what came next.

The interesting thing is that — to those who know their economic history — this is an old story. Brad DeLong points out, for example, that the ‘railway mania’ of the 19th century lost investors a lot of money, but the extensiveness of the railway network that was the product of the frenzy enabled completely new industries to be built. It was the completion of the railway network, for example, that enabled the rise of the mail-order industry — which for two generations was a licence to print money in the United States.

Similarly with the Internet. While the bubble caused a financial crash, it also resulted in a massive expansion of the communications infrastructure needed to turn the network into a ubiquitous public utility — a General Purpose Technology — much as happened with railway networks in the late 19th century. So now the internet is mature and extensive enough to serve as a foundation on which new kinds of innovation – much of it in areas apparently unrelated to information goods – can be built. In that context, it’s conceivable that enterprises like the cab-hailing application Uber, or the room-hiring service Airbnb, may turn out to be the contemporary equivalents of the mail-order services of the 19th century: unthinkable before the technology and unremarkable afterwards.

We’ve taken a long time to get here, but we’ve made it. Now all we have to do is figure out how to deal with it. Which is why I say that the arrival, not the journey, matters.

Thank you.

One funeral at a time

This morning’s Observer column:

Science advances, said the great German physicist Max Planck, “one funeral at a time”. Actually, this is a paraphrase of what he really said, which was: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” But you get the drift.

I always think of Planck’s aphorism whenever moral panic breaks out over the supposedly dizzying pace of technological change…

Read on

HMG wakes up to the potential of blockchain technology

This morning’s Observer column:

There are not many occasions when one can give an unqualified thumbs-up to something the government does, but this is one such occasion. Last week, Sir Mark Walport, the government’s chief scientific adviser, published a report with the forbidding title Distributed Ledger Technology: Beyond Block Chain. The report sets out the findings of an official study that explores how the aforementioned technology “can revolutionise services, both in government and the private sector”. Since this is the kind of talk one normally hears from loopy startup founders pitching to venture capitalists rather than from sober Whitehall mandarins, it made this columnist choke on his muesli – especially given that, in so far as Joe Public thinks about distributed ledgers at all, it is in the context of Bitcoin, money laundering and online drug dealing. So what, one is tempted to ask, has the chief scientific adviser been smoking?

Read on

Uber, disruption and Clayton Christensen

This morning’s Observer column:

Over the decades, “disruptive innovation” evolved into Silicon Valley’s highest aspiration. (It also fitted nicely with the valley’s attachment to Joseph Schumpeter’s idea about capitalism renewing itself in waves of “creative destruction”.) And, as often happens with soi-disant Big Ideas, Christensen’s insight has been debased by overuse. This, of course, does not please the Master, who is offended by ignorant jerks miming profundity by plagiarising his ideas.

Which brings us to an interesting article by Christensen and two of his academic colleagues in the current issue of the Harvard Business Review. It’s entitled “What Is Disruptive Innovation?” and in it the authors explain, in the soothing tones used by great minds when dealing with those of inferior intelligence, the essence of Christensen’s original concept. The article is eminently readable and cogent, but contains nothing new, so one begins to wonder what could be the peg for going over this particular piece of ground. And why now?

And then comes the answer: Uber. Christensen & co are obviously irritated by the valley’s conviction that the car-hailing service is a paradigm of disruptive innovation and so they devote a chunk of their article to arguing that while Uber might be disruptive – in the sense of being intensely annoying to the incumbents of the traditional taxi-cab industry – it is not a disruptive innovation in the Christensen sense…

Read on