Digital Dominance: forget the ‘digital’ bit

Some reflections on the symposium on “Digital Dominance: Implications and Risks” held by the LSE Media Policy Project on July 8, 2016.

In thinking about the dominance of the digital giants¹ we are ‘skating to where the puck has been’ rather than to where it is headed. It’s understandable that scholars who are primarily interested in questions like media power, censorship and freedom of expression should focus on the impact that these companies are having on the public sphere (and therefore on democracy). And these questions are undoubtedly important. But this focus, in a way, reflects a kind of parochialism that the companies themselves do not share. For they are not really interested in our information ecosystem per se, nor in democracy either, if it comes to that. They have bigger fish to fry.

How come? Well, there are two reasons. The first is that although those of us who work in media and education may not like to admit it, our ‘industries’ are actually pretty small beer in industrial terms. They pale into insignificance compared with, say, healthcare, energy or transportation. The second is that surveillance capitalism, the business model of the two ‘pure’ digital companies — Google and Facebook — is probably built on an unsustainable foundation, namely the mining, processing, analysis and sale of humanity’s digital exhaust. Their continued growth depends on a constant increase in the supply of this incredibly valuable (and free) feedstock. But if people, for one reason or another, were to decide that they would prefer to be doing something other than incessantly checking their phones, Googling or updating their social media statuses, then the evaporation of those companies’ stock market valuations would be a sight to behold. And if you think such an outcome implausible because of network effects and other factors, a glance at the history of the IT industry might give you pause for thought.

The folks who run these companies understand this. For if there is one thing that characterizes the leaders of Google and Facebook it is their determination to take the long, strategic view. This is partly a matter of temperament, but it is powerfully boosted by the way their companies are structured: the founders hold the ‘golden shares’, which ensure their continued control, regardless of the opinions of Wall Street analysts or ordinary shareholders. So if you own Google or Facebook stock and you don’t like what Larry Page or Mark Zuckerberg is up to, then your only option is to dispose of your shares.

Being strategic thinkers, these corporate bosses are positioning their organizations to make the leap from the relatively small ICT industry into the much bigger worlds of healthcare, energy and transportation. That’s why Google, for example, has significant investments in each of these sectors. Underpinning these commitments is an understanding that their unique mastery of cloud computing, big data analytics, sensor technology, machine learning and artificial intelligence will enable them to disrupt established industries and ways of working in these sectors and thereby greatly widen their industrial bases. So in that sense mastery of the ‘digital’ is just a means to much bigger ends. This is where the puck is headed.

So, in a way, Martin Moore’s comparison² of the digital giants of today with the great industrial trusts of the early 20th century is apt. But it underestimates the extent of the challenges we are about to face, for our contemporary versions of these behemoths are likely to become significantly more powerful, and therefore even more worrying for democracy.


  1. Or GAFA — Google, Apple, Facebook, Amazon — as our Continental friends call them, incorrectly in my view: Apple and Amazon are significantly different from the two ‘pure’ digital outfits. 

  2. Tech Giants and Civic Power, King’s College London, 2016. 

Why the arrival, not the journey, matters

I have an article on the evolution of the Internet in a new journal — the Journal of Cyber Policy. I was asked to give a talk at the launch last week in Chatham House, home of the Royal Institute of International Affairs in London. Here’s my text.


One of my favourite autobiographies is that of Leonard Woolf, the saintly husband of Virginia. It’s a multi-volume work, but the volume I like best is the one covering the years 1939-1969. It’s entitled The Journey, Not the Arrival, Matters, and it came to mind when I was pondering this talk, because in the case of the evolution of digital technology I think it’s the other way round: the arrival, not the journey, matters. And I’d like to explain why.

In 1999, Andy Grove, then the Chief Executive of the chip manufacturer Intel, said something interesting. “In five years’ time”, he declared, “companies that aren’t Internet companies won’t be companies at all”. He was speaking at the peak of the first Internet boom, when irrational exuberance ruled the world, but even so many people thought he was nuts. Was the CEO of Intel really saying that all companies needed to be selling information goods by 2004?

In fact, Grove was being characteristically perceptive. What he understood — way back in 1999 — was that the Internet was on its way to becoming a General Purpose Technology or GPT, like mains electricity, and that every organisation in the world would have to adapt to that reality. So on the big story, Andy was right; he was just a bit optimistic on the timing front.

My article in the first issue of the new journal is entitled “The evolution of the Internet”, but the real meat is in the subtitle: “From military experiment to General Purpose Technology”. I say that because as the network has been evolving we have focussed too much on one aspect of its development and impact — namely the production, consumption and exchange of information goods — and too little on the direction of travel, which — as my subtitle implies — is towards becoming a GPT.

Arthur C Clarke is famous for saying that any sufficiently advanced technology is indistinguishable from magic, and for most of its users the Internet already meets that requirement. As Eric Schmidt, Google’s Chairman, once observed, it is the first technology that humans have built that humans do not understand. But while a General Purpose Technology may or may not be incomprehensible to humans, it has impacts which are visible to everyone.

This is because GPTs have an impact on the world way beyond the domain in which they first appeared. They are technologies that can affect an entire economy and “have the potential to drastically alter societies through their impact on pre-existing economic and social structures”. Think steam engine, electricity, electronics, the automobile. GPTs have “the potential to reshape the economy and boost productivity across all sectors and industries, like electricity or the automobile”. And these transformations are about far more than simple technical innovation, because they often require the wholesale remaking of infrastructure, business models, and cultural norms. GPTs are the motive forces behind Joseph Schumpeter’s waves of ‘creative destruction’ and in that sense leave almost nothing untouched.

But if, as now seems obvious, the Internet is a GPT, then our societies are only at the beginning of a journey of adaptation, not the end. And this may surprise some people because the Internet is actually rather old technology. How you compute its age really depends on where you locate its origins. But if you think — as I do — that it starts with Paul Baran’s concept of a packet-switched mesh in the early 1960s, then it’s now in its mid-fifties.

So you’d have thought that our society would have figured out the significance of the network by now. Sadly, not. And that’s not because we’re short of information and data about it. On the contrary, we are awash with the stuff. Our problem is that we don’t, as a culture, seem to understand it. We remain in that blissful state that Manuel Castells calls “informed bewilderment”. So a powerful force is loose in our societies and we don’t really understand it. Why is that?

One good reason is that digital technology is incomprehensible to ordinary human beings. In that sense, it’s very different from some GPTs of the past. You didn’t have to be a rocket scientist to understand steam power, for example. You might not know much about Boyle’s Law, but you could readily appreciate that steam could powerfully augment animal muscle power and dramatically speed up travel. But most people have very little idea of what digital technology can — and potentially could — do. And this is getting worse, not better, as encryption, machine-learning and other arcane technologies become commonplace.

Another reason for our bewilderment is that digital technology has some distinctive properties — the posh term for them is ‘affordances’ — that make it very different from the GPTs of the past. Among these affordances are:

  • Zero (or near-zero) marginal costs;
  • Very powerful network effects;
  • The dominance of power-law statistical distributions (which tend towards winner-takes-all outcomes; a simulation sketch follows this list);
  • Technological lock-in (where a proprietary technology becomes the de-facto technical standard for an entire industry);
  • Intrinsic facilitation of exceedingly fine-grained surveillance;
  • Low entry thresholds (which facilitate what some scholars call “permissionless innovation”);
  • A development process characterised by ‘combinatorial innovation’ which can lead to sudden and unexpected new capabilities, and an acceleration in the pace of change and development;
  • And the fact that the ‘material’ that is processed by the technology is information — which is, among other things, the lifeblood of social and cultural life, not to mention of democracy itself.
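
To make the network-effects and winner-takes-all items concrete, here is a minimal simulation sketch in Python. It is a toy, not a model of any real market: the number of services, the number of users and the attachment exponent are all invented for illustration. Each new user joins a service with a probability that rises faster than linearly with that service's existing user base (a crude proxy for strong network effects), and almost all users end up concentrated on one or two services.

```python
import random

# Toy illustration of network effects and winner-takes-all dynamics.
# Each new user joins a service with probability proportional to
# (current user count) ** ALPHA. ALPHA > 1 mimics strong network
# effects: the attraction of a service grows faster than its size.
# All parameters here are arbitrary, for illustration only.
ALPHA = 1.5

def simulate(num_services=10, num_users=100_000, seed=42):
    rng = random.Random(seed)
    users = [1] * num_services                 # each service starts with one user
    for _ in range(num_users):
        weights = [n ** ALPHA for n in users]  # bigger services attract more joiners
        pick = rng.choices(range(num_services), weights=weights, k=1)[0]
        users[pick] += 1
    return users

if __name__ == "__main__":
    shares = sorted(simulate(), reverse=True)
    total = sum(shares)
    for rank, n in enumerate(shares, start=1):
        print(f"service {rank:2d}: {n:7d} users ({100 * n / total:5.1f}%)")
```

Run it with different seeds and the identity of the winner changes, but the skew does not; that is roughly what a power-law, winner-takes-all market looks like.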

These affordances make digital technology very different from the GPTs of the past. They’re what led me once, when seeking a pithy summary of the Internet for a lay audience, to describe it as “a global machine for springing surprises”. Many of these surprises have been relatively pleasant — for example the World Wide Web; VoIP (internet telephony); powerful search engines; Wikipedia; social networking services; digital maps. Others have been controversial — for example the file-sharing technologies that overwhelmed the music industry; or the belated discovery (courtesy of Edward Snowden) of the pervasive surveillance enabled by the technology and exploited by governments and corporations. And some surprises — particularly the capabilities for cybercrime, espionage, IP and identity theft, malware, blackmail, harassment, and information warfare — have been worrying and, in some cases, terrifying.

But maybe another reason why we are taken aback by the rise of the Internet is because we have been so dazzled by the technology that we have been infected by the technological determinism that is the prevailing ideology in the reality distortion field known as Silicon Valley. The folks there really do believe that technology drives history, which is why their totemic figures like Marc Andreessen — the guy who co-authored Mosaic, the first proper web browser, and now heads a leading venture capital firm — can utter infantile mantras like “software is eating the world” and not get laughed off the stage.

But technology is only one of the forces that drives history because it doesn’t exist — or come into being — in a vacuum. It exists in a social, cultural, political, economic and ideological context, and it is the resultant of these multifarious forces that determines the direction of travel. So in trying to understand the evolution of the Internet, we need to take these other forces into account.

As far as the Internet is concerned, for example, the things to remember are that, first of all, it was a child of the Cold War; that in its early manifestations it was influenced by a social ethos which had distinct counter-cultural overtones; and that it was only relatively late in its development that it was taken over by the corporate interests and intelligence concerns which now dominate it.

Oh — and I almost forgot — there is that enormous elephant in the room, namely that it was almost entirely an American creation, which perhaps explains why all the world’s major Internet companies — outside of China — are US corporations and thus powerful projectors of American ‘soft power’, a fact which — coincidentally — might help to explain current European fears about these companies.

Just for the avoidance of doubt, though, this is not a rant about American dominance. My personal opinion is that US stewardship of the Internet was largely benign for much of the network’s early history. But such stewardship was only acceptable for as long as the Internet was essentially confined to Western industrialised nations. Once the network became truly global, US dominance was always likely to be challenged. And so it has proved.

Another problem with focussing on the evolution of the network only in terms of technology is that it leads, inevitably, to a Whig interpretation of its history — that is to say, a record of inexorable progress. And yet anyone who has ever been involved in such things knows that it’s never like that.

With hindsight, for example, we see packet-switching — the fundamental technology of the network — as an obvious and necessary concept. But, as Janet Abbate has pointed out in her illuminating history, it wasn’t like that at all. In 1960 packet-switching was an experimental, even controversial, idea; it was very difficult to implement initially and some communications experts (mostly working for AT&T) argued that it would never work at all. With the 20/20 vision of hindsight, these sceptics look foolish. But that’s always the problem with hindsight. At the time, the scepticism of these engineers was so vehement that it led Paul Baran to withdraw his proposal to build an experimental prototype of a packet-switched network, thereby delaying the start of the project by the best part of a decade.
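
For readers who have never met the idea, here is a minimal sketch in Python of what packet-switching amounts to: chop a message into small, numbered packets that can travel independently and arrive in any order, then reassemble them at the destination using the sequence numbers. The message and the packet size are invented for illustration, and everything a real network adds (addressing, routing, error-checking, retransmission) is left out.

```python
import random

# Minimal illustration of packet-switching: split a message into
# numbered packets, let them arrive in an arbitrary order, then
# reassemble them using the sequence numbers. Packet size is a toy value.
PACKET_SIZE = 8  # bytes per packet

def packetise(message: bytes):
    return [(seq, message[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

def reassemble(packets):
    return b"".join(payload for _, payload in sorted(packets))

if __name__ == "__main__":
    msg = b"Packets can take different routes and still arrive intact."
    packets = packetise(msg)
    random.shuffle(packets)          # simulate out-of-order arrival
    assert reassemble(packets) == msg
    print(f"{len(packets)} packets reassembled correctly")
```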

Focussing exclusively on the technology creates other blind spots too. For example, it renders us insensitive to the extent to which the Internet — like all major technologies — was socially constructed. This is how, for example, surveillance became “the business model of the Internet” — as the security expert Bruce Schneier once put it. In this case the root cause was the interaction between a key affordance of the technology — the power of network effects — and Internet users’ pathological reluctance to pay for online services. Since the way to succeed commercially was to “get big fast” and since the quickest way to do that was to offer ‘free’ services, the business model that emerged was one in which users’ personal data and their data-trails were harvested and auctioned to advertisers and ad-brokers.

Thus was born a completely new kind of industrial activity — dubbed “surveillance capitalism” by the Harvard scholar Shoshana Zuboff — in which extractive corporations like Google and Facebook mine user data which can then be ‘refined’ (i.e. analysed) and sold to others for targeted advertising and other purposes. Although this kind of spying is technologically easy to implement, it could not have become the basis of huge industrial empires without user consent, or without legal arrangements which discourage the assignment of ownership of distributed personal data.

One of the most noticeable things about our public discourse on the Internet is how ahistorical it is. This is partly a reflection of the way the tech media work — most journalists who cover the industry are perpetually engaged in “the sociology of the last five minutes,” chasing what Michael Lewis memorably described as The New New Thing. As a result, the underlying seismic shifts caused by the technology are largely unnoticed or misunderstood by the public. Yet when we look back at the story so far, we can spot significant discontinuities.

One such, for example, was the appearance of Craigslist in 1996. It was a website providing free, localised classified advertising which started in San Francisco and gradually spread to cover cities in 70 countries. For a surprisingly long time, the newspaper industry remained blissfully unaware of its significance. But if journalists had understood their industry better they would have seen the threat clearly.

For newspapers are value chains which link an expensive and loss-making activity called journalism with a profitable activity called classified advertising. But one of the affordances of the Internet is that it dissolves value chains, picking off the profitable bits that it can do better than conventional operations. And classified advertising turned out to be one of the things that the Internet could do very well: instead of having to wade through acres of small print looking for that used car of your dreams, you simply typed your requirements into a search engine and Bingo! — there were the results. The end result was that newspapers were left holding only the unprofitable, loss-making part of their value chains.

“The peace of God,” says the Bible, “passeth all understanding”. So too do the valuations of Internet companies. We saw that in the first Internet boom of 1995-2000 — that extraordinary outbreak of what the economist Robert Shiller dubbed “Irrational Exuberance”, later christened the “dot-com bubble”. What fuelled the mania was speculative interest in the stock-market valuations of the multitude of Web-based companies (‘dot-coms’) that materialised following Netscape’s IPO in 1995, interest that was amplified by the fantasies of fund managers, stock analysts, journalists and pundits. As one sceptical observer put it, what really happened is that “Wall Street moved West”.

The core business model of these fledgling companies was the idea of harnessing the network effects implicit in the rapid growth of consumer interest in the Internet to obtain a dominant market share in a range of sectors. At the height of the frenzy, dot-com companies with few customers, few (sometimes no) revenues and handfuls of employees briefly enjoyed stock-market valuations greater than those of huge companies like General Motors.

The boom followed the traditional pattern of speculative manias through the centuries, and eventually, in March 2000, it burst. In just over a month the total market capitalisation of companies on the NASDAQ exchange fell from $6.71 trillion to $5.78 trillion. In other words, nearly a trillion dollars in value had been obliterated. And less than half of the dot-coms founded in the boom survived the crash.

But here’s the strange thing: the bubble created much of the technological infrastructure necessary to hasten the maturing of the network. When the mania began, some canny observers quoted the old maxim of the Californian gold rush of the 1850s – that the people who made most money in California were not the miners and prospectors, but the merchants who sold them pickaxes and shovels. The modern embodiments of those merchants were the telecommunications companies which in the 1990s invested heavily in building large fibre-optic cable networks and server farms to service the ‘new’ economy that was apparently coming into being. When the bubble burst, these companies were left with apparently unwanted assets, and some went bankrupt. But the infrastructure that they had built remained, and turned out to be critical for enabling what came next.

The interesting thing is that — to those who know their economic history — this is an old story. Brad DeLong points out, for example, that the ‘railway mania’ of the 19th century lost investors a lot of money, but the extensiveness of the railway network that was the product of the frenzy enabled completely new industries to be built. It was the completion of the railway network, for example, that enabled the rise of the mail-order industry — which for two generations was a licence to print money in the United States.

Similarly with the Internet. While the bubble caused a financial crash, it also resulted in a massive expansion in the communications infrastructure needed to turn the network into a ubiquitous public utility — a General Purpose Technology — much as happened with railway networks in the late 19th century. So now the Internet is mature and extensive enough to serve as a foundation on which new kinds of innovation – much of it in areas apparently unrelated to information goods – can be built. In that context, it’s conceivable that enterprises like the cab-hailing application Uber or the room-hiring service Airbnb may turn out to be the contemporary equivalent of the mail-order services of the 19th century: unthinkable before the technology and unremarkable afterwards.

We’ve taken a long time to get here, but we’ve made it. Now all we have to do is figure out how to deal with it. Which is why I say that the arrival, not the journey, matters.

Thank you.

Big data: the new gasoline

This morning’s Observer column:

“Data is the new oil,” declared Clive Humby, a mathematician who was the genius behind the Tesco Clubcard. This insight was later elaborated by Michael Palmer of the Association of National Advertisers. “Data is just like crude [oil],” said Palmer. “It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc to create a valuable entity that drives profitable activity; so must data be broken down, analysed for it to have value.”

There was just one thing wrong with the metaphor. Oil is a natural resource; it has to be found, drilled for and pumped from the bowels of the Earth. Data, in contrast, is a highly unnatural resource. It has to be created before it can be extracted and refined. Which raises the question of who, exactly, creates this magical resource? Answer: you and me…

Read on

If the EU doesn’t take on Google, who will?

Last Sunday’s Observer column:

Last week, the European commission, that bête noire of Messrs Gove, Johnson & co, resumed its attack on Google. On Wednesday, Eurocrats filed formal charges against the company, accusing it of abusing its dominance of the Android operating system, which is currently the world’s most-used mobile operating system. This new charge comes on top of an earlier case in which the commission accused Google of abusing its overwhelming dominance of the web-search market in Europe in order to favour its own enterprises over those of competitors.

This could be a big deal. If the commission decides that Google has indeed broken European competition law, then it can levy fines of up to 10% of the company’s annual global revenue for each of the charges. Given that Google’s global sales last year came to nearly $75bn, we’re talking about a possible fine of $15bn (£10.5bn). Even by Google standards, that’s serious money. And it’s not exactly an idle threat: in the past, the Eurocrats have taken more than a billion dollars off both Microsoft and Intel for such violations.

To those of us who follow these things, there’s a whiff of Back to the Future here.

Read on

Zuckerbergus Imperator

This morning’s Observer column:

Power and money are the two great aphrodisiacs, and few people or institutions are immune to their attractions. Not even the Economist, a posh magazine which resolutely sees itself as floating above the vulgar ruckus of journalistic hackery. Last week, like an elderly dowager seduced by Justin Bieber, the venerable publication checked its collective brains at the door and swooned over Mark Zuckerberg, the infant prodigy who now presides over Facebook, and so possesses both power and money.

For the cover illustration, the magazine photoshopped a picture of a celebrated statue of Emperor Constantine the Great (272-337). Young Zuckerberg’s head, adorned with a wreath of gold laurel leaves, replaced Constantine’s. The sword in his left hand was replaced by a Facebook logo, and the emperor’s languidly drooping right hand was rotated 180 degrees so that it now gave the thumbs-up that is Facebook’s “like” symbol. (The gesture had a rather different interpretation in Roman times.) On the plinth of the statue were the words: “MARCVS ZVCKERBERGVS” and “CONIVNGE ET IMPERA”, which is the nearest the photoshopper could get to “connect and rule”.

On inside pages one finds an editorial and a long article explaining why Marcvs Z is the greatest thing since Constantine.

Read on

Ad-blocking hypocrisy

[Image: NYT tracking hypocrisy]

The illustration above comes from a typically insightful piece by Doc Searls about the blight that covert web-tracking has unleashed on the Web. Interestingly, he points out that the trackers are, in fact, not important for the Times.

Those four tracking-protecting systems (RedMorph, Privacy Badger, Ghostery and Disconnect) would all have given green lights to the Times if the paper just ran ads that aren’t based on tracking. You know, like the ones they run in print. Advertisers would still reach the Times’ desirable readers. And signaling to readers by advertisers would be clear and uncontaminated by the shitty practices that now pollute the whole digital media environment.

Great stuff. Worth reading in full.

Corporate logic

Apple has over $200B in cash, and yet it borrows money to fund buy-backs of its shares — to keep its investors happy. How come?

Simple, says the NYT:

Mr. Maestri [Apple’s CFO] said that Apple would continue to raise money in debt markets in the United States and abroad to continue to return money to investors in the form of dividends and stock buybacks. Because Apple houses the majority of its $216 billion in cash overseas, it has borrowed money over the last three years to pay out more than $9 billion to investors.

And why is that $216B housed overseas? Equally simple: if Apple repatriated it to the US, it would have to pay tax.

Uber, disruption and Clayton Christensen

This morning’s Observer column:

Over the decades, “disruptive innovation” evolved into Silicon Valley’s highest aspiration. (It also fitted nicely with the valley’s attachment to Joseph Schumpeter’s idea about capitalism renewing itself in waves of “creative destruction”.) And, as often happens with soi-disant Big Ideas, Christensen’s insight has been debased by overuse. This, of course, does not please the Master, who is offended by ignorant jerks miming profundity by plagiarising his ideas.

Which brings us to an interesting article by Christensen and two of his academic colleagues in the current issue of the Harvard Business Review. It’s entitled “What Is Disruptive Innovation?” and in it the authors explain, in the soothing tones used by great minds when dealing with those of inferior intelligence, the essence of Christensen’s original concept. The article is eminently readable and cogent, but contains nothing new, so one begins to wonder what could be the peg for going over this particular piece of ground. And why now?

And then comes the answer: Uber. Christensen & co are obviously irritated by the valley’s conviction that the car-hailing service is a paradigm of disruptive innovation and so they devote a chunk of their article to arguing that while Uber might be disruptive – in the sense of being intensely annoying to the incumbents of the traditional taxi-cab industry – it is not a disruptive innovation in the Christensen sense…

Read on

Algorithmic-driven markets and the future

This morning’s Observer column:

“When a true genius appears,” wrote Jonathan Swift, “you can know him by this sign: that all the dunces are in a confederacy against him.” We need to update this for our age: whenever a really new technology arrives, you can tell it by the fact that most right-thinking people think it’s a scam.

Thus, to the average person the idea of a “cryptocurrency” like Bitcoin seems daft. I mean to say: a “currency” that was invented by a geek; is not backed by any bank or government; has no central authority; and operates on the basis of a public ledger that is secured by arcane cryptography. It has to be a scam, doesn’t it? Well, actually it doesn’t – but it would take more space than is available here to explain why. The point is that most people can’t see the point of cryptocurrencies, which, paradoxically, is why they are interesting.
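
For the curious, the “public ledger” is less arcane than it sounds. Here is a minimal sketch in Python of the core trick: each entry records a cryptographic hash of the previous entry, so quietly rewriting history breaks the chain. This shows only the chaining idea, with invented transactions; it leaves out everything else that makes Bitcoin Bitcoin (proof-of-work mining, digital signatures, the peer-to-peer network).

```python
import hashlib
import json

# Minimal hash-chained ledger: each block stores the hash of the
# previous block, so altering any earlier record invalidates all
# later links. Mining, signatures and networking are omitted.
def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

if __name__ == "__main__":
    ledger = []
    for entry in ["alice pays bob 5", "bob pays carol 2", "carol pays dan 1"]:
        add_block(ledger, entry)
    print("valid:", verify(ledger))                  # True
    ledger[0]["data"] = "alice pays bob 500"         # tamper with history
    print("valid after tampering:", verify(ledger))  # False
```

Tamper with any earlier entry and every later link stops matching, which is how the ledger makes its own history tamper-evident.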

On the other hand, most people – non-geeks as well as geeks – can see the point of Uber, the cab-hailing service that is causing such turmoil on the other side of the Channel (and occasionally over here too). You download an app to your smartphone. When you need a cab you launch the app and it shows you on a map where the nearest available cars are, and you hail the nearest one. Within three to five minutes it shows up. And when you arrive at your destination, you don’t pay the driver: the fare is charged to your credit card. QED.
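
Out of curiosity about what sits behind that “nearest available car” step, here is a minimal sketch in Python. It is a guess at the simplest possible version, not Uber’s actual dispatch logic: it just computes the great-circle (haversine) distance from the rider to each available driver and picks the closest, with invented coordinates and none of the routing, traffic or pricing a real system would handle.

```python
from math import radians, sin, cos, asin, sqrt

# Toy version of "find the nearest available car": great-circle
# (haversine) distance from the rider to each driver, pick the minimum.
# Coordinates are invented; a real dispatch system would also consider
# road routing, traffic, driver acceptance and so on.
def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))   # Earth radius of roughly 6371 km

def nearest_driver(rider, drivers):
    return min(drivers, key=lambda d: haversine_km(*rider, d["lat"], d["lon"]))

if __name__ == "__main__":
    rider = (51.5074, -0.1278)   # central London (illustrative)
    drivers = [
        {"id": "car-1", "lat": 51.5115, "lon": -0.1160},
        {"id": "car-2", "lat": 51.4975, "lon": -0.1357},
        {"id": "car-3", "lat": 51.5205, "lon": -0.0975},
    ]
    print("hail", nearest_driver(rider, drivers)["id"])
```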

Compared with currencies, therefore, Uber seems pretty comprehensible…

Read on