Here’s a telling excerpt from a fine piece about Facebook by Farhad Manjoo:
The people who work on News Feed aren’t making decisions that turn on fuzzy human ideas like ethics, judgment, intuition or seniority. They are concerned only with quantifiable outcomes about people’s actions on the site. That data, at Facebook, is the only real truth. And it is a particular kind of truth: The News Feed team’s ultimate mission is to figure out what users want — what they find “meaningful,” to use Cox and Zuckerberg’s preferred term — and to give them more of that.
This ideal runs so deep that the people who make News Feed often have to put aside their own notions of what’s best. “One of the things we’ve all learned over the years is that our intuition can be wrong a fair amount of the time,” John Hegeman, the vice president of product management and a News Feed team member, told me. “There are things you don’t expect will happen. And we learn a lot from that process: Why didn’t that happen, and what might that mean?” But it is precisely this ideal that conflicts with attempts to wrangle the feed in the way press critics have called for. The whole purpose of editorial guidelines and ethics is often to suppress individual instincts in favor of some larger social goal. Facebook finds it very hard to suppress anything that its users’ actions say they want. In some cases, it has been easier for the company to seek out evidence that, in fact, users don’t want these things at all.
Facebook’s two-year-long battle against “clickbait” is a telling example. Early this decade, the internet’s headline writers discovered the power of stories that trick you into clicking on them, like those that teasingly withhold information from their headlines: “Dustin Hoffman Breaks Down Crying Explaining Something That Every Woman Sadly Already Experienced.” By the fall of 2013, clickbait had overrun News Feed. Upworthy, a progressive activism site co-founded by Eli Pariser, the author of “The Filter Bubble,” that relied heavily on teasing headlines, was attracting 90 million readers a month to its feel-good viral posts.
If a human editor ran News Feed, she would look at the clickbait scourge and make simple, intuitive fixes: Turn down the Upworthy knob. But Facebook approaches the feed as an engineering project rather than an editorial one. When it makes alterations in the code that powers News Feed, it’s often only because it has found some clear signal in its data that users are demanding the change. In this sense, clickbait was a riddle. In surveys, people kept telling Facebook that they hated teasing headlines. But if that was true, why were they clicking on them? Was there something Facebook’s algorithm was missing, some signal that would show that despite the clicks, clickbait was really sickening users?
If you want to understand why fake news will be a hard problem to crack, this is a good place to start.
Much has been made in previous histories of Silicon Valley’s counter-cultural origins. Taplin finds other, less agreeable roots, notably in the writings of Ayn Rand, a flake of Cadbury proportions who had an astonishing impact on many otherwise intelligent individuals. These include Alan Greenspan, the Federal Reserve chairman who presided over events leading to the banking collapse of 2008, and [Peter] Thiel, who made an early fortune out of PayPal and was the first investor in Facebook. Rand believed that “achievement of your happiness is the only moral purpose of your life”. She had no time for altruism, government or anything else that might interfere with capitalism red in tooth and claw.
Neither does Thiel. For him, “competition is for losers”. He believes in investing only in companies that have the potential to become monopolies and he thinks monopolies are good for society. “Americans mythologise competition and credit it with saving us from socialist bread lines,” he once wrote. “Actually, capitalism and competition are opposites. Capitalism is premised on the accumulation of capital, but under perfect competition, all profits get competed away.”
The three great monopolies of the digital world have followed the Thiel playbook and Taplin does a good job of explaining how each of them works and how, strangely, their vast profits are never “competed away”. He also punctures the public image so assiduously fostered by Google and Facebook – that they are basically cool tech companies run by good chaps (and they are still mainly chaps, btw) who are hellbent on making the world a better place – whereas, in fact, they are increasingly hard to distinguish from the older brutes of the capitalist jungle…
This morning’s Observer column:
And so the advertisers’ money, diverted from print and TV, cascaded into the coffers of Google and co. In 2012, Procter & Gamble announced that it would make $1bn in savings by targeting consumers through digital and social media. It has got to the point where, according to last week’s Financial Times, 2017 will be the year when advertisers spend more online than they do on TV.
Trebles all round, then? Not quite. It turns out that the advertising industry is beginning to smell a rat in this hi-tech nirvana. In a speech to the annual conference of the Internet Advertising Bureau in January, the Procter & Gamble boss, Marc Pritchard, said this: “We have seen an exponential increase in, well… crap. Craft or crap? Technology enables both and all too often the outcome has been more crappy advertising accompanied by even crappier viewing experiences… is it any wonder ad blockers are growing 40%?”
But the exponential growth in crap is not the biggest problem, he said. Much more worrying was the return of the Wanamaker problem: how many people are actually seeing these ads?
This neat formulation from a 2014 essay by Shoshana Zuboff:
We often hear that our privacy rights have been eroded and secrecy has grown. But that way of framing things obscures what’s really at stake. Privacy hasn’t been eroded. It’s been expropriated. The difference in framing provides new ways to define the problem and consider solutions.
In the conventional telling, privacy and secrecy are treated as opposites. In fact, one is a cause and the other is an effect. Exercising our right to privacy leads to choice. We can choose to keep something secret or to share it, but we only have that choice when we first have privacy. Privacy rights confer decision rights. Privacy lets us decide where we want to be on the spectrum between secrecy and transparency in each situation. Secrecy is the effect; privacy is the cause.
I suggest that privacy rights have not been eroded, if anything they’ve multiplied. The difference now is how these rights are distributed. Instead of many people having some privacy rights, nearly all the rights have been concentrated in the hands of a few. On the one hand, we have lost the ability to choose what we keep secret, and what we share. On the other, Google, the NSA, and others in the new zone have accumulated privacy rights. How? Most of their rights have come from taking ours without asking. But they also manufactured new rights for themselves, the way a forger might print currency. They assert a right to privacy with respect to their surveillance tactics and then exercise their choice to keep those tactics secret.
We need more writing like this. On the phony ‘privacy vs security’ question, for example.
As George Lakoff pointed out many years ago (but only right-wingers listened), creative framing is the way to win both arguments and votes.
Some reflections on the symposium on “Digital Dominance: Implications and Risks” held by the LSE Media Policy Project on July 8, 2016.
In thinking about the dominance of the digital giants we are ‘skating to where the puck has been’ rather than to where it is headed. It’s understandable that scholars who are primarily interested in questions like media power, censorship and freedom of expression should focus on the impact that these companies are having on the public sphere (and therefore on democracy). And these questions are undoubtedly important. But this focus, in a way, reflects a kind of parochialism that the companies themselves do not share. For they are not really interested in our information ecosystem per se, nor in democracy either, if it comes to that. They have bigger fish to fry.
How come? Well, there are two reasons. The first is that although those of us who work in media and education may not like to admit it, our ‘industries’ are actually pretty small beer in industrial terms. They pale into insignificance compared with, say, healthcare, energy or transportation. Secondly, surveillance capitalism, the business model of the two ‘pure’ digital companies — Google and Facebook — is probably built on an unsustainable foundation, namely the mining, processing, analysis and sale of humanity’s digital exhaust. Their continued growth depends on a constant increase in the supply of this incredibly valuable (and free) feedstock. But if people, for one reason or another, were to decide that they would prefer to be doing something other than incessantly checking their phones, Googling or updating their social media statuses, then the evaporation of those companies’ stock market valuations would be a sight to behold. And while one can argue that such an outcome seems implausible, because of network effects and other factors, a glance at the history of the IT industry might give you pause for thought.
The folks who run these companies understand this. For if there is one thing that characterizes the leaders of Google and Facebook it is their determination to take the long, strategic view. This is partly a matter of temperament, but it is powerfully boosted by the way their companies are structured: the founders hold the ‘golden shares’ which ensures their continued control, regardless of the opinions of Wall Street analysts or ordinary shareholders. So if you own Google or Facebook stock and you don’t like what Larry Page or Mark Zuckerberg are up to, then your only option is to dispose of your shares.
Being strategic thinkers, these corporate bosses are positioning their organizations to make the leap from the relatively small ICT industry into the much bigger worlds of healthcare, energy and transportation. That’s why Google, for example, has significant investments in each of these sectors. Underpinning these commitments is an understanding that their unique mastery of cloud computing, big data analytics, sensor technology, machine learning and artificial intelligence will enable them to disrupt established industries and ways of working in these sectors and thereby greatly widen their industrial bases. So in that sense mastery of the ‘digital’ is just a means to much bigger ends. This is where the puck is headed.
So, in a way, Martin Moore’s comparison of the digital giants of today with the great industrial trusts of the early 20th century is apt. But it underestimates the extent of the challenges we are about to face, for our contemporary versions of these behemoths are likely to become significantly more powerful, and therefore even more worrying for democracy.
I have an article on the evolution of the Internet in a new journal — the Journal of Cyber Policy. I was asked to give a talk at the launch last week in Chatham House, home of the Royal Institute of International Affairs in London. Here’s my text.
One of my favourite autobiographies is that of Leonard Woolf, the saintly husband of Virginia. It’s a multi-volume work, but my favourite one is the volume covering the years 1939-1969. It’s entitled The Journey, Not the Arrival, Matters and it came to mind when I was pondering this talk, because in the case of the evolution of digital technology I think it’s the other way round: the arrival, not the journey, matters. And I’d like to explain why.
In 1999, Andy Grove, then the Chief Executive of the chip manufacturer Intel, said something interesting. “In five years’ time”, he declared, “companies that aren’t Internet companies won’t be companies at all”. He was speaking at the peak of the first Internet boom, when irrational exuberance ruled the world, but even so many people thought he was nuts. Was the CEO of Intel really saying that all companies needed to be selling information goods by 2004?
In fact, Grove was being characteristically perceptive. What he understood — way back in 1999 — was that the Internet was on its way to becoming a General Purpose Technology or GPT, like mains electricity, and that every organisation in the world would have to adapt to that reality. So on the big story, Andy was right; he was just a bit optimistic on the timing front.
My article in the first issue of the new journal is entitled “The evolution of the Internet”, but the real meat is in the subtitle: “From military experiment to General Purpose Technology”. I say that because as the network has been evolving we have focussed too much on one aspect of its development and impact — namely the production, consumption and exchange of information goods — and too little on the direction of travel, which — as my subtitle implies — is towards becoming a GPT.
Arthur C Clarke is famous for saying that any sufficiently advanced technology is indistinguishable from magic, and for most of its users the Internet already meets that requirement. As Eric Schmidt, Google’s Chairman, once observed, it is the first technology that humans have built that humans do not understand. But while a General Purpose Technology may or may not be incomprehensible to humans, it has impacts which are visible to everyone.
This is because GPTs have an impact on the world way beyond the domain in which they first appeared. They are technologies that can affect an entire economy and “have the potential to drastically alter societies through their impact on pre-existing economic and social structures”. Think steam engine, electricity, electronics, the automobile. And these transformations are about far more than simple technical innovation, because they often require the wholesale remaking of infrastructure, business models and cultural norms. GPTs are the motive forces behind Joseph Schumpeter’s waves of ‘creative destruction’ and in that sense leave almost nothing untouched.
But if, as now seems obvious, the Internet is a GPT, then our societies are only at the beginning of a journey of adaptation, not the end. And this may surprise some people because the Internet is actually rather old technology. How you compute its age really depends on where you locate its origins. But if you think — as I do — that it starts with Paul Baran’s concept of a packet-switched mesh in the early 1960s, then it’s now in its mid-fifties.
So you’d have thought that our society would have figured out the significance of the network by now. Sadly, not. And that’s not because we’re short of information and data about it. On the contrary, we are awash with the stuff. Our problem is that we don’t, as a culture, seem to understand it. We remain in that blissful state that Manuel Castells calls “informed bewilderment”. So a powerful force is loose in our societies and we don’t really understand it. Why is that?
One good reason is that digital technology is incomprehensible to ordinary human beings. In that sense, it’s very different from some GPTs of the past. You didn’t have to be a rocket scientist to understand steam power, for example. You might not know much about Boyle’s Law, but you could readily appreciate that steam could powerfully augment animal muscle power and dramatically speed up travel. But most people have very little idea of what digital technology can — and potentially could — do. And this is getting worse, not better, as encryption, machine-learning and other arcane technologies become commonplace.
Another reason for our bewilderment is that digital technology has some distinctive properties — the posh term for them is ‘affordances’ — that make it very different from the GPTs of the past. Among these affordances are:
- Zero (or near-zero) marginal costs;
- Very powerful network effects;
- The dominance of Power Law statistical distributions (which tend towards winner-takes-all outcomes);
- Technological lock-in (where a proprietary technology becomes the de-facto technical standard for an entire industry);
- Intrinsic facilitation of exceedingly fine-grained surveillance;
- Low entry thresholds (which facilitate what some scholars call “permissionless innovation”);
- A development process characterised by ‘combinatorial innovation’ which can lead to sudden and unexpected new capabilities, and an acceleration in the pace of change and development;
- And the fact that the ‘material’ that is processed by the technology is information — which is, among other things, the lifeblood of social and cultural life, not to mention of democracy itself.
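The second and third of these affordances — network effects and the pull towards winner-takes-all Power Law outcomes — can be illustrated with a toy simulation. The sketch below (illustrative only: the platform count, user count and seed are arbitrary assumptions, not figures from the text) models simple preferential attachment, in which each new user joins a platform with probability proportional to its current size. Even starting from perfect equality, one platform ends up with the lion’s share of users.

```python
import random

def preferential_attachment(n_platforms=5, n_users=100_000, seed=42):
    """Simulate new users joining platforms with probability
    proportional to current size (each platform starts with 1 user)."""
    rng = random.Random(seed)
    sizes = [1] * n_platforms
    for _ in range(n_users):
        # rng.choices weights each platform by its current size, so
        # bigger platforms attract disproportionately many newcomers.
        winner = rng.choices(range(n_platforms), weights=sizes)[0]
        sizes[winner] += 1
    return sorted(sizes, reverse=True)

sizes = preferential_attachment()
total = sum(sizes)
print([round(s / total, 2) for s in sizes])  # market shares, largest first
```

Run it a few times with different seeds and the identity of the winner changes, but the lopsided shape of the outcome does not — which is roughly why ‘getting big fast’ matters so much in markets with strong network effects.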
These affordances make digital technology very different from the GPTs of the past. They’re what led me once, when seeking a pithy summary of the Internet for a lay audience, to describe it as “a global machine for springing surprises”. Many of these surprises have been relatively pleasant — for example the World Wide Web; VoIP (internet telephony); powerful search engines; Wikipedia; social networking services; digital maps. Others have been controversial — for example the file-sharing technologies that overwhelmed the music industry; or the belated discovery (courtesy of Edward Snowden) of the pervasive surveillance enabled by the technology and exploited by governments and corporations. And some surprises — particularly the capabilities for cybercrime, espionage, IP and identity theft, malware, blackmail, harassment, and information warfare — have been worrying and, in some cases, terrifying.
But maybe another reason why we are taken aback by the rise of the Internet is because we have been so dazzled by the technology that we have been infected by the technological determinism that is the prevailing ideology in the reality distortion field known as Silicon Valley. The folks there really do believe that technology drives history, which is why their totemic figures like Marc Andreessen — the guy who co-authored Mosaic, the first proper web browser, and now heads a leading venture capital firm — can utter infantile mantras like “software is eating the world” and not get laughed off the stage.
But technology is only one of the forces that drives history because it doesn’t exist — or come into being — in a vacuum. It exists in a social, cultural, political, economic and ideological context, and it is the resultant of these multifarious forces that determines the direction of travel. So in trying to understand the evolution of the Internet, we need to take these other forces into account.
As far as the Internet is concerned, for example, the things to remember are that, first of all, it was a child of the Cold War; that in its early manifestations it was influenced by a social ethos which had distinct counter-cultural overtones; and that it was only relatively late in its development that it was taken over by the corporate interests and intelligence concerns which now dominate it.
Oh — and I almost forgot — there is that enormous elephant in the room, namely that it was almost entirely an American creation, which perhaps explains why all the world’s major Internet companies — outside of China — are US corporations and thus powerful projectors of American ‘soft power’, a fact which — coincidentally — might help to explain current European fears about these companies.
Just for the avoidance of doubt, though, this is not a rant about American dominance. My personal opinion is that US stewardship of the Internet was largely benign for much of the network’s early history. But such stewardship was only acceptable for as long as the Internet was essentially confined to Western industrialised nations. Once the network became truly global, US dominance was always likely to be challenged. And so it has proved.
Another problem with focussing on the evolution of the network only in terms of technology is that it leads, inevitably, to a Whig Interpretation of its history — that is to say, a record of inexorable progress. And yet anyone who has ever been involved in such things knows that it’s never like that.
With hindsight, for example, we see packet-switching — the fundamental technology of the network — as an obvious and necessary concept. But, as Janet Abbate has pointed out in her illuminating history, it wasn’t like that at all. In 1960 packet-switching was an experimental, even controversial, idea; it was very difficult to implement initially and some communications experts (mostly working for AT&T) argued that it would never work at all. With the 20/20 vision of hindsight, these sceptics look foolish. But that’s always the problem with hindsight. At the time, the scepticism of these engineers was so vehement that it led Paul Baran to withdraw his proposal to build an experimental prototype of a packet-switched network, thereby delaying the start of the project by the best part of a decade.
Focussing exclusively on the technology creates other blind spots too. For example, it renders us insensitive to the extent to which the Internet — like all major technologies — was socially constructed. This is how, for example, surveillance became “the business model of the Internet” — as the security expert Bruce Schneier once put it. In this case the root cause was the interaction between a key affordance of the technology — the power of network effects — and Internet users’ pathological reluctance to pay for online services. Since the way to succeed commercially was to “get big fast” and since the quickest way to do that was to offer ‘free’ services, the business model that emerged was one in which users’ personal data and their data-trails were harvested and auctioned to advertisers and ad-brokers.
Thus was born a completely new kind of industrial activity — dubbed “surveillance capitalism” by the Harvard scholar Shoshana Zuboff — in which extractive corporations like Google and Facebook mine user data which can then be ‘refined’ (i.e. analysed) and sold to others for targeted advertising and other purposes. Although this kind of spying is technologically easy to implement, it could not have become the basis of huge industrial empires without user consent, or without legal arrangements which discourage the assignment of ownership of distributed personal data.
One of the most noticeable things about our public discourse on the Internet is how ahistorical it is. This is partly a reflection of the way the tech media work — most journalists who cover the industry are essentially perpetually engaged in “the sociology of the last five minutes,” chasing what Michael Lewis memorably described as The New New Thing. As a result, the underlying seismic shifts caused by the technology seem to go largely unnoticed or misunderstood by the public. Yet when we look back at the story so far, we can spot significant discontinuities.
One such, for example, was the appearance of Craigslist in 1996. It was a website providing free, localised classified advertising which started first in San Francisco and gradually spread to cover cities in 70 countries. For a surprisingly long time, the newspaper industry remained blissfully unaware of its significance. But if journalists had understood their industry better they would have seen the threat clearly.
For newspapers are value chains which link an expensive and loss-making activity called journalism with a profitable activity called classified advertising. But one of the affordances of the Internet is that it dissolves value chains, picking off the profitable bits that it can do better than conventional operations. And classified advertising turned out to be one of the things that the internet could do very well: instead of having to wade through acres of small print looking for that used car of your dreams, you simply typed your requirements into a search engine and Bingo! — there were the results. The end result was that newspapers were left holding only the unprofitable, loss-making, part of their value chains.
“The peace of God,” says the Bible, “passeth all understanding”. So too do the valuations of Internet companies. We saw that in the first Internet boom of 1995-2000 — that extraordinary outbreak of what the economist Robert Shiller dubbed “Irrational Exuberance” and which was later christened the “dot-com bubble”. What fuelled the mania was speculative interest in the stock-market valuation of the multitude of Web-based companies (‘dot-coms’) which materialised following Netscape’s IPO in 1995, a frenzy amplified by the fantasies of fund managers, stock analysts, journalists and pundits. As one sceptical observer put it, what really happened is that “Wall Street moved West”.
The core business model of these fledgling companies was the idea of harnessing the network effects implicit in the rapid growth of consumer interest in the Internet to obtain a dominant market share in a range of sectors. At the height of the frenzy, dot-com companies with few customers, few (sometimes no) revenues and handfuls of employees briefly enjoyed stock-market valuations greater than those of huge companies like General Motors.
The boom followed the traditional pattern of speculative manias through the centuries, and eventually, in March 2000, it burst. In just over a month the total market capitalisation of companies on the NASDAQ exchange fell from $6.71 trillion to $5.78 trillion. In other words, nearly a trillion dollars in value had been obliterated. And less than half of the dot-coms founded in the boom survived the crash.
But here’s the strange thing: the bubble created much of the technological infrastructure necessary to hasten the maturing of the network. When the mania began, some canny observers quoted the old maxim of the Californian gold rush of the 1850s – that the people who made most money in California were not the miners and prospectors, but the merchants who sold them pickaxes and shovels. The modern embodiments of those merchants were the telecommunications companies which in the 1990s invested heavily in building large fibre-optic cable networks and server farms to service the ‘new’ economy that was apparently coming into being. When the bubble burst, these companies were left with apparently unwanted assets, and some went bankrupt. But the infrastructure that they had built remained, and turned out to be critical for enabling what came next.
The interesting thing is that — to those who know their economic history — this is an old story. Brad DeLong points out, for example, that the ‘railway mania’ of the 19th century lost investors a lot of money, but the extensiveness of the railway network that was the product of the frenzy enabled completely new industries to be built. It was the completion of the railway network, for example, that enabled the rise of the mail-order industry — which for two generations was a licence to print money in the United States.
Similarly with the Internet. While the bubble caused a financial crash, it also resulted in a massive expansion in the communications infrastructure needed to turn the network into a ubiquitous public utility — a General Purpose Technology — much as happened with railway networks in the late 19th century. So now the internet is mature and extensive enough to serve as a foundation on which new kinds of innovation – much of it in areas apparently unrelated to information goods – can be built. In that context, it’s conceivable that enterprises like the cab-hailing application Uber or the room-hiring service Airbnb may turn out to be the contemporary equivalent of the mail-order services of the 19th century: unthinkable before the technology and unremarkable afterwards.
We’ve taken a long time to get here, but we’ve made it. Now all we have to do is figure out how to deal with it. Which is why I say that the arrival, not the journey, matters.
This morning’s Observer column:
“Data is the new oil,” declared Clive Humby, a mathematician who was the genius behind the Tesco Clubcard. This insight was later elaborated by Michael Palmer of the Association of National Advertisers. “Data is just like crude [oil],” said Palmer. “It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc to create a valuable entity that drives profitable activity; so must data be broken down, analysed for it to have value.”
There was just one thing wrong with the metaphor. Oil is a natural resource; it has to be found, drilled for and pumped from the bowels of the Earth. Data, in contrast, is a highly unnatural resource. It has to be created before it can be extracted and refined. Which raises the question of who, exactly, creates this magical resource? Answer: you and me…
Last Sunday’s Observer column:
Last week, the European commission, that bete noire of Messrs Gove, Johnson & co, resumed its attack on Google. On Wednesday, Eurocrats filed formal charges against the company, accusing it of abusing its dominance of Android, currently the world’s most-used mobile operating system. This new charge comes on top of an earlier case in which the commission accused Google of abusing its overwhelming dominance of the web-search market in Europe in order to favour its own enterprises over those of competitors.
This could be a big deal. If the commission decides that Google has indeed broken European competition law, then it can levy fines of up to 10% of the company’s annual global revenue for each of the charges. Given that Google’s global sales last year came to nearly $75bn, we’re talking about a possible fine of $15bn (£10.5bn). Even by Google standards, that’s serious money. And it’s not exactly an idle threat: in the past, the Eurocrats have taken more than a billion dollars off both Microsoft and Intel for such violations.
To those of us who follow these things, there’s a whiff of Back to the Future here.