Lovely, perceptive essay by Benedict Evans. Here’s how it opens…
When Nokia people looked at the first iPhone, they saw a not-great phone with some cool features that they were going to build too, being produced at a small fraction of the volumes they were selling. They shrugged. “No 3G, and just look at the camera!”
When many car company people look at a Tesla, they see a not-great car with some cool features that they’re going to build too, being produced at a small fraction of the volumes they’re selling. “Look at the fit and finish, and the panel gaps, and the tent!”
The Nokia people were terribly, terribly wrong. Are the car people wrong? We hear that a Tesla is ‘the new iPhone’ – what would that mean?
This is partly a question about Tesla, but it’s more interesting as a way to think about what happens when ‘software eats the world’ in general, and when tech moves into new industries. How do we think about whether something is disruptive? If it is, who exactly gets disrupted? And does that disruption mean one company wins in the new world? Which one?
The rise of the M.B.A. has occurred during precisely the era in which, as anyone who follows business magazines is aware, the content of graduate business training has come under increasing attack. “We have created a monster,” H. Edward Wrapp, of the University of Chicago’s business school, wrote in 1980, in Dun’s Review. “The business schools have done more to insure the success of the Japanese and West German invasion of America than any one thing I can think of.” “I’d close every one of the graduate schools of business,” Michael Thomas, an investment banker and author, wrote in The New York Times.
The specific case against business schools is that they have neglected certain skills and outlooks that are essential to America’s commercial renaissance while inculcating values that can do harm. The traditional strength of business education has been to provide students with a broad view of many varied business functions—marketing, finance, production, and so forth. But like sociology and political science, business training has gotten all wrapped up in mathematical models and such ideas as can be boiled down to numbers. This shift has led schools to play down two fundamental but hard-to-quantify business imperatives: creating the conditions that will permit the design and production of high-quality goods, and waging the constant struggle to inspire, cajole, discipline, lead, and in general persuade employees to work in common cause.
Every time I see a company buying back its shares rather than investing in R&D and product development, I think of this.
But Fallows’s essay is about much more than this. He sees the rise of ‘credentialism’ as a process that had three roots:
The conversion of jobs into “professions” (see, for example, Mark Twain’s account — in Life on the Mississippi — of the way riverboat captains contrived to form themselves into a professional ‘guild’ to exclude outsiders and incomers). Once, “anyone could declare himself a doctor or a teacher or a lawyer, and the choice about who prospered and who failed would be left to ‘the market’, including people who died after trying to cure their cholera with snake oil. Afterward, those who wanted to enter the professions had to go to school, and once they had their credentials they enjoyed a near-tenured status they had previously been denied.” Business managers, says Fallows, began ‘professionalizing’ about the same time that the other groups did, but their alliance with educational institutions developed more slowly. The new body of knowledge that turned business into a ‘profession’ was created by the rise of huge, complex, integrated corporations. By 1910 graduate schools of business had been established at Dartmouth and Harvard.
The invention of IQ tests and the dawning of the idea that ‘intelligence’ was a single, real, measurable, and unchanging trait that severely limited each person’s occupational choice. IQ testing was the essential tool for replacing nepotism and corruption with a meritocracy. It also marked the beginning of the psychometrics which are now the curse of surveillance capitalism.
The use of government power to influence education by creating different educational “tracks” and fostering vocational — as well as academic — schools, thereby channelling people toward certain occupations which essentially determined the degree of social mobility they would enjoy in life.
This is a terrific, illuminating essay which takes the long view of the last century or so, and in doing so helps to explain how we arrived at our current predicament.
“Silicon Valley still actually makes things, but less and less. We had an economy that was based on making things first, making chips and then computers, and then making bits of software, and then at some point we started getting everything for free; in quotes, “free.” And it stopped being an economy that made things. It became an economy where people made money by extracting things, by mining data.
So it flipped from a making economy to an extraction economy, and we have all the dysfunction that you would see in a mining site in the third world. Mining economies, extraction economies, are kind of corrupt economies because one person or one company ends up controlling everything.”
Fascinating, rambling interview with stories that sometimes bring one up short. Worth reading (or listening to) in full.
The FT‘s Edward Luce took Henry Kissinger out to lunch. Fascinating interview (behind a paywall) in the weekend edition of the paper. Luce tried manfully to get the old growler to talk about Trump — without much success. But there are two gems in his report.
One was Kissinger’s view on how the world looks to Putin. He embarks, reports Luce,
on a disquisition about Russia’s “almost mystical” tolerance for suffering. His key point is that the west wrongly assumed in the years before Putin annexed Crimea that Russia would accept the west’s rules-based order. Nato misread Russia’s deep-seated craving for respect. “The mistake Nato has made is to think that there is a sort of historic evolution that will march across Eurasia and not to understand that somewhere on that march it will encounter something very different to a Westphalian entity. And for Russia this is a challenge to its identity.”
So, asks Luce, “do you mean that we provoked Putin?” To which Kissinger replies “I do not think Putin is a character like Hitler. He comes out of Dostoyevsky”.
The second gem comes when — eventually — Luce manages to coax something about Trump out of his enigmatic guest.
“I think Trump may be one of those figures in history who appears from time to time to mark the end of an era and to force it to give up its old pretences. It doesn’t necessarily mean that he knows this, or that he is considering any great alternative. It could just be an accident.”
In laying out his vision of betterment in Enlightenment Now, Pinker confronts alternative trends and looming threats for progress only in order to brush them off. He does not take seriously the risk of major catastrophes, such as the collapse of a recent era of peace or the outbreak of a global pandemic, which he believes is easy to magnify beyond reason. As for environmental degradation, humanity will surely find a way to counteract this in time. “As the world has gotten richer,” Pinker explains, “nature has begun to rebound”—as if the failure of a few prophecies of ecological disaster to come to pass on schedule means the planet is infinitely resilient. Once he gets around to acknowledging that climate change is an actual problem, Pinker spends much of his time attacking “climate justice warriors” for their anti-capitalist hysteria.
Lots more in that sceptical vein. Worth reading in full.
I didn’t know Stephen Hawking personally, though I often saw him around and in my early years in Cambridge (late-1960s, long before he was famous) my lab was in the same complex of buildings in Mill Lane as the department where he worked. The buildings had no ramps for wheelchair access at that time, so sometimes I or my fellow-students would help his wife to lift his (non-motorised) wheelchair up the steps into the Department of Applied Mathematics and Theoretical Physics (DAMTP).
At the time, of course, we had no idea of how important he was destined to be. The only clue was when one went into the DAMTP tea-room at 10.30am or 3.30pm (scientific departments have a tradition of gathering for morning coffee and afternoon tea) his wheelchair was always surrounded by a group of devoted graduate students, some of whom acted as his interpreter and wrote the equations on a blackboard when he was giving a lecture. It was clear then that — at least in the rarefied world of cosmologists — he was already a real celebrity.
One of those students was Nathan Myhrvold, who went on to become the Chief Technology Officer of Microsoft and a close colleague of Bill Gates. My hunch is that Nathan was the link that persuaded Gates to endow the Gates Scholarships (Cambridge’s version of Oxford’s Rhodes Scholars scheme).
For me, the most striking moment in Hawking’s career was when he was elected to the Lucasian Professorship of Mathematics. This was the professorial chair that had once been occupied by Isaac Newton, and it seemed an appropriate recognition of the significance of Hawking’s work.
Indeed, watching Hawking in public and marvelling at his astonishing and (to me) inaccessible brilliance, it was Newton who came to mind, and the statue of him in the chapel of Trinity College, of which Wordsworth wrote in The Prelude:
And from my pillow, looking forth by light
Of moon or favouring stars, I could behold
The Antechapel where the Statue stood
Of Newton, with his prism and his silent face,
The marble index of a Mind for ever
Voyaging through strange seas of Thought, alone.
Many years ago the cultural critic Neil Postman predicted that the future of humanity lay somewhere in the area between the dystopian nightmares of two English writers – George Orwell and Aldous Huxley. Orwell believed that we would be destroyed by the things we fear – surveillance and thought-control; Huxley thought that our undoing would be the things that delight us – that our rulers would twig that entertainment is more efficient than coercion as a means of social control.
Then we invented the internet, a technology that – it turned out – gave us both nightmares at once: comprehensive surveillance by states and corporations on the one hand; and, on the other, a strange kind of passive addiction to devices, apps and services which, like the drug soma in Huxley’s Brave New World, possess “all the advantages of Christianity and alcohol and none of their defects”.
The great irony, of course, is that not all of this was inevitable…
Historians are such spoilsports: they undermine stories that are too good to be checked. Consider this distressing piece by Anne Goldgar:
Tulip mania was irrational, the story goes. Tulip mania was a frenzy. Everyone in the Netherlands was involved, from chimney-sweeps to aristocrats. The same tulip bulb, or rather tulip future, was traded sometimes 10 times a day. No one wanted the bulbs, only the profits – it was a phenomenon of pure greed. Tulips were sold for crazy prices – the price of houses – and fortunes were won and lost. It was the foolishness of newcomers to the market that set off the crash in February 1637. Desperate bankrupts threw themselves in canals. The government finally stepped in and ceased the trade, but not before the economy of Holland was ruined.
Tulip mania wasn’t irrational. Tulips were a newish luxury product in a country rapidly expanding its wealth and trade networks. Many more people could afford luxuries – and tulips were seen as beautiful, exotic, and redolent of the good taste and learning displayed by well-educated members of the merchant class. Many of those who bought tulips also bought paintings or collected rarities like shells.
Prices rose, because tulips were hard to cultivate in a way that brought out the popular striped or speckled petals, and they were still rare. But it wasn’t irrational to pay a high price for something that was generally considered valuable, and for which the next person might pay even more.
And it wasn’t a ‘frenzy’ either.
Tulip mania wasn’t a frenzy, either. In fact, for much of the period trading was relatively calm, located in taverns and neighbourhoods rather than on the stock exchange. It also became increasingly organised, with companies set up in various towns to grow, buy, and sell, and committees of experts emerged to oversee the trade. Far from bulbs being traded hundreds of times, I never found a chain of buyers longer than five, and most were far shorter.
Oh – and she found no records of anyone throwing themselves into canals.
Sigh. The slaughter of a beautiful meme by ugly facts.
My friend Quentin has — deservedly — been given a Lifetime Achievement Award (called a Lovie after Ada Lovelace) for inventing the webcam. Here’s the presentation speech by Sophie Wilson (who designed the instruction set for the ARM processor and so also helped to shape our networked world):
And here is Quentin’s acceptance speech. He must have been moved by the award, because he briefly blanks as he’s getting into his stride. Normally, he’s the most fluent speaker I know. But note his graceful and witty recovery, once he’s found his notes.
This is IMHO long-overdue recognition for a technology pioneer.
One of my favourite books is The Education of Henry Adams (published in 1918). It’s an extended meditation, written in old age by a scion of one of Boston’s elite families, on how the world had changed in his lifetime, and how his formal education had not prepared him for the events through which he had lived. This education had been grounded in the classics, history and literature, and had rendered him incapable, he said, of dealing with the impact of science and technology.
Re-reading Adams recently left me with the thought that there is now an opening for a similar book, The Education of Mark Zuckerberg. It would have an analogous theme, namely how the hero’s education rendered him incapable of understanding the world into which he was born. For although he was supposed to be majoring in psychology at Harvard, the young Zuckerberg mostly took computer science classes until he started Facebook and dropped out. And it turns out that this half-baked education has left him bewildered and rudderless in a culturally complex and politically polarised world…