From Mary Meeker’s ‘Internet Trends’ report, 2019.
This morning’s Observer column:
Last Monday, at Apple’s Worldwide Developers Conference, the company’s head of software engineering, Craig Federighi, announced that it was terminating iTunes. In one way, the only surprising thing was that Apple had taken so long to reach that decision. It’s been obvious for years that iTunes had become baroquely bloated, a striking anomaly for a company that prides itself on elegant and functional design. So the decision to split the software into three functional units – dealing with music, podcasts and TV apps – seemed both logical and long overdue. But for internet users d’un certain âge (including this columnist) the announcement triggered reflections on personal and tech history.
There’s been music on the internet for a long time. The advent of the compact disc in the early 1980s meant that recorded music went from being analogue to digital. But CD music files were vast – a single CD came in at about 700MB – and for most people, the network was slow. So transferring music from one location to another was not a practical proposition. But then, in 1993, researchers at the Fraunhofer Institute in Germany came up with a way of shrinking audio files by a factor of 10 or more, so that a three-minute music track could be reduced to 3MB without much perceptible loss in quality…
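Those figures are easy to sanity-check. Here's a back-of-the-envelope sketch in Python (the 128 kbps MP3 bitrate is my assumption, as the typical encoding rate of the period; the CD parameters are the Red Book standard):

```python
# Back-of-the-envelope check of the compression figures quoted above.
# CD audio (Red Book): 44,100 samples/s, 2 channels, 16 bits per sample.
# Assumed MP3 encoding rate: 128 kbit/s, typical for the era.

CD_BITRATE = 44_100 * 2 * 16   # bits per second of raw CD audio
MP3_BITRATE = 128_000          # bits per second (assumed)

track_seconds = 3 * 60         # a three-minute track

cd_mb = CD_BITRATE * track_seconds / 8 / 1_000_000
mp3_mb = MP3_BITRATE * track_seconds / 8 / 1_000_000

print(f"Raw CD audio:       {cd_mb:.1f} MB")          # ~31.8 MB
print(f"128 kbps MP3:       {mp3_mb:.1f} MB")         # ~2.9 MB
print(f"Compression factor: {cd_mb / mp3_mb:.1f}x")   # ~11x
```

The output matches the column's figures: roughly 32MB of raw audio shrinks to about 3MB, a factor of eleven.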
“The third and most vital lesson of the Renaissance is that when things change more quickly, people get left behind more quickly. The Renaissance ended because the first era of global commerce and information revolution led to widening uncertainty and anxiety. The printing revolution provided populists with the means to challenge old authorities and channel the discontent that arose from the highly uneven distribution of the gains and losses from newly globalising commerce and accelerating technological change.
The Renaissance teaches us that progress cannot be taken for granted. The faster things change, the greater the number of people left behind. And the greater their anger.
Sound familiar? And then…
Renaissance Florence was famously liberal-minded until a loud demagogue filled in the majority’s silence with rage and bombast. The firebrand preacher Girolamo Savonarola tapped into the fear that citizens felt about the pace of change and growing inequality, as well as the widespread anger toward the rampant corruption of the elite. Seizing on the new capacity for cheap print, he pioneered the political pamphlet, offering his followers the prospect of an afterlife in heaven while their opponents were condemned to hell. His mobilisation of indignation — combined with straightforward thuggery — deposed the Medicis, following which he launched a campaign of public purification, symbolised by the burning of books, cosmetics, jewellery, musical instruments and art, culminating in the 1497 Bonfire of the Vanities”.
Now of course history doesn’t really repeat itself. Still… some of this seems eerily familiar.
Also co-author of *The Age of Discovery: Navigating the Risks and Rewards of Our New Renaissance*. ↩
Barely a week goes by without government ministers or MPs warning Facebook, Twitter, Google, YouTube (a subsidiary of Google), Instagram or WhatsApp (both owned by Facebook) that they must do more to prevent radical or dangerous ideas being spread. A “crackdown” is always just around the corner to protect users from harmful content.
Oddly, MPs never wonder whether they might be victims of the same effects from these tools, which they, too, use all the time. Why not, though? We keep hearing that it’s a big problem for people to be repeatedly exposed to radical ideas and outspoken extremists. It’s just that for MPs, those tend to be found within their own parties rather than on obscure YouTube channels.
“When it’s impossible to distinguish facts from fraud, actual facts lose their power. Dissidents can end up putting their lives on the line to post a picture documenting wrongdoing only to be faced with an endless stream of deliberately misleading claims: that the picture was taken 10 years ago, that it’s from somewhere else, that it’s been doctored.
As we shift from an era when realistic fakes were expensive and hard to create to one where they’re cheap and easy, we will inevitably adjust our norms. In the past, it often made sense to believe something until it was debunked; in the future, for certain information or claims, it will start making sense to assume they are fake. Unless they are verified.”
Photography (in the technical rather than aesthetic sense) was once all about the laws of physics — wavelengths of different kinds of light, quality of lenses, refractive indices, coatings, scattering, colour rendition, depth of field, etc. And initially, when mobile phones started to have cameras, those laws bore down heavily on them: they had plastic lenses and tiny sensors with poor resolution and light-gathering properties. So the pictures they produced might be useful as mementoes, but were of no practical use to anyone interested in the quality of images. And given the constraints of size and cost imposed by the economics of handset manufacture and marketing there seemed to be nothing much that anyone could do about that.
But this view applied only to hardware. What we overlooked was that smartphones were rather powerful handheld computers, and that it was possible to write software that could augment — or compensate for — the physical limitations of their cameras.
I vividly remember the first time this occurred to me. It was a glorious late afternoon years ago in Provence and we were taking a friend on a drive round the spectacular Gorges du Verdon. About half-way round we stopped for a drink and stood contemplating the amazing views in the blazing sunlight. I reached for my (high-end) digital camera and fruitlessly struggled (by bracketing exposures) to take some photographs that could straddle the impossibly wide dynamic range of the lighting in the scene.
Then, almost as an afterthought, I took out my iPhone, realised that I had downloaded an HDR app, and so used that. The results were flawed in terms of colour balance, but it was clear that the software had been able to manage the dynamic range that had eluded my conventional camera. It was my introduction to what has become known as computational photography — a technology that has come on in leaps and bounds ever since that evening in Provence. Computational photography, as Benedict Evans puts it in a perceptive essay, “Cameras that Understand”, means that
“as well as trying to make a better lens and sensor, which are subject to the rules of physics and the size of the phone, we use software (now, mostly, machine learning or ‘AI’) to try to get a better picture out of the raw data coming from the hardware. Hence, Apple launched ‘portrait mode’ on a phone with a dual-lens system but uses software to assemble that data into a single refocused image, and it now offers a version of this on a single-lens phone (as did Google when it copied this feature). In the same way, Google’s new Pixel phone has a ‘night sight’ capability that is all about software, not radically different hardware. The technical quality of the picture you see gets better because of new software as much as because of new hardware.”

Most of how this is done is already — or soon will be — invisible to the user. Just as HDR used to involve launching a separate app, it’s now baked into many smartphone cameras, which do it automatically. Evans assumes that much the same will happen with ‘portrait mode’ and ‘night sight’: all that stuff will be baked into later releases of the cameras.
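To get a feel for what that HDR app was doing, it helps to see how little code exposure merging now takes. Here is a minimal sketch using OpenCV's Mertens exposure-fusion implementation; the filenames are hypothetical, and it assumes three frames of the same scene bracketed across exposures:

```python
# A minimal sketch of exposure fusion, the technique behind simple HDR apps:
# several bracketed frames of one scene are merged into a single image
# that keeps detail in both the shadows and the highlights.
# Assumes OpenCV (pip install opencv-python); the filenames are hypothetical.
import cv2

paths = ["under.jpg", "normal.jpg", "over.jpg"]  # bracketed exposures
frames = [cv2.imread(p) for p in paths]

# Mertens fusion weights each pixel by contrast, saturation and
# well-exposedness; unlike classic HDR pipelines it needs no exposure
# metadata and no separate tone-mapping step.
fused = cv2.createMergeMertens().process(frames)  # float result, ~[0, 1]

cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```

That independence from exposure metadata is part of why the technique suits phones so well: the frames can be grabbed in a quick burst and merged on-device.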
“This will probably”, writes Evans,
“also go several levels further in, as the camera gets better at working out what you’re actually taking a picture of. When you take a photo on a ski slope it will come out perfectly exposed and colour-balanced because the camera knows this is snow and adjusts correctly. Today, portrait mode is doing face detection as well as depth mapping to work out what to focus on; in the future, it will know which of the faces in the frame is your child and set the focus on them”.

So we’re heading for a point at which one will have to work really hard to take a (technically) imperfect photo. Which leads one to ask: what’s next?
Evans thinks that a clue lies in the fact that people increasingly use their smartphone cameras as visual notebooks — taking pictures of recipes, conference schedules, train timetables, books and stuff we’d like to buy. Machine learning, he surmises, can do a lot with those kinds of images.
“If there’s a date in this picture, what might that mean? Does this look like a recipe? Is there a book in this photo and can we match it to an Amazon listing? Can we match the handbag to Net-a-Porter? And so you can imagine a suggestion from your phone: ‘do you want to add the date in this photo to your diary?’ in much the same way that today email programs extract flights or meetings or contact details from emails.”
Apparently Google Lens is already doing something like this on Android phones.
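The diary suggestion Evans imagines is straightforward to prototype crudely. Here is a rough sketch using the pytesseract OCR wrapper and dateutil; the filename is hypothetical, and a production system would of course use far more robust vision models than plain OCR:

```python
# A crude prototype of "do you want to add the date in this photo to your
# diary?": run OCR over a snapshot, then look for anything that parses as
# a date. Assumes the Tesseract binary is installed (pytesseract wraps it);
# "poster.jpg" is a hypothetical photo of, say, a conference schedule.
import pytesseract
from PIL import Image
from dateutil import parser

text = pytesseract.image_to_string(Image.open("poster.jpg"))

# Scan each OCR'd line for a parseable date; fuzzy=True lets dateutil
# pick a date out of surrounding words ("Doors open 14 June 2019").
for line in text.splitlines():
    if not line.strip():
        continue
    try:
        when = parser.parse(line, fuzzy=True)
    except (ValueError, OverflowError):
        continue
    print(f"Possible diary entry: {when:%Y-%m-%d} (from {line.strip()!r})")
```

Even something this crude surfaces plausible calendar candidates; the hard part is knowing which of them, if any, the user actually wants.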
From Farhad Manjoo:
I’ve significantly cut back how much time I spend on Twitter, and — other than to self-servingly promote my articles and engage with my readers — I almost never tweet about the news anymore.
I began pulling back last year — not because I’m morally superior to other journalists but because I worried I was weaker.
I’ve been a Twitter addict since Twitter was founded. For years, I tweeted every ingenious and idiotic thought that came into my head, whenever, wherever; I tweeted from my wedding and during my kids’ births, and there was little more pleasing in life than hanging out on Twitter poring over hot news as it broke.
But Twitter is not that carefree clubhouse for journalism anymore. Instead it is the epicenter of a nonstop information war, an almost comically undermanaged gladiatorial arena where activists and disinformation artists and politicians and marketers gather to target and influence the wider media world.
And journalists should stop paying so much attention to what goes on in this toxic information sewer.
Interesting column by Jack Shafer on Politico:
Setting aside for a moment the fact that Trump and Ocasio-Cortez don’t agree on anything, the two New Yorkers with Queens connections have a lot in common. Both made their political marks as outsiders, collapsing traditional power structures from within to become political celebrities. Both ran thrifty campaigns, substituting news coverage for advertising. Trump proved at the ballot box that Republican voters held no real allegiance to the usual conservative stands on trade, immigration and foreign policy. Ocasio-Cortez likewise toppled a tenured insider, Joe Crowley, in a primary by catching him coasting.
Both command Twitter brigades in the millions—Ocasio-Cortez 2.63 million (up from 379,000 in July) and Trump 57.7 million—and use their audiences to delight their friends and aggravate their enemies. Ensconced in Washington, the pair have sustained their newsworthiness by jousting against their opposition and their putative allies, and this tension adds to their media appeal. On any given day, there are probably as many high-ranking members of their own party gunning for them as high rankers from the other side of the aisle. From mid-December to mid-January, reported Axios, Ocasio-Cortez generated 14 million interactions (retweets plus likes), twice as many as Sen. Kamala Harris, and almost six times as many as Speaker Nancy Pelosi and Sen. Chuck Schumer. To give you a sense of scale, CNN generated only 3 million interactions in the interval.
He also reveals something interesting about Trump — that he immediately spotted the significance of AOC:
Last August, Trump told Bloomberg News of his first encounter with her in his usual rambling style, and it’s unusually flattering:
“So I’m watching television, and I see this young woman on television. I say, ‘Who’s that?’ ‘Oh, she’s campaigning against Joe.’
“You know who Joe is, right? So Queens. Crowley. So I say, ‘Ah, let me just watch her for a second’— wonderful thing, TiVo. So you go back —‘huh, tell him he’s going to lose.’”
As they say, it takes one to know one.
This morning’s Observer column:
Artificial intelligence (AI) is a term that is now widely used (and abused), loosely defined and mostly misunderstood. Much the same might be said of, say, quantum physics. But there is one important difference, for whereas quantum phenomena are not likely to have much of a direct impact on the lives of most people, one particular manifestation of AI – machine-learning – is already having a measurable impact on most of us.
The tech giants that own and control the technology have plans to exponentially increase that impact and to that end have crafted a distinctive narrative. Crudely summarised, it goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.”
Critical analysis of this narrative suggests that the formula for creating it involves mixing one part fact with three parts self-serving corporate cant and one part tech-fantasy emitted by geeks who regularly inhale their own exhaust…
Insightful piece in The Atlantic:
The most recent controversy provides the perfect metaphor for Trump’s part-symbiotic, part-parasitic relationship with the media: infection. In epidemiology, a virus cannot multiply on its own. First, it must find a host, whose cellular machinery it commandeers to reproduce. For a virus, all distribution—all amplification—is infection.
So it is for Trump. The president’s conspiratorial language is an odious virus that has found a variety of hosts in the U.S. media ecosystem. The traditional news media amplify his words for a variety of reasons, including newsworthiness (he is, after all, the president), easy ratings (cable-news audiences have soared in his term), and old-fashioned peer pressure (the segment producer’s lament: “If everybody else is carrying Trump, shouldn’t we?”).
But a virus doesn’t just borrow a host’s cellular factory to reproduce; it often destroys the host in the process. So, too, does the president seek to destroy the traditional news media that have often amplified his messages…
So why do editors publish headlines which essentially just paraphrase Trump’s tweets? Especially when they know that most readers only read (and remember) the headline.