Why the blogosphere matters

This morning’s Observer column:

Last Monday was a significant anniversary in the evolution of the web. It was 25 years to the day since the first serious blog appeared. It was called Scripting News and the url was (and remains) scripting.com. Its author is a software wizard named Dave Winer, who’s updated it every day since 1994. And despite its wide readership, it has never run ads. This may be partly because Dave doesn’t need the money (he sold his company to Symantec in 1987 for a substantial sum) but it’s mainly because he didn’t want to compete for the attention of his readers. “I see running ads on my blog,” he once wrote, “as picking up loose change that’s fallen out of people’s pockets. I want to hit a home run. I’m swinging for the fences. Not picking up litter.”

When some innovators cash out big, as Winer did, they more or less retire – play golf, buy a yacht and generally hang out in luxury. Not so Dave. He has a long string of innovations to his name, including outliner and blogging software, RSS syndication, the outline processor markup language OPML and podcasting, of which he was a pioneer.

And his daily blog at scripting.com continues to be a must-read for anyone interested in the intersection between technology and politics. Winer has a quirky, perceptive, liberal and sometimes contrarian take on just about anything that appears on his radar. He is the nearest thing the web has to an international treasure.

He’s also a reminder of the importance of blogging, a phenomenon that has been overshadowed as social media exploded and sucked much of the oxygen out of our information environment…

Read on

The dark underbelly of social media

My Observer review of Behind the Screen, Sarah T. Roberts’s remarkable exploration of the exploitative world of content ‘moderation’.

The best metaphor for the net is to think of it as a mirror held up to human nature. All human life really is there. There’s no ideology, fetish, behaviour, obsession, perversion, eccentricity or fad that doesn’t find expression somewhere online. And while much of what we see reflected back to us is uplifting, banal, intriguing, harmless or fascinating, some of it is truly awful, for the simple reason that human nature is not only infinitely diverse but also sometimes unspeakably cruel.

In the early days of the internet and, later, the web, this didn’t matter so much. But once cyberspace was captured by a few giant platforms, particularly Google, YouTube, Twitter and Facebook, then it became problematic. The business models of these platforms depended on encouraging people to upload content to them in digital torrents. “Broadcast yourself”, remember, was once the motto of YouTube.

And people did – as they slit the throats of hostages in the deserts of Arabia, raped three-year-old girls, shot an old man in the street, firebombed the villages of ethnic minorities or hanged themselves on camera…

All of which posed a problem for the social media brands, which liked to present themselves as facilitators of creativity, connectivity and good clean fun, an image threatened by the tide of crud that was coming at them. So they started employing people to filter and manage it. They were called “moderators” and for a long time they were kept firmly under wraps, so that nobody knew about them.

That cloak of invisibility began to fray as journalists and scholars started to probe this dark underbelly of social media…

Read on

In the West, Facebook is becoming an older person’s network

This is interesting.

All the bad press about Facebook might be catching up to the company. New numbers from Edison Research show an estimated 15 million fewer users in the United States compared to 2017. The biggest drop is in the very desirable 12- to 34-year-old group. Marketplace Tech got a first look at Edison’s latest social media research. It revealed almost 80 percent of people in the U.S. are posting, tweeting or snapping, but fewer are going to Facebook.

Farewell iTunes, hello Music

This morning’s Observer column:

Last Monday, at Apple’s Worldwide Developers Conference, the company’s head of software engineering, Craig Federighi, announced that it was terminating iTunes. In one way, the only surprising thing was that Apple had taken so long to reach that decision. It’s been obvious for years that iTunes had become baroquely bloated, a striking anomaly for a company that prides itself on elegant and functional design. So the decision to split the software into three functional units – dealing with music, podcasts and TV apps – seemed both logical and long overdue. But for internet users d’un certain âge (including this columnist) the announcement triggered reflections on personal and tech history.

There’s been music on the internet for a long time. The advent of the compact disc in the early 1980s meant that recorded music went from being analogue to digital. But CD music files were vast – a single CD came in at about 700MB – and for most people, the network was slow. So transferring music from one location to another was not a practical proposition. But then, in 1993, researchers at the Fraunhofer Institute in Germany came up with a way of shrinking audio files by a factor of 10 or more, so that a three-minute music track could be reduced to 3MB without much perceptible loss in quality…

Read on
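
A quick back-of-the-envelope calculation shows where the factor of 10 and the 3MB figure come from. The sketch below is purely illustrative; the 128kbit/s MP3 bitrate is an assumption (a typical rate for early encoders), not a figure from the column.

```python
# Back-of-the-envelope check of the MP3 compression claim.
# CD audio: 44,100 samples/s, 16 bits per sample, 2 channels.
CD_BITRATE = 44_100 * 16 * 2        # = 1,411,200 bits per second
MP3_BITRATE = 128_000               # assumed typical early MP3 rate, bits per second
TRACK_SECONDS = 3 * 60              # a three-minute track

cd_bytes = CD_BITRATE * TRACK_SECONDS / 8     # ~31.8 MB uncompressed
mp3_bytes = MP3_BITRATE * TRACK_SECONDS / 8   # ~2.9 MB encoded

print(f"CD: {cd_bytes / 1e6:.1f} MB, MP3: {mp3_bytes / 1e6:.1f} MB, "
      f"ratio ~{cd_bytes / mp3_bytes:.0f}x")
# -> CD: 31.8 MB, MP3: 2.9 MB, ratio ~11x
```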

Lessons of history

From a remarkable essay about Leonardo da Vinci by historian Ian Goldin in this weekend’s Financial Times, sadly behind a paywall:

“The third and most vital lesson of the Renaissance is that when things change more quickly, people get left behind more quickly. The Renaissance ended because the first era of global commerce and information revolution led to widening uncertainty and anxiety. The printing revolution provided populists with the means to challenge old authorities and channel the discontent that arose from the highly uneven distribution of the gains and losses from newly globalising commerce and accelerating technological change.

The Renaissance teaches us that progress cannot be taken for granted. The faster things change, the greater the risk of people being left behind. And the greater their anger.”

Sound familiar? And then…

“Renaissance Florence was famously liberal-minded until a loud demagogue filled in the majority’s silence with rage and bombast. The firebrand preacher Girolamo Savonarola tapped into the fear that citizens felt about the pace of change and growing inequality, as well as the widespread anger toward the rampant corruption of the elite. Seizing on the new capacity for cheap print, he pioneered the political pamphlet, offering his followers the prospect of an afterlife in heaven while their opponents were condemned to hell. His mobilisation of indignation — combined with straightforward thuggery — deposed the Medicis, following which he launched a campaign of public purification, symbolised by the burning of books, cosmetics, jewellery, musical instruments and art, culminating in the 1497 Bonfire of the Vanities”.

Now of course history doesn’t really repeat itself. Still… some of this seems eerily familiar.

WhatsApp groups and Brexit extremism

Charles Arthur has a perceptive piece in the Guardian asking whether WhatsApp is pushing UK MPs towards what Cass Sunstein calls “enclave extremism”.

Barely a week goes by without government ministers or MPs warning Facebook, Twitter, Google, YouTube (a subsidiary of Google), Instagram or WhatsApp (both owned by Facebook) that they must do more to prevent radical or dangerous ideas being spread. A “crackdown” is always just around the corner to protect users from harmful content.

Oddly, MPs never wonder whether they might be victims of the same effects of these tools that they, too, use all the time. Why not, though? We keep hearing that it’s a big problem for people to be repeatedly exposed to radical ideas and outspoken extremists. It’s just that for MPs, those tend to be within their own parties rather than on obscure YouTube channels.

Yep.

Quote of the day

“When it’s impossible to distinguish facts from fraud, actual facts lose their power. Dissidents can end up putting their lives on the line to post a picture documenting wrongdoing only to be faced with an endless stream of deliberately misleading claims: that the picture was taken 10 years ago, that it’s from somewhere else, that it’s been doctored.

As we shift from an era when realistic fakes were expensive and hard to create to one where they’re cheap and easy, we will inevitably adjust our norms. In the past, it often made sense to believe something until it was debunked; in the future, for certain information or claims, it will start making sense to assume they are fake. Unless they are verified.”

Zeynep Tufekci

After the perfect picture, what?

Photography (in the technical rather than aesthetic sense) was once all about the laws of physics — wavelengths of different kinds of light, quality of lenses, refractive indices, coatings, scattering, colour rendition, depth of field, etc. And initially, when mobile phones started to have cameras, those laws bore down heavily on them: they had plastic lenses and tiny sensors with poor resolution and light-gathering properties. So the pictures they produced might be useful as mementoes, but were of no practical use to anyone interested in the quality of images. And given the constraints of size and cost imposed by the economics of handset manufacture and marketing, there seemed to be nothing much that anyone could do about that.

But this view applied only to hardware. The thing we overlooked is that smartphones were rather powerful handheld computers, and it was possible to write software that could augment — or compensate for — the physical limitations of the cameras.

I vividly remember the first time this occurred to me. It was a glorious late afternoon years ago in Provence and we were taking a friend on a drive round the spectacular Gorges du Verdon. About half-way round we stopped for a drink and stood contemplating the amazing views in the blazing sunlight. I reached for my (high-end) digital camera and fruitlessly struggled (by bracketing exposures) to take some photographs that could straddle the impossibly wide dynamic range of the lighting in the scene.

Then, almost as an afterthought, I took out my iPhone, realised that I had downloaded an HDR app, and so used that. The results were flawed in terms of colour balance, but it was clear that the software had been able to manage the dynamic range that had eluded my conventional camera. It was my introduction to what has become known as computational photography — a technology that has come on in leaps and bounds ever since that evening in Provence. Computational photography, as Benedict Evans puts it in a perceptive essay, “Cameras that Understand”, means that

“as well as trying to make a better lens and sensor, which are subject to the rules of physics and the size of the phone, we use software (now, mostly, machine learning or ‘AI’) to try to get a better picture out of the raw data coming from the hardware. Hence, Apple launched ‘portrait mode’ on a phone with a dual-lens system but uses software to assemble that data into a single refocused image, and it now offers a version of this on a single-lens phone (as did Google when it copied this feature). In the same way, Google’s new Pixel phone has a ‘night sight’ capability that is all about software, not radically different hardware. The technical quality of the picture you see gets better because of new software as much as because of new hardware.”

Most of how this is done is already — or soon will be — invisible to the user. Just as HDR used to involve launching a separate app, it’s now baked into many smartphone cameras, which do it automatically. Evans assumes that much the same will happen with ‘portrait mode’ and ‘night sight’: all that stuff will be baked into later releases of the cameras.
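
As an aside, the core move those early HDR apps made, fusing bracketed exposures in software, can be sketched in a few lines. This is a deliberately naive illustration, assuming three already-aligned frames supplied as NumPy arrays scaled to the range 0 to 1; real pipelines add alignment, de-ghosting and tone-mapping on top.

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Naive exposure fusion: blend bracketed shots of the same scene.

    frames: list of float arrays of shape (H, W, 3), values in [0, 1].
    Each pixel in each frame is weighted by how well-exposed it is
    (close to mid-grey), then the frames are averaged with those weights,
    so highlight detail comes from the darker frame and shadow detail
    from the brighter one.
    """
    stack = np.stack(frames)                                    # (N, H, W, 3)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))  # well-exposedness
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8        # normalise per pixel
    return (weights * stack).sum(axis=0)

# e.g. fused = fuse_exposures([underexposed, normal, overexposed])
```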

“This will probably”, writes Evans,

“also go several levels further in, as the camera gets better at working out what you’re actually taking a picture of. When you take a photo on a ski slope it will come out perfectly exposed and colour-balanced because the camera knows this is snow and adjusts correctly. Today, portrait mode is doing face detection as well as depth mapping to work out what to focus on; in the future, it will know which of the faces in the frame is your child and set the focus on them”.

So we’re heading for a point at which one will have to work really hard to take a (technically) imperfect photo. Which leads one to ask: what’s next?

Evans thinks that a clue lies in the fact that people increasingly use their smartphone cameras as visual notebooks — taking pictures of recipes, conference schedules, train timetables, books and stuff we’d like to buy. Machine learning, he surmises, can do a lot with those kinds of images.

“If there’s a date in this picture, what might that mean? Does this look like a recipe? Is there a book in this photo and can we match it to an Amazon listing? Can we match the handbag to Net a Porter? And so you can imagine a suggestion from your phone: “do you want to add the date in this photo to your diary?” in much the same way that today email programs extract flights or meetings or contact details from emails.”

Apparently Google Lens is already doing something like this on Android phones.
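
To make the “is there a date in this picture?” idea concrete, here is a toy sketch of that single step. It assumes an OCR stage has already turned the photo into plain text; the pattern and the imaginary snapped schedule are illustrative only, not how Google Lens actually works.

```python
import re
from datetime import datetime

MONTHS = ("January|February|March|April|May|June|"
          "July|August|September|October|November|December")

def find_event_date(ocr_text):
    """Look for a date like '12 June 2019' in text OCR'd from a photo.

    Returns a datetime if one is found, otherwise None, so the phone
    could then offer: "do you want to add this date to your diary?"
    """
    match = re.search(rf"\b(\d{{1,2}}) ({MONTHS}) (\d{{4}})\b", ocr_text)
    if not match:
        return None
    return datetime.strptime(" ".join(match.groups()), "%d %B %Y")

# e.g. a snapped conference schedule:
print(find_event_date("Keynote: 12 June 2019, 9am"))   # -> 2019-06-12 00:00:00
```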

Rule #1: stop tweeting. And if you must use Twitter, just lurk

From Farhad Manjoo:

I’ve significantly cut back how much time I spend on Twitter, and — other than to self-servingly promote my articles and engage with my readers — I almost never tweet about the news anymore.

I began pulling back last year — not because I’m morally superior to other journalists but because I worried I was weaker.

I’ve been a Twitter addict since Twitter was founded. For years, I tweeted every ingenious and idiotic thought that came into my head, whenever, wherever; I tweeted from my wedding and during my kids’ births, and there was little more pleasing in life than hanging out on Twitter poring over hot news as it broke.

But Twitter is not that carefree clubhouse for journalism anymore. Instead it is the epicenter of a nonstop information war, an almost comically undermanaged gladiatorial arena where activists and disinformation artists and politicians and marketers gather to target and influence the wider media world.

And journalists should stop paying so much attention to what goes on in this toxic information sewer.