Sleepwalking into dystopia

This morning’s Observer column:

When the history of our time comes to be written, one of the things that will puzzle historians (assuming any have survived the climate cataclysm) is why we allowed ourselves to sleepwalk into dystopia. Ever since 9/11, it’s been clear that western democracies had embarked on a programme of comprehensive monitoring of their citizenry, usually with erratic and inadequate democratic oversight. But we only began to get a fuller picture of the extent of this surveillance when Edward Snowden broke cover in the summer of 2013.

For a time, the dramatic nature of the Snowden revelations focused public attention on the surveillance activities of the state. In consequence, we stopped thinking about what was going on in the private sector. The various scandals of 2016, and the role that network technology played in the political upheavals of that year, constituted a faint alarm call about what was happening, but in general our peaceful slumbers resumed: we went back to our smartphones and the tech giants continued their appropriation, exploitation and abuse of our personal data without hindrance. And this continued even though a host of academic studies and a powerful book by Shoshana Zuboff showed that, as the cybersecurity guru Bruce Schneier put it, “the business model of the internet is surveillance”.

The mystery is why so many of us are still apparently relaxed about what’s going on…

Read on

What comes after Spotify?

Shortly after I wrote Building vs. Streaming, in popped an email from Drew Austin, who was musing about what happens when a new product/service fills a void and thereby leads to the decline of whatever filled it beforehand.

Here’s the money quote:

The increasingly-maligned model of VC-funded, loss-leading hypergrowth in the pursuit of market dominance, understood another way, is a quest to create voids that matter, voids that will hurt if we let them emerge by rejecting the product currently filling them (the fissures of a post-WeWork world are at least perceptible now). In the early ‘00s, when Blockbuster died out, it was clear that something better was replacing it (there’s a nostalgic counterargument that I’m tempted to indulge, but let’s just accept this). Today, it’s more common to watch something decline without a replacement that’s clearly better. It’s easy to understand why physical media led to file-sharing and then streaming, but what comes after Netflix and Spotify? Does anyone think it’s likely to be another improvement? I don’t, and the companies’ Facebook-like pursuit of absolute ubiquity is why. Unlike the immediately-filled Blockbuster void, I fear the Spotify void. I already got rid of all my CDs. The residue of buildings and cities determines what gets built on top of them, and if we’re conscientious, we’ll build with a more distant future in mind.

The myth of American competitiveness

Most of the complacent guff about how American capitalism is better than its counterparts in other parts of the world is just that — guff.

The economist Thomas Philippon has done a terrific, data-intensive demolition job on the myth. In The Great Reversal: How America Gave Up on Free Markets he shows that America is no longer the spiritual home of the free-market economy (any more than Westminster is now “the mother of Parliaments”). Competition there is not fiercer than it is in ‘old’ Europe. Its regulators have been asleep at the wheel for decades and its latest crop of giant companies are not all that different from their predecessors.

Or, as he puts it:

“First, US markets have become less competitive: concentration is high in many industries, leaders are entrenched, and their profit rates are excessive. Second, this lack of competition has hurt consumers and workers: it has led to higher prices, lower investment and lower productivity growth. Third, and contrary to popular wisdom, the main explanation is political, not technological: I have traced the decrease in competition to increasing barriers to entry and weak antitrust enforcement, sustained by heavy lobbying and campaign contributions.”

So next time some tech evangelist starts to rant on about how backward Europe is, the appropriate reply is: give me a break.

The dark underbelly of social media

My Observer review of Behind the Screen, Sarah T. Roberts’s remarkable exploration of the exploitative world of content ‘moderation’.

The best metaphor for the net is to think of it as a mirror held up to human nature. All human life really is there. There’s no ideology, fetish, behaviour, obsession, perversion, eccentricity or fad that doesn’t find expression somewhere online. And while much of what we see reflected back to us is uplifting, banal, intriguing, harmless or fascinating, some of it is truly awful, for the simple reason that human nature is not only infinitely diverse but also sometimes unspeakably cruel.

In the early days of the internet and, later, the web, this didn’t matter so much. But once cyberspace was captured by a few giant platforms, particularly Google, YouTube, Twitter and Facebook, then it became problematic. The business models of these platforms depended on encouraging people to upload content to them in digital torrents. “Broadcast yourself”, remember, was once the motto of YouTube.

And people did – as they slit the throats of hostages in the deserts of Arabia, raped three-year-old girls, shot an old man in the street, firebombed the villages of ethnic minorities or hanged themselves on camera…

All of which posed a problem for the social media brands, which liked to present themselves as facilitators of creativity, connectivity and good clean fun, an image threatened by the tide of crud that was coming at them. So they started employing people to filter and manage it. They were called “moderators” and for a long time they were kept firmly under wraps, so that nobody knew about them.

That cloak of invisibility began to fray as journalists and scholars started to probe this dark underbelly of social media…

Read on

Fines don’t work. To control tech companies we have to hit them where it really hurts

Today’s Observer comment piece

If you want a measure of the problem society will have in controlling the tech giants, then ponder this: as it has become clear that the US Federal Trade Commission is about to impose a fine of $5bn (£4bn) on Facebook for violating a decree governing privacy breaches, the company’s share price went up!

This is a landmark moment. It’s the biggest ever fine imposed by the FTC, the body set up to police American capitalism. And $5bn is a lot of money in anybody’s language. Anybody’s but Facebook’s. It represents just a month of revenues and the stock market knew it. Facebook’s capitalisation went up $6bn with the news. This was a fine that actually increased Mark Zuckerberg’s personal wealth…

Read on

How Silicon Valley lost its shine

This morning’s Observer column:

Remember the time when tech companies were cool? So do I. Once upon a time, Silicon Valley was the jewel in the American crown, a magnet for high IQ – and predominately male – talent from all over the world. Palo Alto was the centre of what its more delusional inhabitants regarded as the Florence of Renaissance 2.0. Parents swelled with pride when their offspring landed a job with the Googles, Facebooks and Apples of that world, where they stood a sporting chance of becoming as rich as they might have done if they had joined Goldman Sachs or Lehman Brothers, but without the moral odium attendant on investment banking. I mean to say, where else could you be employed by a company to which every president, prime minister and aspirant politician craved an invitation? Where else could you be part of inventing the future?

But that was then and this is now…

Read on

Getting things into perspective

From Zeynep Tufekci:

We don’t have to be resigned to the status quo. Facebook is only 13 years old, Twitter 11, and even Google is but 19. At this moment in the evolution of the auto industry, there were still no seat belts, airbags, emission controls, or mandatory crumple zones. The rules and incentive structures underlying how attention and surveillance work on the internet need to change. But in fairness to Facebook and Google and Twitter, while there’s a lot they could do better, the public outcry demanding that they fix all these problems is fundamentally mistaken. There are few solutions to the problems of digital discourse that don’t involve huge trade-offs—and those are not choices for Mark Zuckerberg alone to make. These are deeply political decisions. In the 20th century, the US passed laws that outlawed lead in paint and gasoline, that defined how much privacy a landlord needs to give his tenants, and that determined how much a phone company can surveil its customers. We can decide how we want to handle digital surveillance, attention-channeling, harassment, data collection, and algorithmic decision-making. We just need to start the discussion. Now.

Focussing on the difficulty of ‘moderating’ vile content obscures the real problem

Good op-ed piece by Charlie Warzel:

Focusing only on moderation means that Facebook, YouTube and other platforms, such as Reddit, don’t have to answer for the ways in which their platforms are meticulously engineered to encourage the creation of incendiary content, rewarding it with eyeballs, likes and, in some cases, ad dollars. Or how that reward system creates a feedback loop that slowly pushes unsuspecting users further down a rabbit hole toward extremist ideas and communities.

On Facebook or Reddit this might mean the ways in which people are encouraged to share propaganda, divisive misinformation or violent images in order to amass likes and shares. It might mean the creation of private communities in which toxic ideologies are allowed to foment, unchecked. On YouTube, the same incentives have created cottage industries of shock jocks and livestreaming communities dedicated to bigotry cloaked in amateur philosophy.

The YouTube personalities and the communities that spring up around the videos become important recruiting tools for the far-right fringes. In some cases, new features like “Super Chat,” which allows viewers to donate to YouTube personalities during livestreams, have become major fund-raising tools for the platform’s worst users — essentially acting as online telethons for white nationalists.

Facebook’s targeting engine: still running smoothly on all cylinders

Well, well. Months — years — after various experiments with Facebook’s targeting engine showed how good it was at recommending unsavoury audiences, this latest report by the Los Angeles Times shows that it has lost none of its imaginative acuity.

Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” the neo-nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.

Experts say that this practice runs counter to the company’s stated principles and can help fuel radicalization online.

“What you’re describing, where a clear hateful idea or narrative can be amplified to reach more people, is exactly what they said they don’t want to do and what they need to be held accountable for,” said Oren Segal, director of the Anti-Defamation League’s center on extremism.

Note also that the formulaic Facebook response hasn’t changed either:

After being contacted by The Times, Facebook said that it would remove many of the audience groupings from its ad platform.

“Most of these targeting options are against our policies and should have been caught and removed sooner,” said Facebook spokesman Joe Osborne. “While we have an ongoing review of our targeting options, we clearly need to do more, so we’re taking a broader look at our policies and detection methods.”

Ah, yes. That ‘broader look’ again.