Fines don’t work. To control tech companies we have to hit them where it really hurts

Today’s Observer comment piece

If you want a measure of the problem society will have in controlling the tech giants, ponder this: when it became clear that the US Federal Trade Commission was about to impose a fine of $5bn (£4bn) on Facebook for violating a decree governing privacy breaches, the company’s share price went up!

This is a landmark moment. It’s the biggest ever fine imposed by the FTC, the body set up to police American capitalism. And $5bn is a lot of money in anybody’s language. Anybody’s but Facebook’s. It represents just a month of revenues and the stock market knew it. Facebook’s capitalisation went up $6bn with the news. This was a fine that actually increased Mark Zuckerberg’s personal wealth…

Read on

How Silicon Valley lost its shine

This morning’s Observer column:

Remember the time when tech companies were cool? So do I. Once upon a time, Silicon Valley was the jewel in the American crown, a magnet for high-IQ – and predominantly male – talent from all over the world. Palo Alto was the centre of what its more delusional inhabitants regarded as the Florence of Renaissance 2.0. Parents swelled with pride when their offspring landed a job with the Googles, Facebooks and Apples of this world, where they stood a sporting chance of becoming as rich as they might have done if they had joined Goldman Sachs or Lehman Brothers, but without the moral odium attendant on investment banking. I mean to say, where else could you be employed by a company to which every president, prime minister and aspirant politician craved an invitation? Where else could you be part of inventing the future?

But that was then and this is now…

Read on

Getting things into perspective

From Zeynep Tufekci:

We don’t have to be resigned to the status quo. Facebook is only 13 years old, Twitter 11, and even Google is but 19. At this moment in the evolution of the auto industry, there were still no seat belts, airbags, emission controls, or mandatory crumple zones. The rules and incentive structures underlying how attention and surveillance work on the internet need to change. But in fairness to Facebook and Google and Twitter, while there’s a lot they could do better, the public outcry demanding that they fix all these problems is fundamentally mistaken. There are few solutions to the problems of digital discourse that don’t involve huge trade-offs—and those are not choices for Mark Zuckerberg alone to make. These are deeply political decisions. In the 20th century, the US passed laws that outlawed lead in paint and gasoline, that defined how much privacy a landlord needs to give his tenants, and that determined how much a phone company can surveil its customers. We can decide how we want to handle digital surveillance, attention-channeling, harassment, data collection, and algorithmic decision­making. We just need to start the discussion. Now.

Focussing on the difficulty of ‘moderating’ vile content obscures the real problem

Good op-ed piece by Charlie Warzel:

Focusing only on moderation means that Facebook, YouTube and other platforms, such as Reddit, don’t have to answer for the ways in which their platforms are meticulously engineered to encourage the creation of incendiary content, rewarding it with eyeballs, likes and, in some cases, ad dollars. Or how that reward system creates a feedback loop that slowly pushes unsuspecting users further down a rabbit hole toward extremist ideas and communities.

On Facebook or Reddit this might mean the ways in which people are encouraged to share propaganda, divisive misinformation or violent images in order to amass likes and shares. It might mean the creation of private communities in which toxic ideologies are allowed to foment, unchecked. On YouTube, the same incentives have created cottage industries of shock jocks and livestreaming communities dedicated to bigotry cloaked in amateur philosophy.

The YouTube personalities and the communities that spring up around the videos become important recruiting tools for the far-right fringes. In some cases, new features like “Super Chat,” which allows viewers to donate to YouTube personalities during livestreams, have become major fund-raising tools for the platform’s worst users — essentially acting as online telethons for white nationalists.

Facebook’s targeting engine: still running smoothly on all cylinders

Well, well. Months — years — after the various experiments with Facebook’s targeting engine showed how good it was at recommending unsavoury audiences, this latest report by the Los Angeles Times shows that it’s lost none of its imaginative acuity.

Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” the neo-nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.

Experts say that this practice runs counter to the company’s stated principles and can help fuel radicalization online.

“What you’re describing, where a clear hateful idea or narrative can be amplified to reach more people, is exactly what they said they don’t want to do and what they need to be held accountable for,” said Oren Segal, director of the Anti-Defamation League’s center on extremism.

Note also that the formulaic Facebook response hasn’t changed either:

After being contacted by The Times, Facebook said that it would remove many of the audience groupings from its ad platform.

“Most of these targeting options are against our policies and should have been caught and removed sooner,” said Facebook spokesman Joe Osborne. “While we have an ongoing review of our targeting options, we clearly need to do more, so we’re taking a broader look at our policies and detection methods.”

Ah, yes. That ‘broader look’ again.

Facebook: the regulatory noose tightens

This is a big day. The DCMS Select Committee has published its scarifying report into Facebook’s sociopathic exploitation of its users’ data and its cavalier attitude towards both legislators and the law. As I write, Facebook is reportedly negotiating with the Federal Trade Commission (FTC) — the US regulator — over the multi-billion-dollar fine the agency is likely to levy on the company for breaking its 2011 Consent Decree.

Couldn’t happen to nastier people.

In the meantime, for those who don’t have the time to read the 110-page DCMS report, TechCrunch has a rather impressive and helpful summary — provided you don’t mind the rather oppressive GDPR spiel that accompanies it.

What’s in a name?

On my way to Brussels to chair a discussion on Shoshana Zuboff’s The Age of Surveillance Capitalism I fell to reading Leo Marx’s celebrated essay, “Technology: The Emergence of a Hazardous Concept”, in which he ponders when — and why — the term ‘technology’ emerged. The term — in its modern sense of “the mechanical arts generally” — did not enter public discourse until around 1900, “when a few influential writers, notably Thorstein Veblen and Charles Beard, responding to German usage in the social sciences, accorded technology a pivotal role in shaping modern industrial society.”

Marx thinks that, to a cultural historian, some new terms, when they emerge, serve “as markers, or chronological signposts, of subtle, virtually unremarked, yet ultimately far-reaching changes in culture and society.”

His assumption, he writes,

“is that those changes, whatever they were, created a semantic—indeed, a conceptual—void, which is to say, an awareness of certain novel developments in society and culture for which no adequate name had yet become available. It was this void, presumably, that the word technology, in its new and extended meaning, eventually would fill.”

Which brought me back to musing about Zuboff’s new book and why it (and the two or three major essays of hers that preceded it) came as a flash of illumination. Especially the title. What ‘void’ (to use Marx’s idea) does it fill?

On reflection I think the answer lies in the conceptual vacuity of the terms we have traditionally used to describe the phenomenon of digital technology — in particular the trope of “the Fourth Industrial Revolution” beloved of the Davos crowd, or “the digital era” (passim). For one thing, these terms are drenched in technological determinism, implying as they do that it’s the technology and its innate affordances that are driving contemporary history. In that sense these clichés are the spiritual heirs of “the Age of Machinery” — Thomas Carlyle’s coinage for the industrial revolution of his day.

That’s why ‘Surveillance Capitalism’ represents a conceptual breakthrough. It assumes that our condition is inexorably determined not by the innate affordances of digital technology, but by the particular ways in which capitalism has morphed in order to exploit them for its own purposes.

The perniciousness of ‘personalisation’

Interesting Scientific American article by Brett Frischmann and Devan Desai on how — paradoxically — personalised stimuli can produce homogenous responses:

This personalized-input-to-homogenous-output (“PIHO”) dynamic is quite common in the digital networked environment. What type of homogenous output would digital tech companies like to produce? Often, companies describe their objective as “engagement,” and that sounds quite nice, as if users are participating actively in very important activities. But what they mean is much narrower. Engagement usually refers to a narrow set of practices that generate data and revenues for the company, directly or via its network of side agreements with advertisers, data brokers, app developers, AI trainers, governments and so on.

For example, Facebook offers highly personalized services on a platform optimized to produce and reinforce a set of simple responses — scrolling the feed, clicking an ad, posting content, liking or sharing a post. These actions generate data, ad revenue, and sustained attention. It’s not that people always perform the same action; that degree of homogeneity and social control is neither necessary for Facebook’s interests nor our concerns. Rather, for many people much of the time, patterns of behavior conform to “engagement” scripts engineered by Facebook.

The point about what the companies actually regard as ‘user engagement’ is a useful reminder of how tech companies have become consummately adept at Orwellian doublespeak and euphemism. “In our time”, Orwell wrote in “Politics and the English Language”, “political speech and writing are largely the defence of the indefensible.” Well, in our time, we have strategic euphemisms like “the sharing economy”, “user engagement” and “connecting people”.

The end of The End of History man?

From a scarifying review by Stephen Holmes of Francis Fukuyama’s new book, Identity: the Demand for Dignity and the Politics of Resentment:

Fukuyama is right to reject criticism that his first book, The End of History and the Last Man (1992), was an expression of liberal triumphalism. Its gloomy insistence on the spiritual meaninglessness likely to befall late capitalist societies, in which atheist consumers have nothing serious to live for, rules out such breezy optimism. But he did imply, paradoxically, that after the wholly unanticipated collapse of communism there would be no more surprises about “the default form of government for much of the world, at least in aspiration.” What he now sees, but could not have foreseen at the time, was that the high tide of liberal democracy would last a mere fifteen years: “Beginning in the mid-2000s, the momentum toward an increasingly open and liberal world order began to falter, then went into reverse.” Identity politics, he has now concluded, explains why liberal democracy has ceased to impress much of the world as the ideal form of political and social organization.

Fukuyama’s analysis, says Holmes,

is flawed in several ways. Three decades ago, he argued that the human desire for respect and recognition was the driving force behind the universal embrace of liberal democracy. Today, he depicts the human desire for respect and recognition as the driving force behind the repudiation of liberal democracy. The reader’s hope for some account, or even mention, of this extraordinary volte face goes unfulfilled. Nor does Fukuyama squarely address the impossibility of explaining recent ups and downs in the prestige of liberal democracy by invoking an eternal longing of the human soul. What’s more, he fails to consider the possibility that after 1989 the obligation for ex-Communist countries to imitate the West, which was how his End-of-History thesis was put into practice, might itself have been experienced in countries like Hungary and Poland as a source of humiliation and subordination destined to excite antiliberal resentment and an aggressive reassertion of nationalism.

Wow! Great review.