Good OpEd piece by Charlie Warzel:
Focusing only on moderation means that Facebook, YouTube and other platforms, such as Reddit, don’t have to answer for the ways in which their platforms are meticulously engineered to encourage the creation of incendiary content, rewarding it with eyeballs, likes and, in some cases, ad dollars. Or how that reward system creates a feedback loop that slowly pushes unsuspecting users further down a rabbit hole toward extremist ideas and communities.
On Facebook or Reddit this might mean the ways in which people are encouraged to share propaganda, divisive misinformation or violent images in order to amass likes and shares. It might mean the creation of private communities in which toxic ideologies are allowed to foment, unchecked. On YouTube, the same incentives have created cottage industries of shock jocks and livestreaming communities dedicated to bigotry cloaked in amateur philosophy.
The YouTube personalities and the communities that spring up around the videos become important recruiting tools for the far-right fringes. In some cases, new features like “Super Chat,” which allows viewers to donate to YouTube personalities during livestreams, have become major fund-raising tools for the platform’s worst users — essentially acting as online telethons for white nationalists.
Well, well. Months — years — after the various experiments with Facebook’s targeting engine showed how good it was at recommending unsavoury audiences, this latest report by the Los Angeles Times shows that it has lost none of its imaginative acuity.
Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” the neo-Nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.
Experts say that this practice runs counter to the company’s stated principles and can help fuel radicalization online.
“What you’re describing, where a clear hateful idea or narrative can be amplified to reach more people, is exactly what they said they don’t want to do and what they need to be held accountable for,” said Oren Segal, director of the Anti-Defamation League’s center on extremism.
Note, also, that the formulaic Facebook response hasn’t changed either:
After being contacted by The Times, Facebook said that it would remove many of the audience groupings from its ad platform.
“Most of these targeting options are against our policies and should have been caught and removed sooner,” said Facebook spokesman Joe Osborne. “While we have an ongoing review of our targeting options, we clearly need to do more, so we’re taking a broader look at our policies and detection methods.”
Ah, yes. That ‘broader look’ again.
This is a big day. The DCMS Select Committee has published its scarifying report into Facebook’s sociopathic exploitation of its users’ data and its cavalier attitude towards both legislators and the law. As I write, it is reportedly negotiating with the Federal Trade Commission (FTC) — the US regulator — on the multi-billion-dollar fine the agency is likely to levy on the company for breaking its 2011 Consent Decree.
Couldn’t happen to nastier people.
In the meantime, for those who don’t have the time to read the 110-page DCMS report, TechCrunch has a rather impressive and helpful summary — provided you don’t mind the rather oppressive GDPR spiel that accompanies it.
Interesting Scientific American article by Brett Frischmann and Devan Desai on how — paradoxically — personalised stimuli can produce homogenous responses:
This personalized-input-to-homogenous-output (“PIHO”) dynamic is quite common in the digital networked environment. What type of homogenous output would digital tech companies like to produce? Often, companies describe their objective as “engagement,” and that sounds quite nice, as if users are participating actively in very important activities. But what they mean is much narrower. Engagement usually refers to a narrow set of practices that generate data and revenues for the company, directly or via its network of side agreements with advertisers, data brokers, app developers, AI trainers, governments and so on.
For example, Facebook offers highly personalized services on a platform optimized to produce and reinforce a set of simple responses — scrolling the feed, clicking an ad, posting content, liking or sharing a post. These actions generate data, ad revenue, and sustained attention. It’s not that people always perform the same action; that degree of homogeneity and social control is neither necessary for Facebook’s interests nor our concerns. Rather, for many people much of the time, patterns of behavior conform to “engagement” scripts engineered by Facebook.
The point about what the companies actually regard as ‘user engagement’ is a useful reminder of how tech companies have become consummately adept at Orwellian doublespeak and euphemism. “In our time”, Orwell wrote in “Politics and the English Language”, “political speech and writing are largely the defence of the indefensible.” Well, in our time, we have strategic euphemisms like “the sharing economy”, “user engagement” and “connecting people”.
Terrific FT column by Rana Foroohar. Sample:
If the Facebook revelations prove anything, they show that its top leadership is not liberal, but selfishly libertarian. Political ideals will not get in the way of the company’s efforts to protect its share price. This was made clear by Facebook’s hiring of a rightwing consulting group, Definers Public Affairs, to try and spread misinformation about industry rivals to reporters and to demonise George Soros, who had a pipe bomb delivered to his home. At Davos in January, the billionaire investor made a speech questioning the power of platform technology companies.
Think about that for a minute. This is a company that was so desperate to protect its top leadership and its business model that it hired a shadowy PR firm that used anti-Semitism as a political weapon. Patrick Gaspard, president of the Open Society Foundations, founded by Mr Soros, wrote in a letter last week to Ms Sandberg: “The notion that your company, at your direction”, tried to “discredit people exercising their First Amendment rights to protest Facebook’s role in disseminating vile propaganda is frankly astonishing to me”.
I couldn’t agree more. Ms Sandberg says she didn’t know about the tactics being used by Definers Public Affairs. Mr Zuckerberg says that while he understands “DC type firms” might use such tactics, he doesn’t want them associated with Facebook and has cancelled its contract with Definers.
The irony of that statement could be cut with a knife. Silicon Valley companies are among the nation’s biggest corporate lobbyists. They’ve funded many academics doing research on topics of interest to them, and have made large donations to many powerful politicians…
There is a strange consistency in the cant coming from Zuckerberg and Sandberg as they try to respond to the NYT‘s exhumation of their attempts to avoid responsibility for Facebook’s malignancy. It’s what PR flacks call “plausible deniability”. Time and again, the despicable or ethically-dubious actions taken by Facebook apparently come as a complete surprise to the two at the very top of the company — Zuckerberg and Sandberg. I’m afraid that particular cover story is beginning to look threadbare.
Interesting column by Farhad Manjoo:
Because Apple makes money by selling phones rather than advertising, it has been able to hold itself up as a guardian against a variety of digital plagues: a defender of your privacy, an agitator against misinformation and propaganda, and even a plausible warrior against tech addiction, a problem enabled by the very irresistibility of its own devices.
Though it is already more profitable than any of its rivals, Apple appears likely to emerge even stronger from tech’s season of crisis. In the long run, its growing strength could profoundly alter the industry.
For years, start-ups aiming for consumer audiences modeled themselves on Google and Facebook, offering innovations to the masses at rock-bottom prices, if not for free. But there are limits to the free-lunch model.
If Apple’s more deliberate business becomes the widely followed norm, we could see an industry that is more careful about tech’s dangers and excesses. It could also be one that is more exclusive, where the wealthy get the best innovations and the poor bear more of the risks.
Yep. Users of those ‘free’ services wind up as feedstock for surveillance capitalism. The moral of the story: honest business models — in which you pay for what you get — are better. Or, as Manjoo puts it:
The thrust of Apple’s message is simple: Paying directly for technology is the best way to ensure your digital safety, and every fresh danger uncovered online is another reason to invest in the Apple way of life.
The problem is that that particular ‘way of life’ is expensive.
Interesting NYT piece by Kevin Roose in which he points out that the key question about regulating Facebook is not whether lawmakers understand how it works, but whether they have the political will to regulate it. My hunch is that they don’t, but if they did, the first thing to do would be to fix on some clear ideas about what’s wrong with the company.
Here’s the list of possibilities cited by Roose:
- Is it that Facebook is too cavalier about sharing user data with outside organizations?
- Is it that Facebook collects too much data about users in the first place?
- Is it that Facebook is promoting addictive messaging products to children?
- Is it that Facebook’s news feed is polarizing society, pushing people to ideological fringes?
- Is it that Facebook is too easy for political operatives to exploit, or that it does not do enough to keep false news and hate speech off users’ feeds?
- Is it that Facebook is simply too big, or a monopoly that needs to be broken up?
How about: all of the above?
This morning’s Observer column:
Jeremy Paxman, who once served as Newsnight’s answer to the pit-bull terrier, famously outlined his philosophy in interviewing prominent politicians thus: “Why is this lying bastard lying to me?” This was unduly prescriptive: not all of Paxman’s interviewees were outright liars; they were merely practitioners of the art of being “economical with the truth”, but it served as a useful heuristic for a busy interviewer.
Maybe the time has come to apply the same heuristic to Facebook’s public statements…
This morning’s Observer column:
Early in 2009, two former Yahoo employees, Brian Acton and Jan Koum, sat down to try and create a smartphone messaging app. They had a few simple design principles. One was that it should be easy to use: no complicated log-in and authentication procedures; instead, each user would be identified by his or her mobile number. And second, the app should have an honest business model – no more pretending it’s free while covertly monetising users’ data: instead, users would pay $1 a year after a certain period. Searching for a name for their service, they came up with WhatsApp, a play on “What’s Up?”
Total revenue up 47%. Net income up 56%.