Facebook’s biggest ethical dilemma: unwillingness to acknowledge that it has one

There are really only two possible explanations for the crisis now beginning to engulf Facebook. One is that the company’s founder was — and perhaps still is — a smart but profoundly naive individual who knows little about the world or about human behaviour. The other is that he is — how shall I put it? — a sociopath, indifferent to what happens to people so long as his empire continues to grow.

I prefer the former explanation, but sometimes one wonders…

Consider Free Basics, the program to bring Internet access to millions of people in poor countries. It works by having Facebook pre-installed on cheap smartphones, together with deals with local mobile networks under which traffic to the Facebook app incurs no data charges.

The cynical interpretation of this is that it’s a way of furthering Zuckerberg’s goal of replacing the Internet with Facebook, creating the ultimate global walled garden. The charitable spin is the one Zuckerberg himself put on it — that Free Basics provides a way to connect people who would otherwise never go online.

Either way, the effects were predictable: new users in these countries think that Facebook is the Internet; and Facebook becomes the major channel for news. The NYT has a sobering report on what happened in Myanmar, where Facebook now has millions of users.

“Facebook has become sort of the de facto internet for Myanmar,” said Jes Kaliebe Petersen, chief executive of Phandeeyar, Myanmar’s leading technology hub that helped Facebook create its Burmese-language community standards page. “When people buy their first smartphone, it just comes preinstalled.”

But since the company accepted no editorial responsibility for what people used its service for, it seemed unable to spot that the service was being used to stir up ethnic hatred and worse. “Facebook”, reports the Times,

has become a breeding ground for hate speech and virulent posts about the Rohingya. And because of Facebook’s design, posts that are shared and liked more frequently get more prominent placement in feeds, favoring highly partisan content in timelines.

Ashin Wirathu, the monk, has hundreds of thousands of followers on Facebook accounts in Burmese and English. His posts include graphic photos and videos of decaying bodies that Ashin Wirathu says are Buddhist victims of Rohingya attacks, or posts denouncing the minority ethnic group or updates that identify them falsely as “Bengali” foreigners.

It’s the same story as everywhere else Facebook has touched. A company that built a money-making advertising machine which gets its revenues from monetising user activity finds that sometimes that activity is very unsavoury and inhumane. And when this is finally realised, it finds itself caught between a rock and a hard place, unwilling to accept responsibility for the unintended consequences of its wealth-generating machine.

What’s coming in iOS 11

Ah, at last something interesting:

In September, Apple will release new changes to Safari with iOS 11 called “Intelligent Tracking Prevention.” These changes will have large effects on the ad tech industry and create new winners and losers.

In short, the iOS 11 changes will really help the big guys, are neutral to the small guys and significantly hurt the mid-size guys.

Hmmm… Not sure that this will be a boon to the world (cf. the point about helping the big guys). I’ll continue to use my own protective measures.

Zuckerberg’s virtual world

This morning’s Observer column:

On Thursday 16 February, Mark Zuckerberg, the founder and supreme leader of Facebook, the world’s most populous virtual country (population 2bn), published an epistle to his 89m disciple-followers. “Building Global Community” was the headline. “On our journey to connect the world,” the supreme leader began, “we often discuss products we’re building and updates on our business. Today I want to focus on the most important question of all: are we building the world we all want?”

Good question. But wait a minute, who’s the “we” here? It crops up 156 times in the 5,700-word epistle…

Read on

Facebook: the Psychopaths ‘R Us channel

Yesterday’s Observer column:

The old adage “be careful what you wish for” comes to mind. A while back, Facebook launched Facebook Live, a service that enables its users to broadcast live video to the world. Shortly after the service was activated, the company’s founder and CEO, Mark Zuckerberg, said that the service would support all the “personal and emotional and raw and visceral” ways that people communicate. Users were encouraged to “go live” in casual settings – waiting for baggage at the airport, for example, or eating at a restaurant.

Note the phrase “raw and visceral”. Facebook Live has already broadcast a live stream of a young disabled man being tied up, gagged and attacked with a knife. In March, two Chicago teenage boys live-streamed themselves gang-raping a teenage girl. And around 40 Facebook users watched the video without reporting it either to Facebook or the police.

That’s pretty raw and visceral, you might think. But it turns out that it was just a prelude…

Read on

Digital realities

From an interesting NYT piece on how Google is coining money by allowing firms to put product information in the space immediately below the search bar.

Product ads that appeal to shoppers are also strategically important because consumers are starting their online shopping at Amazon.com. Last year, a survey of 2,000 American shoppers found that 55 percent turn to Amazon first when searching for a product, while only 28 percent start with a web search.

YouTube, dodgy content and the advertising business

There’s much hoo-hah about major corporations suddenly being scandalised by the discovery that their digital advertising appears alongside all kinds of objectionable YouTube videos. A sceptic might ask: what took them so long to realise this? One answer might be that most corporate executives don’t ever look at the kind of stuff on YouTube that kids watch. But, anyway, now they have discovered what’s been going on and they are reacting like scandalised Victorian spinsters who have just caught a glimpse of a naked ankle.

And they’re withdrawing their ads. As the NYT reports this morning:

But the technology underpinning YouTube’s advertising business has come under intense scrutiny in recent days, with AT&T, Johnson & Johnson and other deep-pocketed marketers announcing that they would pull their ads from the service. Their reason: The automated system in which ads are bought and placed online has too often resulted in brands appearing next to offensive material on YouTube such as hate speech.

On Thursday, the ride-sharing service Lyft became the latest example, removing its ads after they appeared next to videos from a racist skinhead group.

Google, sensing that this is going to turn into a PR nightmare, is responding in textbook fashion: soothing noises, promises of technological fixes, plus gentle chiding of advertisers for apparently failing to use the ‘tools’ that Google provides to reduce the likelihood of their ads appearing in undesirable company.

The big question, of course, is whether this fuss is likely to do real damage to the company. Lots of investment analysts are crawling all over this. Here’s one assessment from RBC Capital Markets:

We built a very simple analysis based on the following assumptions: YT [YouTube] is roughly $14B of Revenue in 2017, and GDN [Google Display Network] is $4.8B (GOOG does not disclose so these are very rough ests); GDN pays 70% TAC; YT & GDN have roughly the same Net margins as our estimates for Alphabet as a whole (26.7% GAAP NI margins on Net Revenue). With this as a backdrop, a 10% hit to YT and GDN revenue in 2017 would be a 1.7% reduction to Net Revenue / GAAP EPS ($1.5B / $0.59) and a 2% hit would be 0.3% reduction ($309MM / $0.12).

If that’s right, this controversy is relatively small beer as far as Google is concerned. My conclusion: expect more soothing PR-speak and very little action.
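If you want to see how RBC’s back-of-envelope numbers hang together, here is a minimal sketch of the same arithmetic in Python. The YouTube and GDN revenue figures, the 70% TAC and the 26.7% margin come straight from the quote; the total Alphabet net revenue (roughly $90bn) and diluted share count (roughly 700m) are my own assumptions, inserted only so that the percentage and per-share figures can be reproduced.

```python
# Back-of-envelope reproduction of the RBC scenario quoted above.
# From the quote: YT ~$14B gross revenue, GDN ~$4.8B gross, GDN pays 70% TAC,
# 26.7% GAAP net-income margin on net revenue.
# My assumptions (not in the quote): ~$90B total Alphabet 2017 net revenue
# and ~700M diluted shares, used only to back out the percentages and EPS.

YT_GROSS = 14.0          # $B, YouTube 2017 revenue (rough estimate from the quote)
GDN_GROSS = 4.8          # $B, Google Display Network revenue
GDN_TAC = 0.70           # GDN pays out 70% as traffic acquisition cost
NI_MARGIN = 0.267        # GAAP net income margin on net revenue
TOTAL_NET_REV = 90.0     # $B, assumed Alphabet 2017 net revenue
SHARES = 0.70            # billions of diluted shares, assumed

net_rev_at_risk = YT_GROSS + GDN_GROSS * (1 - GDN_TAC)   # ~$15.4B of net revenue

for hit in (0.10, 0.02):
    rev_loss = hit * net_rev_at_risk          # $B of net revenue lost
    ni_loss = rev_loss * NI_MARGIN            # $B of net income lost
    print(f"{hit:.0%} hit: ${rev_loss:.2f}B net revenue "
          f"({rev_loss / TOTAL_NET_REV:.1%} of total), "
          f"EPS impact ~${ni_loss / SHARES:.2f}")
```

Run as written, this comes back with the quoted figures: roughly $1.5bn / 1.7% / $0.59 for a 10% hit, and $0.3bn / 0.3% / $0.12 for a 2% hit.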

Don’t let WhatsApp nudge you into sharing your data with Facebook

This morning’s Observer column:

When WhatsApp, the messaging app, launched in 2009, it struck me as one of the most interesting innovations I’d seen in ages – for two reasons. The first was that it seemed beautifully designed from the outset: it was clean, minimalist and efficient; and, secondly, it had a business model that did not depend on advertising. Instead, users got a year free, after which they paid a modest annual subscription.

Better still, the co-founder, Jan Koum, seemed to have a very healthy aversion to the surveillance capitalism that underpins the vast revenues of Google, Facebook and co, in which they extract users’ personal data without paying for it, and then refine and sell it to advertisers…

Ah yes. That was then. But now…

Read on

An hour a day

From today’s New York Times:

Facebook reported dazzling first quarter results last week: Net income nearly tripled to $1.5 billion, and monthly active users hit a record 1.65 billion. But it’s a much smaller number that leapt out at me.

Fifty minutes.

That’s the average amount of time, the company said, that users spend each day on its Facebook, Instagram and Messenger platforms (and that’s not counting the popular messaging app WhatsApp).

Maybe that doesn’t sound like so much. But there are only 24 hours in a day, and the average person sleeps for 8.8 of them. That means more than one-sixteenth of the average user’s waking time is spent on Facebook.

The average time that users spend on Facebook is nearing an hour…

But it’s not enough, because surveillance capitalism requires an ever-expanding amount of data.
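For the record, the waking-time fraction is easy to check with a quick sketch that uses only the figures quoted above (50 minutes a day across Facebook’s apps, 8.8 hours of sleep):

```python
# Share of waking time spent on Facebook, Instagram and Messenger,
# using only the figures quoted above.
facebook_minutes = 50                # average daily time across the three apps
waking_minutes = (24 - 8.8) * 60     # 15.2 waking hours = 912 minutes

share = facebook_minutes / waking_minutes
print(f"{share:.1%} of the average user's waking time")   # -> 5.5%
```

That is in the same ballpark as the one-sixteenth (6.25%) the Times cites.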