Facebook’s new gateway drug for kids

This morning’s Observer column:

In one of those coincidences that give irony a bad name, Facebook launched a new service for children at the same time that a moral panic was sweeping the UK about the dangers of children using live-streaming apps that enable anyone to broadcast video directly from a smartphone or a tablet. The BBC showed a scary example of what can happen. A young woman who works as an internet safety campaigner posed as a 14-year-old girl to find out what occurs when a young female goes online using one of these streaming services…

Read on

Facebook’s biggest ethical dilemma: unwillingness to acknowledge that it has one

There are really only two possible explanations for the crisis now beginning to engulf Facebook. One is that the company’s founder was — and perhaps still is — a smart but profoundly naive individual who knows little about the world or about human behaviour. The other is that he is — how shall I put it? — a sociopath, indifferent to what happens to people so long as his empire continues to grow.

I prefer the former explanation, but sometimes one wonders…

Consider Free Basics — the programme to bring Internet access to millions of people in poor countries. It works by pre-installing Facebook on cheap smartphones and striking deals with local mobile networks under which traffic to the Facebook app incurs no data charges.

The cynical interpretation of this is that it’s a way of furthering Zuckerberg’s goal of replacing the Internet with Facebook, creating the ultimate global walled garden. The charitable spin is the one Zuckerberg himself put on it — that Free Basics provides a way to connect people who would otherwise never go online.

Either way, the effects were predictable: new users in these countries think that Facebook is the Internet; and Facebook becomes the major channel for news. The NYT has a sobering report on what happened in Myanmar, where Facebook now has millions of users.

“Facebook has become sort of the de facto internet for Myanmar,” said Jes Kaliebe Petersen, chief executive of Phandeeyar, Myanmar’s leading technology hub that helped Facebook create its Burmese-language community standards page. “When people buy their first smartphone, it just comes preinstalled.”

But since the company took no editorial responsibility for what people used its service for, it seemed unable to spot what was happening when the service was turned to stirring up ethnic hatred and worse. “Facebook”, reports the Times,

has become a breeding ground for hate speech and virulent posts about the Rohingya. And because of Facebook’s design, posts that are shared and liked more frequently get more prominent placement in feeds, favoring highly partisan content in timelines.

Ashin Wirathu, the monk, has hundreds of thousands of followers on Facebook accounts in Burmese and English. His posts include graphic photos and videos of decaying bodies that Ashin Wirathu says are Buddhist victims of Rohingya attacks, or posts denouncing the minority ethnic group or updates that identify them falsely as “Bengali” foreigners.
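The “design” the Times mentions in passing is worth making concrete. In an engagement-ranked feed, nothing inspects what a post says; it rises simply because it is liked and shared, which puts it in more feeds, where it gets liked and shared some more. Here is a toy sketch of that feedback loop — the field names and weights are entirely hypothetical, and Facebook’s real ranking system is vastly more complex and not public:

```python
# Illustrative sketch only: a toy engagement-weighted ranker of the kind
# the Times describes. Field names and weights are hypothetical.

def engagement_score(post):
    """Score a post by raw engagement; shares weigh more than likes."""
    return 2.0 * post["shares"] + 1.5 * post["comments"] + 1.0 * post["likes"]

def rank_feed(posts):
    """The most-engaged-with posts get the most prominent placement."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "measured-report", "likes": 120, "shares": 10, "comments": 8},
    {"id": "partisan-rant", "likes": 300, "shares": 450, "comments": 220},
]
print([p["id"] for p in rank_feed(posts)])  # the partisan post ranks first
```

Note the structural point: no part of such a ranker knows or cares whether a post is a recipe or a call to ethnic violence. Inflammatory content wins simply because it generates more engagement.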

It’s the same story as everywhere else that Facebook has touched. A company that built a money-making advertising machine which gets its revenues from monetising user activity finds that sometimes that activity is very unsavoury and inhumane. And when this is finally realised, the company finds itself caught between a rock and a hard place, unwilling to accept responsibility for the unintended consequences of its wealth-generating machine.

The education of Mark Zuckerberg

This morning’s Observer column:

One of my favourite books is The Education of Henry Adams (published in 1918). It’s an extended meditation, written in old age by a scion of one of Boston’s elite families, on how the world had changed in his lifetime, and how his formal education had not prepared him for the events through which he had lived. This education had been grounded in the classics, history and literature, and had rendered him incapable, he said, of dealing with the impact of science and technology.

Re-reading Adams recently left me with the thought that there is now an opening for a similar book, The Education of Mark Zuckerberg. It would have an analogous theme, namely how the hero’s education rendered him incapable of understanding the world into which he was born. For although he was supposed to be majoring in psychology at Harvard, the young Zuckerberg mostly took computer science classes until he started Facebook and dropped out. And it turns out that this half-baked education has left him bewildered and rudderless in a culturally complex and politically polarised world…

Read on

Mad Men 2.0: The anthropology of the political

Gillian Tett, who is now the US Editor of the Financial Times, was trained as an anthropologist (which may be one reason why she spotted the fishy world of collateralised debt obligations and other dodgy derivatives before the specialists who covered the banking sector). She had some interesting reflections in last weekend’s FT about data-driven campaigning in the 2016 Presidential election.

These were based on visits she had paid to the data mavens of the Trump and Clinton campaigns during the election, from which she came away with some revealing insights into how the two camps had taken completely different views of what constituted ‘politics’.

“Until now”, she writes,

“whenever pollsters have been asked to do research on politics, they have generally focussed on the things that modern western society labels ‘political’ — such as voter registration, policy surveys, party affiliation, voting records, and so on”. Broadly speaking, this is the way Clinton’s data team viewed the electorate. They had a vast database based on past voting patterns, voter registration and affiliations that was much more comprehensive than anything the Trump crowd had. “But”, says Tett, “this database was backwards-looking and limited to ‘politics’”. And Clinton’s data scientists thought that politics began and ended with ‘politics’.

The Trump crowd (which seems mainly to have been Cambridge Analytica, a strange outfit that is part hype-machine and part applied psychometrics) took a completely different approach. As one of their executives told Tett,

“Enabling somebody and encouraging somebody to go out and vote on a wet Wednesday morning is no different in my mind to persuading and encouraging somebody to move from one toothpaste brand to another.” The task was, he said, “about understanding what message is relevant to that person at that time when they are in that particular mindset”.

This goes to the heart of what happened, in a way. It turned out that a sophisticated machine built for targeting finely calibrated commercial messages to particular consumers was also suitable for delivering calibrated political messages to targeted voters. And I suppose that shouldn’t have come as such a shock. After all, when TV first appeared, all of the expertise and resources of Madison Avenue’s “hidden persuaders” were brought to bear on political campaigning. So what we’re seeing now is just Mad Men 2.0.
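The executive’s toothpaste analogy translates directly into code: the same segment-and-target loop works whether the “conversion” is a purchase or a vote. The sketch below is purely illustrative — the traits, thresholds, segments and copy are all hypothetical, and it is not a description of Cambridge Analytica’s actual models:

```python
# Illustrative sketch only: the same targeting loop serves toothpaste
# and turnout alike. Traits, segments and copy are all hypothetical.

MESSAGES = {
    "security-minded": "Keep your neighbourhood safe. Vote on Wednesday.",
    "cost-conscious": "Your bills keep rising. Vote on Wednesday.",
}

def infer_segment(traits):
    """Crude stand-in for psychometric profiling: map inferred
    personality traits to a message segment."""
    return "security-minded" if traits.get("neuroticism", 0.0) > 0.6 else "cost-conscious"

def pick_message(voter):
    """Choose the variant 'relevant to that person at that time'."""
    return MESSAGES[infer_segment(voter["traits"])]

voters = [
    {"id": 1, "traits": {"neuroticism": 0.8}},
    {"id": 2, "traits": {"neuroticism": 0.2}},
]
for v in voters:
    print(v["id"], pick_message(v))
```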

How to be smart and clueless at the same time

Mark Zuckerberg’s ‘defence’ of Facebook’s role in the election of Trump provides a vivid demonstration of how someone can have a very high IQ and yet be completely clueless — as Zeynep Tufekci points out in a splendid NYT op-ed piece:

Mr. Zuckerberg’s preposterous defense of Facebook’s failure in the 2016 presidential campaign is a reminder of a structural asymmetry in American politics. It’s true that mainstream news outlets employ many liberals, and that this creates some systemic distortions in coverage (effects of trade policies on lower-income workers and the plight of rural America tend to be underreported, for example). But bias in the digital sphere is structurally different from that in mass media, and a lot more complicated than what programmers believe.

In a largely automated platform like Facebook, what matters most is not the political beliefs of the employees but the structures, algorithms and incentives they set up, as well as what oversight, if any, they employ to guard against deception, misinformation and illegitimate meddling. And the unfortunate truth is that by design, business model and algorithm, Facebook has made it easy for it to be weaponized to spread misinformation and fraudulent content. Sadly, this business model is also lucrative, especially during elections. Sheryl Sandberg, Facebook’s chief operating officer, called the 2016 election “a big deal in terms of ad spend” for the company, and it was. No wonder there has been increasing scrutiny of the platform.

Zuckerberg’s Frankenstein problem

Nice BuzzFeed piece by xxx about whether Zuck is really in charge, despite his controlling shares. Here’s the nub:

Facebook’s response to accusations about its role in the 2016 election since Nov. 9 bears this out, most notably Zuckerberg’s public comments immediately following the election that the claim that fake news influenced the US presidential election was “a pretty crazy idea.” In April, when Facebook released a white paper detailing the results of its investigation into fake news on its platform during the election, the company insisted it did not know the identity of the malicious actors using its network. And after recent revelations that Facebook had discovered Russian ads on its platform, the company maintained that as of April 2017, it was unaware of any Russian involvement. “When asked we said there was no evidence of Russian ads. That was true at the time,” Facebook told Mashable earlier this month.

Some critics of Facebook speak about the company’s leadership almost like an authoritarian government — a sovereign entity with virtually unchecked power and domineering ambition. So much so, in fact, that Zuckerberg is now frequently mentioned as a possible presidential candidate despite his public denials. But perhaps a better comparison might be the United Nations — a group of individuals endowed with the almost impossible responsibility of policing a network of interconnected autonomous powers. Just take Zuckerberg’s statement this week, in which he sounded strikingly like an embattled secretary-general: “It is a new challenge for internet communities to deal with nation-states attempting to subvert elections. But if that’s what we must do, we are committed to rising to the occasion,” he said.

Nice metaphor, this.

The Technical is Political

This morning’s Observer column:

In his wonderful book The Swerve: How the Renaissance Began, the literary historian Stephen Greenblatt traces the origins of the Renaissance back to the rediscovery of a 2,000-year-old poem by Lucretius, De Rerum Natura (On the Nature of Things). The book is a riveting explanation of how a huge cultural shift can ultimately spring from faint stirrings in the undergrowth.

Professor Greenblatt is probably not interested in the giant corporations that now dominate our world, but I am, and in the spirit of The Swerve I’ve been looking for signs that big changes might be on the way. You don’t have to dig very deep to find them…

Read on

Facebook meets irresistible force

Terrific blog post by Josh Marshall:

I believe what we’re seeing here is a convergence of two separate but highly charged news streams and political moments. On the one hand, you have the Russia probe, with all that is tied to that investigation. On another, you have the rising public backlash against Big Tech, the various threats it arguably poses and its outsized power in the American economy and American public life. A couple weeks ago, I wrote that after working with Google in various capacities for more than a decade I’d observed that Google is, institutionally, so accustomed to its customers actually being its products that when it gets into lines of business where its customers are really customers it really doesn’t know how to deal with them. There’s something comparable with Facebook.

Facebook is so accustomed to treating its ‘internal policies’ as though they were something like laws that they appear to have a sort of blind spot that prevents them from seeing how ridiculous their resistance sounds. To use the cliche, it feels like a real shark jumping moment. As someone recently observed, Facebook’s ‘internal policies’ are crafted to create the appearance of civic concerns for privacy, free speech, and other similar concerns. But they’re actually just a business model. Facebook’s ‘internal policies’ amount to a kind of Stepford Wives version of civic liberalism and speech and privacy rights, the outward form of the things preserved while the innards have been gutted and replaced by something entirely different, an aggressive and totalizing business model which in many ways turns these norms and values on their heads. More to the point, most people have the experience of Facebook’s ‘internal policies’ being meaningless in terms of protecting their speech or privacy or whatever as soon as they bump up against Facebook’s business model.

Spot on. Especially the Stepford Wives metaphor.

Why fake news will be hard to fix — it’s the users, stoopid

Here’s a telling excerpt from a fine piece about Facebook by Farhad Manjoo:

The people who work on News Feed aren’t making decisions that turn on fuzzy human ideas like ethics, judgment, intuition or seniority. They are concerned only with quantifiable outcomes about people’s actions on the site. That data, at Facebook, is the only real truth. And it is a particular kind of truth: The News Feed team’s ultimate mission is to figure out what users want — what they find “meaningful,” to use Cox and Zuckerberg’s preferred term — and to give them more of that.

This ideal runs so deep that the people who make News Feed often have to put aside their own notions of what’s best. “One of the things we’ve all learned over the years is that our intuition can be wrong a fair amount of the time,” John Hegeman, the vice president of product management and a News Feed team member, told me. “There are things you don’t expect will happen. And we learn a lot from that process: Why didn’t that happen, and what might that mean?” But it is precisely this ideal that conflicts with attempts to wrangle the feed in the way press critics have called for. The whole purpose of editorial guidelines and ethics is often to suppress individual instincts in favor of some larger social goal. Facebook finds it very hard to suppress anything that its users’ actions say they want. In some cases, it has been easier for the company to seek out evidence that, in fact, users don’t want these things at all.

Facebook’s two-year-long battle against “clickbait” is a telling example. Early this decade, the internet’s headline writers discovered the power of stories that trick you into clicking on them, like those that teasingly withhold information from their headlines: “Dustin Hoffman Breaks Down Crying Explaining Something That Every Woman Sadly Already Experienced.” By the fall of 2013, clickbait had overrun News Feed. Upworthy, a progressive activism site co-founded by Eli Pariser, the author of “The Filter Bubble,” that relied heavily on teasing headlines, was attracting 90 million readers a month to its feel-good viral posts.

If a human editor ran News Feed, she would look at the clickbait scourge and make simple, intuitive fixes: Turn down the Upworthy knob. But Facebook approaches the feed as an engineering project rather than an editorial one. When it makes alterations in the code that powers News Feed, it’s often only because it has found some clear signal in its data that users are demanding the change. In this sense, clickbait was a riddle. In surveys, people kept telling Facebook that they hated teasing headlines. But if that was true, why were they clicking on them? Was there something Facebook’s algorithm was missing, some signal that would show that despite the clicks, clickbait was really sickening users?
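One signal Facebook eventually said it used to resolve the riddle was how long people spent with a story after clicking: many clicks followed by quick bounces back to the feed mark a headline that overpromises. Here is a minimal sketch of that kind of heuristic — the field names and thresholds are hypothetical, not Facebook’s actual code:

```python
# Illustrative heuristic only: flag posts where many people click but
# few stay to read — the mismatch that suggests a teasing headline.
# Field names and thresholds are hypothetical.

def looks_like_clickbait(post, ctr_floor=0.10, dwell_floor_secs=15.0):
    """High click-through rate plus a short average visit suggests
    the headline overpromises relative to the story."""
    ctr = post["clicks"] / post["impressions"]
    return ctr > ctr_floor and post["avg_dwell_secs"] < dwell_floor_secs

post = {"impressions": 10_000, "clicks": 2_400, "avg_dwell_secs": 6.0}
print(looks_like_clickbait(post))  # True: clicked often, read rarely
```

Note how well this fits Manjoo’s point: even the fix is framed as a data signal about what users “really” want, not an editorial judgment about the headline itself.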

If you want to understand why fake news will be a hard problem to crack, this is a good place to start.