Hypocrisy on stilts

Terrific FT column by Rana Foroohar. Sample:

If the Facebook revelations prove anything, they show that its top leadership is not liberal, but selfishly libertarian. Political ideals will not get in the way of the company’s efforts to protect its share price. This was made clear by Facebook’s hiring of a rightwing consulting group, Definers Public Affairs, to try and spread misinformation about industry rivals to reporters and to demonise George Soros, who had a pipe bomb delivered to his home. At Davos in January, the billionaire investor made a speech questioning the power of platform technology companies.

Think about that for a minute. This is a company that was so desperate to protect its top leadership and its business model that it hired a shadowy PR firm that used anti-Semitism as a political weapon. Patrick Gaspard, president of the Open Society Foundations, founded by Mr Soros, wrote in a letter last week to Ms Sandberg: “The notion that your company, at your direction”, tried to “discredit people exercising their First Amendment rights to protest Facebook’s role in disseminating vile propaganda is frankly astonishing to me”.

I couldn’t agree more. Ms Sandberg says she didn’t know about the tactics being used by Definers Public Affairs. Mr Zuckerberg says that while he understands “DC type firms” might use such tactics, he doesn’t want them associated with Facebook and has cancelled its contract with Definers.

The irony of that statement could be cut with a knife. Silicon Valley companies are among the nation’s biggest corporate lobbyists. They’ve funded many academics doing research on topics of interest to them, and have made large donations to many powerful politicians…

There is a strange consistency in the cant coming from Zuckerberg and Sandberg as they try to respond to the NYT’s exhumation of their attempts to avoid responsibility for Facebook’s malignancy. It’s what PR flacks call “plausible deniability”. Time and again, the despicable or ethically dubious actions taken by Facebook apparently come as a complete surprise to the two people at the very top of the company. I’m afraid that particular cover story is beginning to look threadbare.

The benefits of having an honest business model

Interesting column by Farhad Manjoo:

Because Apple makes money by selling phones rather than advertising, it has been able to hold itself up as a guardian against a variety of digital plagues: a defender of your privacy, an agitator against misinformation and propaganda, and even a plausible warrior against tech addiction, a problem enabled by the very irresistibility of its own devices.

Though it is already more profitable than any of its rivals, Apple appears likely to emerge even stronger from tech’s season of crisis. In the long run, its growing strength could profoundly alter the industry.

For years, start-ups aiming for consumer audiences modeled themselves on Google and Facebook, offering innovations to the masses at rock-bottom prices, if not for free. But there are limits to the free-lunch model.

If Apple’s more deliberate business becomes the widely followed norm, we could see an industry that is more careful about tech’s dangers and excesses. It could also be one that is more exclusive, where the wealthy get the best innovations and the poor bear more of the risks.

Yep. The poor wind up as feedstock for surveillance capitalism. The moral of the story: honest business models — in which you pay for what you get — are better. Or, as Manjoo puts it:

The thrust of Apple’s message is simple: Paying directly for technology is the best way to ensure your digital safety, and every fresh danger uncovered online is another reason to invest in the Apple way of life.

The problem is that this particular ‘way of life’ is expensive.

So what’s the problem with Facebook?

Interesting NYT piece by Kevin Roose in which he points out that the key question about regulating Facebook is not whether lawmakers understand how it works, but whether they have the political will to regulate it. My hunch is that they don’t, but if they did, the first thing to do would be to fix on some clear ideas about what’s wrong with the company.

Here’s the list of possibilities cited by Roose:

  • Is it that Facebook is too cavalier about sharing user data with outside organizations?
  • Is it that Facebook collects too much data about users in the first place?
  • Is it that Facebook is promoting addictive messaging products to children?
  • Is it that Facebook’s news feed is polarizing society, pushing people to ideological fringes?
  • Is it that Facebook is too easy for political operatives to exploit, or that it does not do enough to keep false news and hate speech off users’ feeds?
  • Is it that Facebook is simply too big, or a monopoly that needs to be broken up?

How about: all of the above?

Facebook and the CCTV effect

This morning’s Observer column:

Jeremy Paxman, who once served as Newsnight’s answer to the pit-bull terrier, famously outlined his philosophy in interviewing prominent politicians thus: “Why is this lying bastard lying to me?” This was unduly prescriptive: not all of Paxman’s interviewees were outright liars; they were merely practitioners of the art of being “economical with the truth”, but it served as a useful heuristic for a busy interviewer.

Maybe the time has come to apply the same heuristic to Facebook’s public statements…

Read on

Why is WhatsApp founder quitting Facebook? You can guess the answer

This morning’s Observer column:

Early in 2009, two former Yahoo employees, Brian Acton and Jan Koum, sat down to try and create a smartphone messaging app. They had a few simple design principles. One was that it should be easy to use: no complicated log-in and authentication procedures; instead, each user would be identified by his or her mobile number. And second, the app should have an honest business model – no more pretending it’s free while covertly monetising users’ data: instead, users would pay $1 a year after a certain period. Searching for a name for their service, they came up with WhatsApp, a play on “What’s Up?”

Read on

Mad Men 2.0: The anthropology of the political

Gillian Tett, who is now the US Editor of the Financial Times, was trained as an anthropologist (which may be one reason why she spotted the fishy world of collateralised debt obligations and other dodgy derivatives before specialists who covered the banking sector). She had some interesting reflections in last weekend’s FT about data-driven campaigning in the 2016 Presidential election.

These were based on visits she had paid to the data mavens of the Trump and Clinton campaigns during the election, from which she came away with some revealing insights into how the two teams had taken completely different views of what constituted ‘politics’.

“Until now”, she writes,

“whenever pollsters have been asked to do research on politics, they have generally focussed on the things that modern western society labels ‘political’ — such as voter registration, policy surveys, party affiliation, voting records, and so on”. Broadly speaking, this is the way Clinton’s data team viewed the electorate. They had a vast database based on past voting patterns, voter registration and affiliations that was much more comprehensive than anything the Trump crowd had. “But”, says Tett, “this database was backwards-looking and limited to ‘politics’”. And Clinton’s data scientists thought that politics began and ended with ‘politics’.

The Trump crowd (which seems mainly to have been Cambridge Analytica, a strange outfit that is part hype machine and part applied psychometrics) took a completely different approach. As one of their executives told Tett,

“Enabling somebody and encouraging somebody to go out and vote on a wet Wednesday morning is no different in my mind to persuading and encouraging somebody to move from one toothpaste brand to another.” The task was, he said, “about understanding what message is relevant to that person at that time when they are in that particular mindset”.

This goes to the heart of what happened, in a way. It turned out that a sophisticated machine built for targeting finely calibrated commercial messages at particular consumers was also suitable for delivering calibrated political messages to targeted voters. And I suppose that shouldn’t have come as such a shock. After all, when TV first appeared, all of the expertise and resources of Madison Avenue’s “hidden persuaders” were brought to bear on political campaigning. So what we’re seeing now is just Mad Men 2.0.

How to be smart and clueless at the same time

Mark Zuckerberg’s ‘defence’ of Facebook’s role in the election of Trump provides a vivid demonstration of how someone can have a very high IQ and yet be completely clueless, as Zeynep Tufekci points out in a splendid NYT op-ed piece:

Mr. Zuckerberg’s preposterous defense of Facebook’s failure in the 2016 presidential campaign is a reminder of a structural asymmetry in American politics. It’s true that mainstream news outlets employ many liberals, and that this creates some systemic distortions in coverage (effects of trade policies on lower-income workers and the plight of rural America tend to be underreported, for example). But bias in the digital sphere is structurally different from that in mass media, and a lot more complicated than what programmers believe.

In a largely automated platform like Facebook, what matters most is not the political beliefs of the employees but the structures, algorithms and incentives they set up, as well as what oversight, if any, they employ to guard against deception, misinformation and illegitimate meddling. And the unfortunate truth is that by design, business model and algorithm, Facebook has made it easy for it to be weaponized to spread misinformation and fraudulent content. Sadly, this business model is also lucrative, especially during elections. Sheryl Sandberg, Facebook’s chief operating officer, called the 2016 election “a big deal in terms of ad spend” for the company, and it was. No wonder there has been increasing scrutiny of the platform.

Facebook meets irresistible force

Terrific blog post by Josh Marshall:

I believe what we’re seeing here is a convergence of two separate but highly charged news streams and political moments. On the one hand, you have the Russia probe, with all that is tied to that investigation. On another, you have the rising public backlash against Big Tech, the various threats it arguably poses and its outsized power in the American economy and American public life. A couple weeks ago, I wrote that after working with Google in various capacities for more than a decade I’d observed that Google is, institutionally, so accustomed to its customers actually being its products that when it gets into lines of business where its customers are really customers it really doesn’t know how to deal with them. There’s something comparable with Facebook.

Facebook is so accustomed to treating its ‘internal policies’ as though they were something like laws that they appear to have a sort of blind spot that prevents them from seeing how ridiculous their resistance sounds. To use the cliche, it feels like a real shark jumping moment. As someone recently observed, Facebook’s ‘internal policies’ are crafted to create the appearance of civic concerns for privacy, free speech, and other similar concerns. But they’re actually just a business model. Facebook’s ‘internal policies’ amount to a kind of Stepford Wives version of civic liberalism and speech and privacy rights, the outward form of the things preserved while the innards have been gutted and replaced by something entirely different, an aggressive and totalizing business model which in many ways turns these norms and values on their heads. More to the point, most people have the experience of Facebook’s ‘internal policies’ being meaningless in terms of protecting their speech or privacy or whatever as soon as they bump up against Facebook’s business model.

Spot on. Especially the Stepford Wives metaphor.