Gillian Tett, who is now the US Editor of the Financial Times, was trained as an anthropologist (which may be one reason why she spotted the fishy world of Collateralised Debt Obligations and other dodgy derivatives before specialists who covered the banking sector). She had some interesting reflections in last weekend’s FT about data-driven campaigning in the 2016 Presidential election.
These were based on visits she had paid to the data-mavens of the Trump and Clinton campaigns during the election, from which she came away with some revealing insights into how they had taken completely different views of what constituted ‘politics’.
“Until now”, she writes,
”whenever pollsters have been asked to do research on politics, they have generally focussed on the things that modern western society labels ‘political’ — such as voter registration, policy surveys, party affiliation, voting records, and so on”. Broadly speaking, this is the way Clinton’s data team viewed the electorate. They had a vast database based on past voting patterns, voter registration and affiliations that was much more comprehensive than anything the Trump crowd had. “But”, says Tett, “this database was backwards-looking and limited to ‘politics’”. And Clinton’s data scientists thought that politics began and ended with ‘politics’.
The Trump crowd (which seems mainly to have been Cambridge Analytica, a strange outfit that is part hype-machine and part applied-psychometrics), took a completely different approach. As one of their executives told Tett,
”Enabling somebody and encouraging somebody to go out and vote on a wet Wednesday morning is no different in my mind to persuading and encouraging somebody to move from one toothpaste brand to another.” The task was, he said, “about understanding what message is relevant to that person at that time when they are in that particular mindset”.
This goes to the heart of what happened, in a way. It turned out that a sophisticated machine built for targeting finely calibrated commercial messages at particular consumers was also suitable for delivering calibrated political messages to targeted voters. And I suppose that shouldn’t have come as such a shock. After all, when TV first appeared, all the expertise and resources of Madison Avenue’s “hidden persuaders” were brought to bear on political campaigning. So what we’re seeing now is just Mad Men 2.0.
Mark Zuckerberg’s ‘defence’ of Facebook’s role in the election of Trump provides a vivid demonstration of how someone can have a very high IQ and yet be completely clueless — as Zeynep Tufekci points out in a splendid NYT OpEd piece:
Mr. Zuckerberg’s preposterous defense of Facebook’s failure in the 2016 presidential campaign is a reminder of a structural asymmetry in American politics. It’s true that mainstream news outlets employ many liberals, and that this creates some systemic distortions in coverage (effects of trade policies on lower-income workers and the plight of rural America tend to be underreported, for example). But bias in the digital sphere is structurally different from that in mass media, and a lot more complicated than what programmers believe.
In a largely automated platform like Facebook, what matters most is not the political beliefs of the employees but the structures, algorithms and incentives they set up, as well as what oversight, if any, they employ to guard against deception, misinformation and illegitimate meddling. And the unfortunate truth is that by design, business model and algorithm, Facebook has made it easy for it to be weaponized to spread misinformation and fraudulent content. Sadly, this business model is also lucrative, especially during elections. Sheryl Sandberg, Facebook’s chief operating officer, called the 2016 election “a big deal in terms of ad spend” for the company, and it was. No wonder there has been increasing scrutiny of the platform.
Terrific blog post by Josh Marshall:
I believe what we’re seeing here is a convergence of two separate but highly charged news streams and political moments. On the one hand, you have the Russia probe, with all that is tied to that investigation. On another, you have the rising public backlash against Big Tech, the various threats it arguably poses and its outsized power in the American economy and American public life. A couple weeks ago, I wrote that after working with Google in various capacities for more than a decade I’d observed that Google is, institutionally, so accustomed to its customers actually being its products that when it gets into lines of business where its customers are really customers it really doesn’t know how to deal with them. There’s something comparable with Facebook.
Facebook is so accustomed to treating its ‘internal policies’ as though they were something like laws that it appears to have a sort of blind spot that prevents it from seeing how ridiculous its resistance sounds. To use the cliche, it feels like a real shark-jumping moment. As someone recently observed, Facebook’s ‘internal policies’ are crafted to create the appearance of civic concern for privacy, free speech and the like. But they’re actually just a business model. Facebook’s ‘internal policies’ amount to a kind of Stepford Wives version of civic liberalism and speech and privacy rights, the outward form of the things preserved while the innards have been gutted and replaced by something entirely different: an aggressive and totalizing business model which in many ways turns these norms and values on their heads. More to the point, most people find that Facebook’s ‘internal policies’ are meaningless in terms of protecting their speech or privacy or whatever as soon as they bump up against Facebook’s business model.
Spot on. Especially the Stepford Wives metaphor.
This morning’s Observer column:
Next year, 25 May looks like being a significant date. That’s because it’s the day that the European Union’s general data protection regulation (GDPR) comes into force. This may not seem like a big deal to you, but it’s a date that is already keeping many corporate executives awake at night. And for those who are still sleeping soundly, perhaps it would be worth checking that their organisations are ready for what’s coming down the line.
First things first. Unlike much of the legislation that emerges from Brussels, the GDPR is a regulation rather than a directive. This means that it becomes law in all EU countries at the same time; a directive, in contrast, allows each country to decide how its requirements are to be incorporated in national laws…
This morning’s Observer column:
When Edward Snowden first revealed the extent of government surveillance of our online lives, the then foreign secretary, William (now Lord) Hague, immediately trotted out the old chestnut: “If you have nothing to hide, then you have nothing to fear.” This prompted replies along the lines of: “Well then, foreign secretary, can we have that photograph of you shaving while naked?”, which made us laugh, perhaps, but rather diverted us from pondering the absurdity of Hague’s remark. Most people have nothing to hide, but that doesn’t give the state the right to see them as fair game for intrusive surveillance.
During the hoo-ha, one of the spooks with whom I discussed Snowden’s revelations waxed indignant about our coverage of the story. What bugged him (pardon the pun) was the unfairness of having state agencies pilloried, while firms such as Google and Facebook, which, in his opinion, conducted much more intensive surveillance than the NSA or GCHQ, got off scot-free. His argument was that he and his colleagues were at least subject to some degree of democratic oversight, but the companies, whose business model is essentially “surveillance capitalism”, were entirely unregulated.
He was right…
Here’s a telling excerpt from a fine piece about Facebook by Farhad Manjoo:
The people who work on News Feed aren’t making decisions that turn on fuzzy human ideas like ethics, judgment, intuition or seniority. They are concerned only with quantifiable outcomes about people’s actions on the site. That data, at Facebook, is the only real truth. And it is a particular kind of truth: The News Feed team’s ultimate mission is to figure out what users want — what they find “meaningful,” to use Cox and Zuckerberg’s preferred term — and to give them more of that.
This ideal runs so deep that the people who make News Feed often have to put aside their own notions of what’s best. “One of the things we’ve all learned over the years is that our intuition can be wrong a fair amount of the time,” John Hegeman, the vice president of product management and a News Feed team member, told me. “There are things you don’t expect will happen. And we learn a lot from that process: Why didn’t that happen, and what might that mean?” But it is precisely this ideal that conflicts with attempts to wrangle the feed in the way press critics have called for. The whole purpose of editorial guidelines and ethics is often to suppress individual instincts in favor of some larger social goal. Facebook finds it very hard to suppress anything that its users’ actions say they want. In some cases, it has been easier for the company to seek out evidence that, in fact, users don’t want these things at all.
Facebook’s two-year-long battle against “clickbait” is a telling example. Early this decade, the internet’s headline writers discovered the power of stories that trick you into clicking on them, like those that teasingly withhold information from their headlines: “Dustin Hoffman Breaks Down Crying Explaining Something That Every Woman Sadly Already Experienced.” By the fall of 2013, clickbait had overrun News Feed. Upworthy, a progressive activism site co-founded by Pariser, the author of “The Filter Bubble,” that relied heavily on teasing headlines, was attracting 90 million readers a month to its feel-good viral posts.
If a human editor ran News Feed, she would look at the clickbait scourge and make simple, intuitive fixes: Turn down the Upworthy knob. But Facebook approaches the feed as an engineering project rather than an editorial one. When it makes alterations in the code that powers News Feed, it’s often only because it has found some clear signal in its data that users are demanding the change. In this sense, clickbait was a riddle. In surveys, people kept telling Facebook that they hated teasing headlines. But if that was true, why were they clicking on them? Was there something Facebook’s algorithm was missing, some signal that would show that despite the clicks, clickbait was really sickening users?
If you want to understand why fake news will be a hard problem to crack, this is a good place to start.