“Don’t be evil” was the mantra of the co-founders of Google, Sergey Brin and Larry Page, the graduate students who, in the late 1990s, had invented a groundbreaking way of searching the web. At the time, one of the things the duo believed to be evil was advertising. There’s no reason to doubt their initial sincerity on this matter, but when the slogan was included in the prospectus for their company’s flotation in 2004 one began to wonder what they were smoking. Were they really naive enough to believe that one could run a public company on a policy of ethical purity?
The problem was that purity requires a business model to support it and in 2000 the venture capitalists who had invested in Google pointed out to the boys that they didn’t have one. So they invented a model that involved harvesting users’ data to enable targeted advertising. And in the four years between that capitulation to reality and the flotation, Google’s revenues increased by nearly 3,590%. That kind of money talks.
Rana Foroohar has adopted the Google mantra as the title for her masterful critique of the tech giants that now dominate our world…
Proud announcement from Facebook:
Today, we removed four separate networks of accounts, Pages and Groups for engaging in coordinated inauthentic behavior on Facebook and Instagram. Three of them originated in Iran and one in Russia, and they targeted a number of different regions of the world: the US, North Africa and Latin America. All of these operations created networks of accounts to mislead others about who they were and what they were doing. We have shared information about our findings with law enforcement, policymakers and industry partners.
We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people.
To which Charles Arthur comments: “I thought manipulating people was basically the point.” Which it is. It’s just that apparently some kinds of manipulation are verboten. And of course, as Charles says, this is just the stuff they’re catching.
You have to hand it to Elizabeth Warren sometimes. Annoyed (as I am) about Facebook’s insistence on continuing to allow untruthful political ads to run on the platform, Warren placed an untruthful ad herself to see what happened (and, clearly, to annoy Mark Zuckerberg). Here’s the gist of the NYT report of the jape:
The Democratic presidential candidate bought a political ad on the social network this past week that purposefully includes false claims about Facebook’s chief executive, Mark Zuckerberg, and President Trump to goad the social network to remove misinformation in political ads ahead of the 2020 presidential election.
The ad, placed widely on Facebook beginning on Thursday, starts with Ms. Warren announcing “Breaking news.” The ad then goes on to say that Facebook and Mr. Zuckerberg are backing the re-election of Trump. Neither Mr. Zuckerberg nor the Silicon Valley company has announced their support of a candidate.
“You’re probably shocked, and you might be thinking, ‘how could this possibly be true?’ Well, it’s not,” Ms. Warren said in the ad.
In a series of tweets on Saturday, Ms. Warren, a senator from Massachusetts, said she had deliberately made an ad with lies because Facebook had previously allowed politicians to place ads with false claims. “We decided to see just how far it goes,” Ms. Warren wrote, calling Facebook a “disinformation-for-profit machine” and adding that Mr. Zuckerberg should be held accountable.
A new report from the Computational Propaganda group at the Oxford Internet Institute shows that states are increasingly weaponising social media for information suppression, disinformation and political manipulation. The researchers found “evidence of organized social media manipulation campaigns which have taken place in 70 countries, up from 48 countries in 2018 and 28 countries in 2017. In each country, there is at least one political party or government agency using social media to shape public attitudes domestically.”
Social media has been exploited by authoritarian regimes in 26 countries to suppress basic human rights, discredit political opponents and drown out dissenting opinions.
A handful of sophisticated state actors use computational propaganda for foreign influence operations. Facebook and Twitter attributed foreign influence operations to seven countries (China, India, Iran, Pakistan, Russia, Saudi Arabia, and Venezuela), which have used these platforms to influence global audiences.
China has become a major player in the global disinformation order. Until the 2019 protests in Hong Kong, most evidence of Chinese computational propaganda occurred on domestic platforms such as Weibo, WeChat, and QQ. But China’s new-found interest in aggressively using Facebook, Twitter, and YouTube should raise concerns for democracies.
Facebook remains the platform of choice for social media manipulation. In 56 countries, the researchers found evidence of formally organized computational propaganda campaigns on Facebook. Interestingly, the exploitation of Facebook’s targeted advertising machine seems to be on the decline. In the case studies the researchers examined, advertising was not central to the spread of disinformation. Instead the campaigns created memes, videos or other kinds of content tailored to exploit platforms’ algorithms and their amplifying effects — effectively getting virality for free.
There’s a good NYT report summarising the researchers’ findings.
From The Register:
Organisations that deploy Facebook’s ubiquitous “Like” button on their websites risk falling foul of the General Data Protection Regulation following a landmark ruling by the European Court of Justice.
The EU’s highest court has decided that website owners can be held liable for data collection when using the so-called “social sharing” widgets.
The ruling (PDF) states that employing such widgets would make the organisation a joint data controller, along with Facebook – and judging by its recent record, you don’t want to be anywhere near Zuckerberg’s antisocial network when privacy regulators come a-calling.
This morning’s Observer column:
Now that Wimbledon is over, if you’re looking for something interesting to watch, can I suggest heading over to the video of last week’s interrogation by the US Senate committee on banking, housing and urban affairs of Facebook’s David Marcus? Given the astonishing incompetence of the Senate’s inquisition of Marcus’s boss, Mark Zuckerberg, some time ago, my hopes for last week’s hearing were not high. How wrong can you be?
But first a bit of background might be helpful. Facebook, currently the tech world’s most toxic company, has decided to get into the currency business. It proposes to launch a new global cryptocurrency called Libra. Marcus is the guy leading this project. He formerly worked at PayPal and then moved to Facebook, where he ran the company’s Messenger service.
At first sight, Marcus appears to be a Smooth Man from central casting. At second sight, he evokes the “uncanny valley”, defined by Wikipedia as “a hypothesised relationship between the degree of an object’s resemblance to a human being and the emotional response to such an object”. In that respect, he is not unlike his boss…
Today’s Observer comment piece
If you want a measure of the problem society will have in controlling the tech giants, then ponder this: when it became clear that the US Federal Trade Commission was about to impose a fine of $5bn (£4bn) on Facebook for violating a decree governing privacy breaches, the company’s share price went up!
This is a landmark moment. It’s the biggest ever fine imposed by the FTC, the body set up to police American capitalism. And $5bn is a lot of money in anybody’s language. Anybody’s but Facebook’s. It represents just a month of revenues and the stock market knew it. Facebook’s capitalisation went up $6bn with the news. This was a fine that actually increased Mark Zuckerberg’s personal wealth…
Facebook built internal tools to manage its damaged reputation when it should’ve been managing bigger issues. A Bloomberg report found that starting in 2016, Facebook developed and deployed two internal tools, dubbed Stormchaser and Night’s Watch, to track and combat misinformation about the company and its CEO Mark Zuckerberg. The tools also measured shifting public sentiment towards Facebook and its leaders.
Why it’s a big deal: Facebook was devoting its resources to managing its own reputation at a time when fake news and political manipulation were propagating on its platform.
What happens now: Facebook told Bloomberg it’s stopped using its Stormchaser tool, but the technology still exists. [Mark Bergen and Kurt Wagner / Bloomberg]
From Technology Review:
The People’s Bank of China is paying close attention to Libra, the digital currency Facebook has created. And it may inspire the bank to speed up its own project to develop a digital currency.
The news: The PBOC is paying “high attention” to Libra, according to Wang Xin, director of the bank’s research bureau. Speaking at an academic conference at Peking University, Wang expressed concern over how Libra might affect the world’s financial system if it takes off, according to the South China Morning Post: “Would it be able to function like money, and accordingly, have a large influence on monetary policy, financial stability, and the international monetary system?”
One thing China wants to know is what role the US dollar will play in the basket of fiat currencies that will supposedly back Libra coins. If it is most closely associated with the dollar, Wang said, “there would be in essence one boss, that is the US dollar and the United States. If so, it would bring a series of economic, financial, and even international political consequences.”
Sarah Roberts has just published Behind the Screen: Content Moderation in the Shadows of Social Media, a major study of the impact of content ‘moderation’ on those who clean up social media so that the rest of us are not traumatised or scandalised by what appears in our feeds. Isaac Chotiner has an interesting interview with her in the New Yorker which includes this brief exchange:
You also go to the Philippines in this book and you talk to people from other countries, in Mexico, for example. What are the consequences of outsourcing these jobs in terms of the quality of the work being done? And I don’t ask that to imply that people abroad can’t do a job as well.
I think there is a precedent for outsourcing this type of service work, and we see that in the call-center industry. The same kinds of problems that are present in that work are present in this particular context. So that would be things like the dissonance and distance culturally and linguistically, contextually, and politically, for a group of people that are being asked to adjudicate and make decisions about material that emanates from one place in the world and is destined for another, that may have absolutely nothing to do with their day-to-day life.
I think a second thing is that the marketplace has chased a globalization solution for the same reasons it has in other industries, which are the issues of: Where can we get the cheapest labor? What countries are lax in terms of labor protections? Where is organizing low? Where is there a huge pool of people for whom this job might be appealing because it’s better than the other jobs on offer? It’s not a simple case of everyone in the Philippines who does this work is exploited, and I was really trying hard not to make that claim in the book. But, at the same time, the United States sends the work to the Philippines for a reason. It sends the work there because Filipino people have a long-standing relationship, so to speak, with the United States, that means that they have a better facility to understand the American context. That’s actually been in the favor of most people in the Philippines.
It’s worrisome to see those kinds of colonial traditions and practices picked up again, especially in this digital marketplace, this marketplace of the mind that was supposed to be deliverance from so many of the difficult working conditions of the twentieth century. So I think that’s the big thing about the way that this plays out on the global stage. The companies have a problem that they don’t have enough people to do the work. And so they are pulling out all the stops in a way to find people to do the work, but it’s still not nearly enough.
What could be done to make the lives of these workers better, given that this is a job that needs to be done? And it needs to be done by smart people doing it well, who need to be very well-trained.
This is a question that I’ve often posed to the workers themselves because I certainly am not possessed of the answers on my own. They want better pay. And I think we can read that in a lot of ways: they want better pay, they want to be respected. The nature of the way the work has been designed has been for the work to be secret. In many cases, their N.D.A. precludes them from even talking about the work. And the industry itself formulated the job as a source of shame in that sense, an industry source of shame. They were not eager to tout the efforts of these people, and so instead they hid them in the shadows. And, if nothing else, that was a business decision and a value judgment that could have gone another way. I think there’s still a chance that we could understand the work of these people in a different way and value it differently, collectively. And we could ask that the companies do that as well.
Good interview. Splendid book.