LATER: Nick Carr has a perceptive review of the book in the LA Review of Books. John Thornhill also had a good long review in last Saturday’s Financial Times, sadly behind a paywall.
From an interesting piece by Max Fisher:
We think of any danger as coming from misuse — scammers, hackers, state-sponsored misinformation — but we’re starting to understand the risks that come from these platforms working exactly as designed. Facebook, YouTube and others use algorithms to identify and promote content that will keep us engaged, which turns out to amplify some of our worst impulses.
Even after reporting with Amanda Taub on algorithm-driven violence in Germany and Sri Lanka, I didn’t quite appreciate this until I turned on Facebook push alerts this summer. Right away, virtually every gadget I owned started blowing up with multiple daily alerts urging me to check in on my ex, even if she hadn’t posted anything. I’d stayed away from her page for months specifically to avoid training Facebook to show me her posts. Yet somehow the algorithm had correctly identified this as the thing likeliest to make me click, then followed me across continents to ensure that I did.
It made me think of the old “Terminator” movies, except instead of a killer robot sent to find Sarah Connor, it’s a sophisticated set of programs ruthlessly pursuing our attention. And exploiting our most human frailties to do it.
From today’s New York Times:
SAN FRANCISCO — On Monday, a search on Instagram, the photo-sharing site owned by Facebook, produced a torrent of anti-Semitic images and videos uploaded in the wake of Saturday’s shooting at a Pittsburgh synagogue.
A search for the word “Jews” displayed 11,696 posts with the hashtag “#jewsdid911,” claiming that Jews had orchestrated the Sept. 11 terror attacks. Other hashtags on Instagram referenced Nazi ideology, including the number 88, an abbreviation used for the Nazi salute “Heil Hitler.”
The Instagram posts demonstrated a stark reality. Over the last 10 years, Silicon Valley’s social media companies have expanded their reach and influence to the furthest corners of the world. But it has become glaringly apparent that the companies never quite understood the negative consequences of that influence nor what to do about it — and that they cannot put the genie back in the bottle.
“Social media is emboldening people to cross the line and push the envelope on what they are willing to say to provoke and to incite,” said Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism. “The problem is clearly expanding.”
When will this penny drop, one wonders. These companies can’t fix this problem, because their business models depend on allowing people to do what they like — and then reacting, ineffectually, after the fact.
From today’s New York Times:
SAN FRANCISCO — On the same day Facebook announced that it had carried out its biggest purge yet of American accounts peddling disinformation, the company quietly made another revelation: It had removed 66 accounts, pages and apps linked to Russian firms that build facial recognition software for the Russian government.
Facebook said Thursday that it had removed any accounts associated with SocialDataHub and its sister firm, Fubutech, because the companies violated its policies by scraping data from the social network.
“Facebook has reason to believe your work for the government has included matching photos from individuals’ personal social media accounts in order to identify them,” the company said in a cease-and-desist letter to SocialDataHub that was dated Tuesday and viewed by The New York Times.
This — from Bloomberg — is interesting:
Facebook Inc. hasn’t been able to do anything right — except when it comes to making money, where it could do nothing wrong.
That changed on Wednesday, when the company posted disappointing growth in revenue, profits and the number of visitors to its digital hangouts. Results are still stellar by the standards of most companies, but investors in fast-growing technology companies react badly when their high hopes aren’t met, as Netflix recently found out. Facebook hit a record stock price on Wednesday, but after the release of its financial results, its shares dropped a stunning 24 percent in after-hours trading.
And no wonder. The company’s financial results, and especially its glimpse into a more pessimistic financial future, were an utter disaster for investors. If what the company predicts comes to pass, the internet’s best combination of fast revenue growth and plump profit margins is dead. All at once, it seemed, reality finally caught up to Facebook.
Well, among other things (including plans for its very own earth-orbiting satellites), those 20,000+ content ‘moderators’ have to be paid for somehow.
Roger McNamee, an early Facebook investor who has been sounding the alarm about the social media giant since the run-up to the 2016 presidential election, is not letting up.
In an interview with the Mercury News, McNamee talked about why he thinks Facebook should be reined in — and possibly broken up.
“It is no exaggeration to say that the AT&T consent decree planted the seed for Silicon Valley,” McNamee wrote. “One of the many fundamental patents in AT&T’s huge portfolio was the transistor. The combination of freely licensable patents and restrictions on AT&T’s ability to enter new markets enabled entrepreneurs to create today’s semiconductor, computer, data communications, mobile technology and software industries, among others.”
McNamee told this news organization that the changes Facebook is making now don’t go far enough, and that “nobody can make them” enact change that would truly address the myriad problems with the platform, including possible manipulation of Facebook’s massive number of users.
“There are 2.2 billion people on Facebook each with their own ‘Truman Show,’ ” McNamee said. “Everybody has their own set of facts.”
In addition, he takes issue with the attitudes of Facebook’s top executives.
Facebook is “almost the same size as Christianity,” McNamee said. “When you are presiding over the largest interconnected organization in the world, that gets to your head after a while.”
Zuckerberg for Pope?
This morning’s Observer column:
One of the few coherent messages to emerge from the US Senate’s bumbling interrogation of Mark Zuckerberg was a touching desire that Facebook’s user agreement should be comprehensible to humans. Or, as Republican Senator John Kennedy of Louisiana put it: “Here’s what everyone’s been trying to tell you today – and I say it gently – your user agreement sucks. The purpose of a user agreement is to cover Facebook’s rear end, not inform users of their rights.”
“I would imagine probably most people do not read the whole thing,” Zuckerberg replied. “But everyone has the opportunity to and consents to it.” Senator Kennedy was unimpressed. “I’m going to suggest you go home and rewrite it,” he replied, “and tell your $1,200-an-hour lawyer you want it written in English, not Swahili, so the average American user can understand.”
Since Zuckerberg’s staff are currently so overworked, the Observer is proud to announce that it has drafted a new, human-readable user agreement that honours Zuckerberg’s new commitment to “transparency”. Here it is…
James Fallows quotes from a fascinating email exchange he had with his friend Michael Jones, who used to work at Google (he was the company’s Chief Technology Advocate and later a key figure in the evolution of Google Earth):
So, how might FB fix itself? What might government regulators seek? What could make FaceBook likable? It is very simple. There are just two choices:
a. FB stays in its send-your-PII¹-to-their-customers business, and then must be regulated and its customers validated precisely as Acxiom and Experian are in the credit world, or doctors and hospitals in the HIPAA healthcare world; or,
b. FB joins Google and ALL OTHER WEB ADVERTISERS in keeping PII private, never letting it out, and anonymously connecting advertisers with its users for their mutual benefit.
I don’t get a vote, but I like (b) and see that as the right path for civil society. There is no way that choice (a) is not a loathsome and destructive force in all things—in my personal opinion it seems that making people’s pillow-talk into a marketing weapon is indeed a form of evil.
This is why I never use Facebook; I know how the sausage is made.
¹ PII = Personally Identifiable Information