Understanding platforms

From an interesting piece by Max Fisher:

We think of any danger as coming from misuse — scammers, hackers, state-sponsored misinformation — but we’re starting to understand the risks that come from these platforms working exactly as designed. Facebook, YouTube and others use algorithms to identify and promote content that will keep us engaged, which turns out to amplify some of our worst impulses.

Even after reporting with Amanda Taub on algorithm-driven violence in Germany and Sri Lanka, I didn’t quite appreciate this until I turned on Facebook push alerts this summer. Right away, virtually every gadget I owned started blowing up with multiple daily alerts urging me to check in on my ex, even if she hadn’t posted anything. I’d stayed away from her page for months specifically to avoid training Facebook to show me her posts. Yet somehow the algorithm had correctly identified this as the thing likeliest to make me click, then followed me across continents to ensure that I did.

It made me think of the old “Terminator” movies, except instead of a killer robot sent to find Sarah Connor, it’s a sophisticated set of programs ruthlessly pursuing our attention. And exploiting our most human frailties to do it.

Anti-Semitism continues to thrive online

From today’s New York Times:

SAN FRANCISCO — On Monday, a search on Instagram, the photo-sharing site owned by Facebook, produced a torrent of anti-Semitic images and videos uploaded in the wake of Saturday’s shooting at a Pittsburgh synagogue.

A search for the word “Jews” displayed 11,696 posts with the hashtag “#jewsdid911,” claiming that Jews had orchestrated the Sept. 11 terror attacks. Other hashtags on Instagram referenced Nazi ideology, including the number 88, an abbreviation used for the Nazi salute “Heil Hitler.”

The Instagram posts demonstrated a stark reality. Over the last 10 years, Silicon Valley’s social media companies have expanded their reach and influence to the furthest corners of the world. But it has become glaringly apparent that the companies never quite understood the negative consequences of that influence nor what to do about it — and that they cannot put the genie back in the bottle.

“Social media is emboldening people to cross the line and push the envelope on what they are willing to say to provoke and to incite,” said Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism. “The problem is clearly expanding.”

When will this penny drop, one wonders. These companies can’t fix this problem, because their business models depend on allowing people to do what they like — and then reacting, ineffectually, after the fact.

Facebook: another routine scandal

From today’s New York Times:

SAN FRANCISCO — On the same day Facebook announced that it had carried out its biggest purge yet of American accounts peddling disinformation, the company quietly made another revelation: It had removed 66 accounts, pages and apps linked to Russian firms that build facial recognition software for the Russian government.

Facebook said Thursday that it had removed any accounts associated with SocialDataHub and its sister firm, Fubutech, because the companies violated its policies by scraping data from the social network.

“Facebook has reason to believe your work for the government has included matching photos from individuals’ personal social media accounts in order to identify them,” the company said in a cease-and-desist letter to SocialDataHub that was dated Tuesday and viewed by The New York Times.

Could reality be catching up with Facebook?

This — from Bloomberg — is interesting:

Facebook Inc. hasn’t been able to do anything right — except when it comes to making money, where it could do nothing wrong.

That changed on Wednesday, when the company posted disappointing growth in revenue, profits and the number of visitors to its digital hangouts. Results are still stellar by the standards of most companies, but investors in fast-growing technology companies react badly when their high hopes aren’t met, as Netflix recently found out. Facebook hit a record stock price on Wednesday, but after the release of its financial results, its shares dropped a stunning 24 percent in after-hours trading.

And no wonder. The company’s financial results, and especially its glimpse into a more pessimistic financial future, were an utter disaster for investors. If what the company predicts comes to pass, the internet’s best combination of fast revenue growth and plump profit margins is dead. All at once, it seemed, reality finally caught up to Facebook.

Well, among other things (including plans for its very own earth-orbiting satellites), those 20,000+ content ‘moderators’ have to be paid for somehow.

Zuckerberg for Pope?

Roger McNamee, an early Facebook investor who has been sounding the alarm about the social media giant since the run-up to the 2016 presidential election, is not letting up.

In an interview with the Mercury News, McNamee talked about why he thinks Facebook should be reined in — and possibly broken up.

“It is no exaggeration to say that the AT&T consent decree planted the seed for Silicon Valley,” McNamee wrote. “One of the many fundamental patents in AT&T’s huge portfolio was the transistor. The combination of freely licensable patents and restrictions on AT&T’s ability to enter new markets enabled entrepreneurs to create today’s semiconductor, computer, data communications, mobile technology and software industries, among others.”

McNamee told this news organization that the changes Facebook is making now don’t go far enough, and that “nobody can make them” enact change that would truly address the myriad problems with the platform, including possible manipulation of Facebook’s massive number of users.

“There are 2.2 billion people on Facebook each with their own ‘Truman Show,’ ” McNamee said. “Everybody has their own set of facts.”

In addition, he takes issue with the attitudes of Facebook’s top executives.

Facebook is “almost the same size as Christianity,” McNamee said. “When you are presiding over the largest interconnected organization in the world, that gets to your head after a while.”

Zuckerberg for Pope?

Facebook’s Terms & Conditions in human-readable form

This morning’s Observer column:

One of the few coherent messages to emerge from the US Senate’s bumbling interrogation of Mark Zuckerberg was a touching desire that Facebook’s user agreement should be comprehensible to humans. Or, as Republican Senator John Kennedy of Louisiana put it: “Here’s what everyone’s been trying to tell you today – and I say it gently – your user agreement sucks. The purpose of a user agreement is to cover Facebook’s rear end, not inform users of their rights.”

“I would imagine probably most people do not read the whole thing,” Zuckerberg replied. “But everyone has the opportunity to and consents to it.” Senator Kennedy was unimpressed. “I’m going to suggest you go home and rewrite it,” he replied, “and tell your $1,200-an-hour lawyer you want it written in English, not Swahili, so the average American user can understand.”

Since Zuckerberg’s staff are currently so overworked, the Observer is proud to announce that it has drafted a new, human-readable user agreement that honours Zuckerberg’s new commitment to “transparency”. Here it is…

Read on

Fixing Facebook: the only two options by a guy who knows how the sausage is made

James Fallows quotes from a fascinating email exchange he had with his friend Michael Jones, who used to work at Google (he was the company’s Chief Technology Advocate and later a key figure in the evolution of Google Earth):

So, how might FB fix itself? What might government regulators seek? What could make FaceBook likable? It is very simple. There are just two choices:

a. FB stays in its send-your-PII1-to-their-customers business, and then must be regulated and the customers validated precisely as Acxiom and Experian in the credit world or doctors and hospitals in the HIPAA healthcare world; or,

b. FB joins Google and ALL OTHER WEB ADVERTISERS in keeping PII private, never letting it out, and anonymously connecting advertisers with its users for their mutual benefit.

I don’t get a vote, but I like (b) and see that as the right path for civil society. There is no way that choice (a) is not a loathsome and destructive force in all things—in my personal opinion it seems that making people’s pillow-talk into a marketing weapon is indeed a form of evil.

This is why I never use Facebook; I know how the sausage is made.


  1. PII = Personally Identifiable Information 

Facebook is just the tip of the iceberg

This morning’s Observer column:

If a picture is worth a thousand words, then a good metaphor must be worth a million. In an insightful blog post published on 23 March, Doc Searls, one of the elder statesmen of the web, managed to get both for the price of one. His post was headed by one of those illustrations of an iceberg showing that only the tip is the visible part, while the great bulk of the object lies underwater. In this case, the tip was adorned with the Facebook logo while the submerged mass represented “Every other website making money from tracking-based advertising”. The moral: “Facebook’s Cambridge Analytica problems are nothing compared to what’s coming for all of online publishing.”

The proximate cause of Searls’s essay was encountering a New York Times op-ed piece entitled “Facebook’s Surveillance Machine” by Zeynep Tufekci. It wasn’t the (unexceptional) content of the article that interested Searls, however, but what his ad-blocking software told him about the Times page in which the essay appeared. The software had detected no fewer than 13 hidden trackers on the page. (I’ve just checked and my Ghostery plug-in has detected 19.)

Read on

“The business model of the Internet is surveillance” contd.

This useful graphic comes from a wonderful post by the redoubtable Doc Searls about the ultimate unsustainability of the business model currently dominating the Web. He starts with a quote from “Facebook’s Surveillance Machine” — a NYT OpEd column by the equally-redoubtable Zeynep Tufekci:

“Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.”

Doc then points out the irony of his Privacy Badger software detecting 13 hidden trackers on the NYT page on which Zeynep’s column appears. (I’ve just checked and Ghostery currently detects 19 trackers on it.)

The point, Doc goes on to say, is that the Times is just doing what every other publication that lives off adtech does: tracking-based advertising. “These publications”,

don’t just open the kimonos of their readers. They bring people’s bare digital necks to vampires ravenous for the blood of personal data, all for the purpose of returning “interest-based” advertising to those same people.

With no control by readers (beyond tracking protection which relatively few know how to use, and for which there is no one approach or experience), and damn little care or control by the publishers who bare those readers’ necks, who knows what the hell actually happens to the data? No one entity, that’s for sure.

Doc points out that at reputable outfits like the New York Times, writers like Zeynep have nothing to do with this endemic tracking. In such publications there probably is a functioning “Chinese Wall” between editorial and advertising. Just to drive the point home he looks at Sue Halpern’s piece in the sainted New Yorker on “Cambridge Analytica, Facebook and the Revelations of Open Secrets” and his RedMorph software finds 16 third-party trackers. (On my browser, Ghostery found 18.) The moral is, in a way, obvious: it’s a confirmation of Bruce Schneier’s original observation that “surveillance is the business model of the Internet”. Being a pedant, I would have said “of the Web”, but since many people can’t distinguish between the two, we’ll let Bruce’s formulation stand.
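How blockers arrive at a figure like “13 trackers” can be roughly approximated. This is only a minimal sketch, not how Ghostery, Privacy Badger or RedMorph actually work (they match resource requests against curated blocklists, not just script tags): it simply counts the distinct third-party hosts from which a page’s HTML pulls in scripts.

```python
import re
from urllib.parse import urlparse

def count_third_party_hosts(html, page_url):
    """Count distinct third-party hosts referenced by <script src=...> tags.

    A crude stand-in for tracker counting: any script loaded from a host
    other than the page's own is treated as a potential tracker.
    """
    page_host = urlparse(page_url).netloc
    hosts = set()
    for src in re.findall(r'<script[^>]+src="([^"]+)"', html):
        host = urlparse(src).netloc
        if host and host != page_host:
            hosts.add(host)
    return len(hosts)

# Hypothetical page HTML for illustration only.
html = '''
<script src="https://cdn.example-analytics.com/t.js"></script>
<script src="https://www.nytimes.com/app.js"></script>
<script src="https://ads.example-tracker.net/pixel.js"></script>
'''
print(count_third_party_hosts(html, "https://www.nytimes.com/article"))  # → 2
```

Real blockers would also catch image beacons, XHR calls and iframes, which is why their counts run higher (and why two tools report different numbers for the same page).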