Amazon Prime cat

Whenever a parcel arrives from Amazon, our cats insist on exploring it once it’s been opened. Here’s the latest explorer, testing it as a possible new residence.

Understanding platforms

From an interesting piece by Max Fisher:

We think of any danger as coming from misuse — scammers, hackers, state-sponsored misinformation — but we’re starting to understand the risks that come from these platforms working exactly as designed. Facebook, YouTube and others use algorithms to identify and promote content that will keep us engaged, which turns out to amplify some of our worst impulses.

Even after reporting with Amanda Taub on algorithm-driven violence in Germany and Sri Lanka, I didn’t quite appreciate this until I turned on Facebook push alerts this summer. Right away, virtually every gadget I owned started blowing up with multiple daily alerts urging me to check in on my ex, even if she hadn’t posted anything. I’d stayed away from her page for months specifically to avoid training Facebook to show me her posts. Yet somehow the algorithm had correctly identified this as the thing likeliest to make me click, then followed me across continents to ensure that I did.

It made me think of the old “Terminator” movies, except instead of a killer robot sent to find Sarah Connor, it’s a sophisticated set of programs ruthlessly pursuing our attention. And exploiting our most human frailties to do it.

Blind faith in Moore’s Law sometimes leads to dead ends

John Thornhill had an interesting column in the Financial Times the other day (sadly, behind a paywall) about Moore’s Law and the struggles of the tech industry to overcome the physical barriers to its continuance.

This led me to brood on one of the under-discussed aspects of the Law, namely the way it has enabled the AI crowd to dodge really awkward questions for years. It works like this: if the standard-issue AI of a particular moment proves unable to perform a particular task or solve a particular problem, then the strategy is to say, confidently: “Yes, but Moore’s Law will eventually provide the computing power to crack it.”

And sometimes that’s true. The difficulty, though, is that this assumes all problems are tractable, i.e. ultimately computable. But some tasks and problems are almost certainly not computable. And so our psychic addiction to Moore’s Law sometimes leads us to pursue avenues which are, ultimately, dead ends. Few people dare admit that, especially when hype-storms are blowing furiously.
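To make “not computable” concrete, the canonical example is Turing’s halting problem. Here’s a minimal, purely illustrative sketch of the diagonalisation argument in Python; the halts() function is hypothetical, which is exactly the point:

```python
# A purely illustrative sketch of Turing's halting-problem argument.
# halts() is hypothetical: the argument shows that no correct, total
# version of it can exist, however much power Moore's Law delivers.
def halts(program, data):
    """Pretend decider: returns True iff program(data) terminates."""
    ...  # assumed to exist, for the sake of contradiction

def troublemaker(program):
    # Do the opposite of whatever halts() predicts about program(program).
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return "halted"      # predicted to loop, so halt immediately

# troublemaker(troublemaker) halts if and only if halts() says it doesn't:
# a contradiction, so no such halts() can be written at any clock speed.
```

No faster chip changes that conclusion; it’s a limit on what computation can do, not on how quickly it does it.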

How to get into Harvard

It’s a simple formula (ALDC), really, as the New York Times explains:

Harvard gives advantages to recruited athletes (A’s); legacies (L’s), or the children of Harvard graduates; applicants on the dean’s or director’s interest list (D’s), which often include the children of very wealthy donors and prominent people, mostly white; and the children (C’s) of faculty and staff. ALDCs make up only about 5 percent of applicants but 30 percent of admitted students.

While being an A.L.D.C. helps — their acceptance rate is about 45 percent, compared with 4.5 to 5 percent for the rest of the pool — it is no guarantee. (One of those rejected despite being a legacy was the judge in the federal case, Allison D. Burroughs. She went to Middlebury College instead.)

Harvard’s witnesses said it was important to preserve the legacy advantage because it encourages alumni to give their time, expertise and money to the university.

Which is how you get to have a hedge fund with a nice university attached.
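Incidentally, the NYT’s percentages hang together arithmetically. A quick back-of-envelope check (the applicant pool size below is an arbitrary placeholder; the rates come from the quoted figures):

```python
# Back-of-envelope check of the NYT figures. The applicant total is an
# arbitrary round number; the rates come from the quoted percentages.
applicants = 40_000                         # placeholder pool size
aldc = 0.05 * applicants                    # ALDCs: about 5% of applicants
aldc_admits = 0.45 * aldc                   # ~45% acceptance rate for ALDCs
other_admits = 0.047 * (applicants - aldc)  # ~4.5-5% for everyone else

share = aldc_admits / (aldc_admits + other_admits)
print(f"ALDC share of admitted students: {share:.0%}")
# -> roughly a third, consistent with the reported "about 30 percent"
```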

Machine translation: some way to go yet

Lovely example from Mark Liberman:

I tried a chapter-opening from a roman policier that I was reading (Yasmina Khadra, Le Dingue au Bistouri): “Il y a quatre choses que je déteste. Un: qu’on boive dans mon verre. Deux: qu’on se mouche dans un restaurant. Trois: qu’on me pose un lapin.”

Google Translate: There are four things I hate. A: we drink in my glass. Two: we will fly in a restaurant. Three: I get asked a rabbit.

Bing Translate: There are four things that I hate. One: that one drink in my glass. Two: what we fly in a restaurant. Three: only asked me a rabbit.

Should be: There are four things I hate. One: that somebody drinks from my glass. Two: that somebody blows their nose in a restaurant. Three: that somebody stands me up.

These mistakes underline some general remaining difficulties. One: the treatment of pronouns. Two: the treatment of idioms that are not common in the bilingual training material. Three: the lack of common sense.

Note that last point.

And, er, where’s the fourth ‘hate’? It’s in neither the original nor the translations.
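For anyone who wants to rerun the experiment, here’s a minimal sketch using an open-source French-English model via the Hugging Face transformers library. The model is my choice for illustration, not one of the systems Liberman tested, and the commercial services quoted above change constantly:

```python
# Sketch of reproducing the test with an open-source MT model
# (Helsinki-NLP's Marian French-English model; an assumption, not the
# systems tested above). Requires: pip install transformers sentencepiece
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

text = ("Il y a quatre choses que je déteste. Un: qu'on boive dans mon verre. "
        "Deux: qu'on se mouche dans un restaurant. "
        "Trois: qu'on me pose un lapin.")

batch = tokenizer([text], return_tensors="pt")
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True)[0])
```

The idiom problem is easy to spot this way: “qu’on me pose un lapin” only translates correctly if “poser un lapin” (to stand somebody up) appeared often enough in the training data.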

Zuckerberg’s monster

Here’s an edited version of a chapter I’ve written in a newly-published book – Anti-Social Media: The Impact on Journalism and Society, edited by John Mair, Tor Clark, Neil Fowler, Raymond Snoddy and Richard Tait, Abramis, 2018.

Ponder this: in 2004 a Harvard sophomore named Zuckerberg sits in his dorm room hammering away at a computer keyboard. He’s taking an idea he ‘borrowed’ from two nice-but-dim Harvard undergraduates and writing the computer code needed to turn it into a social-networking site. He borrows $1,000 from his friend Eduardo Saverin and puts the site onto an internet web-hosting service. He calls it ‘The Facebook’.

Fourteen years later, that kid has metamorphosed into the 21st-century embodiment of John D Rockefeller and William Randolph Hearst rolled into one. In the early 20th century, Rockefeller controlled the flow of oil while Hearst controlled the flow of information. In the 21st century Zuckerberg controls the flow of the new oil (data) and the information (because people get much of their news from the platform that he controls). His empire spans more than 2.2bn people, and he exercises absolute control over it — as a passage in the company’s 10-K SEC filing makes clear. It reads, in part…

Read on

Conspiracist thinking and social media

This morning’s Observer column:

The prevalence of conspiracy theories online explains why they tend to crop up whenever we track the cognitive path of someone who, like the alleged Pittsburgh killer, commits or attempts to commit an atrocity. A case in point is Dylann Roof, a South Carolina teenager who one day came across the term “black on white crime” on Wikipedia, entered that phrase into Google and wound up at a deeply racist website inviting him to wake up to a “reality” that he had never considered, from which it was but a short step into a vortex of conspiracy theories portraying white people as victims. On 17 June 2015, Roof joined a group of African American churchgoers in Charleston, South Carolina, before opening fire on them, killing nine.

We find a similar sequence in the case of Cesar Sayoc, the man accused of sending mail bombs to prominent Democrats. Until 2016, his Facebook postings looked innocuous: decadent meals, gym workouts, scantily clad women and sports games – what the New York Times described as “the stereotypical trappings of middle-age masculinity”.

But then something changed. He opened a Twitter account posting links to fabricated rightwing stories and attacking Hillary Clinton. And his Facebook posts began to overflow with pro-Trump images, news stories about Muslims and Isis, ludicrous conspiracy theories and clips from Fox News…

Read on

How Facebook’s advertising machine enables ‘custom audiences’ that include anti-semites and white supremacists

This is beginning to get routine. I’ve said for some time that if you really want to understand Facebook, you have to go in as an advertiser (i.e. the real customer) rather than as a mere user. When you do that, you come face to face with the company’s amazingly helpful automated system for choosing the ‘custom audiences’ that you want to target, or should be targeting. A while back, Politico ran a memorable experiment along these lines. Now The Intercept has done the same:

Earlier this week, The Intercept was able to select “white genocide conspiracy theory” as a pre-defined “detailed targeting” criterion on the social network to promote two articles to an interest group that Facebook pegged at 168,000 users large and defined as “people who have expressed an interest or like pages related to White genocide conspiracy theory.” The paid promotion was approved by Facebook’s advertising wing. After we contacted the company for comment, Facebook promptly deleted the targeting category, apologized, and said it should have never existed in the first place.

Our reporting technique was the same as one used by the investigative news outlet ProPublica to report, just over one year ago, that in addition to soccer dads and Ariana Grande fans, “the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of ‘Jew hater,’ ‘How to burn jews,’ or, ‘History of “why jews ruin the world.”’”
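For context: the interest-targeting inventory the reporters browsed through the ads interface is the same one Facebook’s Marketing API exposes programmatically via its targeting search. A hypothetical sketch of such a query (the API version, query string and access token are placeholders, not values from the reporting above):

```python
# Hypothetical sketch of querying Facebook's ad interest-targeting
# categories via the Marketing API's targeting search. The API version,
# query string and ACCESS_TOKEN are placeholders for illustration.
import requests

ACCESS_TOKEN = "YOUR_MARKETING_API_TOKEN"   # placeholder

resp = requests.get(
    "https://graph.facebook.com/v3.2/search",
    params={
        "type": "adinterest",   # search the interest-targeting categories
        "q": "conspiracy",      # a query an advertiser might type
        "access_token": ACCESS_TOKEN,
    },
)
for interest in resp.json().get("data", []):
    # each hit carries an id, a name and an estimated audience size
    print(interest.get("name"), interest.get("audience_size"))
```

The point is that these categories are generated and served automatically; no human at Facebook has to approve a ‘white genocide conspiracy theory’ audience for it to be offered to a paying customer.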