From an interesting piece by Max Fisher:
We think of any danger as coming from misuse — scammers, hackers, state-sponsored misinformation — but we’re starting to understand the risks that come from these platforms working exactly as designed. Facebook, YouTube and others use algorithms to identify and promote content that will keep us engaged, which turns out to amplify some of our worst impulses.
Even after reporting with Amanda Taub on algorithm-driven violence in Germany and Sri Lanka, I didn’t quite appreciate this until I turned on Facebook push alerts this summer. Right away, virtually every gadget I owned started blowing up with multiple daily alerts urging me to check in on my ex, even if she hadn’t posted anything. I’d stayed away from her page for months specifically to avoid training Facebook to show me her posts. Yet somehow the algorithm had correctly identified this as the thing likeliest to make me click, then followed me across continents to ensure that I did.
It made me think of the old “Terminator” movies, except instead of a killer robot sent to find Sarah Connor, it’s a sophisticated set of programs ruthlessly pursuing our attention. And exploiting our most human frailties to do it.
John Thornhill had an interesting column in the Financial Times the other day (sadly, behind a paywall) about Moore’s Law and the struggles of the tech industry to overcome the physical barriers to its continuance.
This led me to brood on one of the under-discussed aspects of the Law, namely the way it has enabled the AI crowd to dodge really awkward questions for years. It works like this: If the standard-issue AI of a particular moment in time proves unable to perform a particular task or solve a particular problem, then the strategy is to say (confidently): “yes but Moore’s Law will eventually provide the computing power to crack it”.
And sometimes that’s true. The difficulty, though, is that it assumes that all problems are practical — i.e. ones that are ultimately computable. But some tasks/problems are almost certainly not computable. And so there are times when our psychic addiction to Moore’s Law leads us to pursue avenues which are, ultimately, dead ends. But few people dare admit that, especially when hype-storms are blowing furiously.
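And even for problems that *are* computable, exponential growth in computing power is easily outrun by exponential growth in problem size. A minimal sketch of that arithmetic (the numbers here are my own illustrative assumptions, not figures from the column): brute-forcing an n-bit key takes 2**n trials, so each extra bit of key length costs one full Moore's Law doubling just to stay even.

```python
import math

RATE = 10**9        # assumed: a machine tries a billion keys per second today
DOUBLING_YEARS = 2  # assumed: computing capacity doubles every two years

def years_to_bruteforce(bits: int, rate: float = RATE) -> float:
    """Years to try all 2**bits keys at today's fixed rate (no Moore's Law)."""
    seconds = 2**bits / rate
    return seconds / (365 * 24 * 3600)

def doublings_to_make_feasible(bits: int, budget_years: float = 1.0) -> int:
    """How many Moore's Law doublings until a 2**bits search fits the budget.
    Each doubling halves the running time, so we need log2(time / budget)."""
    t = years_to_bruteforce(bits)
    if t <= budget_years:
        return 0
    return math.ceil(math.log2(t / budget_years))

# A 40-bit search space is trivial today; a 128-bit one needs more than
# 70 doublings -- well over a century of uninterrupted Moore's Law.
print(years_to_bruteforce(40))
print(doublings_to_make_feasible(128) * DOUBLING_YEARS)
```

The point of the sketch: waiting on Moore's Law only helps when the problem grows slower than the hardware, and it is no help at all when the problem is not computable in the first place.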
It’s a simple formula (ALDC), really, as the New York Times explains:
Harvard gives advantages to recruited athletes (A’s); legacies (L’s), or the children of Harvard graduates; applicants on the dean’s or director’s interest list (D’s), which often includes the children of very wealthy donors and prominent people, mostly white; and the children (C’s) of faculty and staff. ALDCs make up only about 5 percent of applicants but 30 percent of admitted students.
While being an ALDC helps — their acceptance rate is about 45 percent, compared with 4.5 to 5 percent for the rest of the pool — it is no guarantee. (One of those rejected despite being a legacy was the judge in the federal case, Allison D. Burroughs. She went to Middlebury College instead.)
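Those figures hang together, incidentally. A quick sanity check (my own arithmetic, not the Times’s): if ALDCs are about 5 percent of applicants admitted at roughly 45 percent, and everyone else is admitted at 4.5 to 5 percent, the ALDC share of admitted students comes out at roughly a third, in line with the 30 percent quoted.

```python
# Back-of-envelope check on the Times's figures (rates are approximate).
aldc_share = 0.05    # ALDC fraction of all applicants
aldc_rate = 0.45     # ALDC acceptance rate
rest_rate = 0.0475   # midpoint of the 4.5-5% quoted for everyone else

aldc_admits = aldc_share * aldc_rate        # admits per applicant from ALDCs
rest_admits = (1 - aldc_share) * rest_rate  # admits per applicant from the rest

aldc_share_of_admits = aldc_admits / (aldc_admits + rest_admits)
print(round(aldc_share_of_admits, 2))  # ~0.33, close to the ~30% reported
```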
Harvard’s witnesses said it was important to preserve the legacy advantage because it encourages alumni to give their time, expertise and money to the university.
Which is how you get to have a hedge fund with a nice university attached.