Julian Barnes wrote a wonderful LRB Diary piece about Brexit. I particularly liked this passage:
And what is the Brexiteers’ vision of our future, purified nation? It seems to be a mixture of Merrie England, Toytown and Singapore. Outward-looking in the sense of ‘open for business’, which tends to mean ‘up for sale’. Inward-looking in other senses. Morally depleted by cutting ourselves off from Europe and sheltering beneath Trump’s fragrant armpit. What might we end up as? Perhaps a kind of Bigger Belgium with quasi-American values – also, as Belgium might be, torn into separate nations again. Do we seriously think that those who voted for Brexit are going to be better off under this state-shrinking government? (I can’t recall the slogan ‘Poorer but Happier’ being used.) That the NHS will be properly funded? That the increasing numbers on zero-hours will not be exploited further? That the old winners will be the new, even bigger winners? Do we seriously believe that Mrs May will construct ‘a country that works for everyone’? To the pieties of our current political elite, I much prefer the old Portuguese proverb: ‘If shit were valuable, the poor would be born without arses.’
This morning’s Observer column:
What has come to be called “fake news” is a hard problem to solve, if indeed it is solvable at all. This is because it is created by the interaction of human psychology with several forces: the affordances of digital technology, the business models of giant internet companies and the populist revolt against globalisation. But that hasn’t stopped people trying to solve the problem.
To date, most well-intentioned people have gone down the “fact-checking” route, on the assumption that if only people knew the facts then that would stop them believing lies. This suggests a touching faith in human nature. People have been believing nonsensical things since the beginning of time and nothing we have seen recently indicates that they plan to change the habits of millenniums.
Think, for example, of the infamous lie put about by the Leave campaign in the referendum – that the £350m that the UK supposedly pays every week to the EU could be better spent on the NHS…
Kevin Kelly: “The AI Cargo Cult: The Myth of a Superhuman AI”
Kevin Kelly often annoys me (see his What Technology Wants), but this is rather good, not least because it argues that the biggest flaw in the ‘superintelligence’-as-existential-risk argument is its assumption that intelligence is a one-dimensional attribute. Nobody who has read Howard Gardner can agree with that proposition.
“Jill Lepore on the Challenge of Explaining Things”
Terrific interview with the Harvard historian and New Yorker writer. Contains this wonderful passage:
“I only ever wanted to be a writer. I love history, and I especially love teaching history, but I never intended to become an academic, and I’m baffled by the idea that reaching a wider audience involves using smaller words, as if there’s some inverse correlation between the size of your audience and of your vocabulary. You don’t talk about, say, technological determinism to a freshman the same way you talk about it to a colleague, right? Is it easier to talk to a freshman? No, it’s harder. Is it more important to give that student a clear explanation of the concept than it is to chat with your colleague about it? I think so, though I suppose that’s debatable. I love the challenge of explaining things to other people, in the same way that I love other people explaining things to me. I love being a student. Nothing is so thrilling as diving into scholarship I’ve never encountered before and trying to get my bearings, learning what so many scholars have been piecing together over a very long period of time, and trying to figure out how to bring that learning to bear on a problem that I, like a lot of people both inside and outside the academy, happen to be struggling with. The hitch is getting the scholarship right. I always worry I’ve missed something, or distorted something, or failed to understand the big picture. That’s the downside: missing something crucial. Nothing is more concerning, or more discouraging, than getting something wrong; there’s no real way to right it. It’s horrible; it kills me.”
Maciej Cegłowski: “Build a Better Monster: Morality, Machine Learning, and Mass Surveillance”
Terrific, thoughtful essay on the dystopia we have been building. And what we might do about it.
“When steam power will be perfected, when, together with telegraphy and railways, it will have made distances disappear, it will not only be commodities which travel, but also ideas which will have wings. When fiscal and commercial barriers have been abolished between different states, as they have already been between the provinces of the same state; when different countries, in daily relations, tend toward the unity of peoples, how will you be able to revive the old mode of separation?”
François-René de Chateaubriand, 1841
Well, well. Here we go again. From the Boston Globe:
WASHINGTON — A white cloth napkin, now displayed in the National Museum of American History, helped change the course of modern economics. On it, the economist Arthur Laffer in 1974 sketched a curve meant to illustrate his theory that cutting taxes would spur enough economic growth to generate new tax revenue.
More than 40 years after those scribblings, President Donald Trump is reviving the so-called Laffer curve as he is set to announce the broad outlines of a tax overhaul on Wednesday. What the first President George Bush once called “voodoo economics” is back, as Trump’s advisers argue that deep cuts in corporate taxes will ultimately pay for themselves with an explosion of new business and job creation.
The Laffer curve postulates that no tax revenue will be raised at the extreme tax rates of 0% and 100% and that there must be at least one rate which maximizes government taxation revenue. The Laffer curve is typically represented as a graph which starts at 0% tax with zero revenue, rises to a maximum rate of revenue at an intermediate rate of taxation, and then falls again to zero revenue at a 100% tax rate. The shape of the curve is uncertain and disputed.
One implication of the Laffer curve is that increasing tax rates beyond a certain point will be counter-productive for raising further tax revenue…
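That zero-at-both-ends shape can be made concrete with a few lines of Python. The quadratic form below is purely an assumption for illustration (Laffer himself never specified a functional form, and the real shape is, as noted above, disputed); it simply satisfies the two endpoint conditions and peaks somewhere in between:

```python
def revenue(rate, base=100.0):
    """Hypothetical tax revenue at a given tax rate (0.0 to 1.0).

    Assumes an illustrative quadratic curve, revenue = base * rate * (1 - rate),
    which is zero at 0% and 100% and maximal at an intermediate rate.
    """
    return base * rate * (1 - rate)

# Zero revenue at the extremes, a peak in between:
print(revenue(0.0))  # 0.0
print(revenue(0.5))  # 25.0 (the peak, for this assumed shape)
print(revenue(1.0))  # 0.0
```

On this toy curve, raising the rate past the peak (here 50%, though that number is an artefact of the assumed shape) reduces total revenue, which is the implication the supply-siders lean on.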
As the Globe observes:
what the president has called a tax reform plan is looking more like a tax cut plan, showering taxpayers with rate reductions without offsetting the full cost by closing loopholes or raising taxes elsewhere. In the short run, such a plan would add many billions of dollars to the national deficit. Trump contends that it will be worth it in the long run.
“The tax plan will pay for itself with economic growth,” Steven Mnuchin, the Treasury secretary and main architect of the plan, told reporters this week.
Questions: does any serious economist believe this? And isn’t it interesting that the proposed tax cuts will — coincidentally — benefit the Trump family and its subsidiaries?
Here’s a telling excerpt from a fine piece about Facebook by Farhad Manjoo:
The people who work on News Feed aren’t making decisions that turn on fuzzy human ideas like ethics, judgment, intuition or seniority. They are concerned only with quantifiable outcomes about people’s actions on the site. That data, at Facebook, is the only real truth. And it is a particular kind of truth: The News Feed team’s ultimate mission is to figure out what users want — what they find “meaningful,” to use Cox and Zuckerberg’s preferred term — and to give them more of that.
This ideal runs so deep that the people who make News Feed often have to put aside their own notions of what’s best. “One of the things we’ve all learned over the years is that our intuition can be wrong a fair amount of the time,” John Hegeman, the vice president of product management and a News Feed team member, told me. “There are things you don’t expect will happen. And we learn a lot from that process: Why didn’t that happen, and what might that mean?” But it is precisely this ideal that conflicts with attempts to wrangle the feed in the way press critics have called for. The whole purpose of editorial guidelines and ethics is often to suppress individual instincts in favor of some larger social goal. Facebook finds it very hard to suppress anything that its users’ actions say they want. In some cases, it has been easier for the company to seek out evidence that, in fact, users don’t want these things at all.
Facebook’s two-year-long battle against “clickbait” is a telling example. Early this decade, the internet’s headline writers discovered the power of stories that trick you into clicking on them, like those that teasingly withhold information from their headlines: “Dustin Hoffman Breaks Down Crying Explaining Something That Every Woman Sadly Already Experienced.” By the fall of 2013, clickbait had overrun News Feed. Upworthy, a progressive activism site co-founded by Eli Pariser, the author of “The Filter Bubble,” that relied heavily on teasing headlines, was attracting 90 million readers a month to its feel-good viral posts.
If a human editor ran News Feed, she would look at the clickbait scourge and make simple, intuitive fixes: Turn down the Upworthy knob. But Facebook approaches the feed as an engineering project rather than an editorial one. When it makes alterations in the code that powers News Feed, it’s often only because it has found some clear signal in its data that users are demanding the change. In this sense, clickbait was a riddle. In surveys, people kept telling Facebook that they hated teasing headlines. But if that was true, why were they clicking on them? Was there something Facebook’s algorithm was missing, some signal that would show that despite the clicks, clickbait was really sickening users?
If you want to understand why fake news will be a hard problem to crack, this is a good place to start.
Google’s Chrome browser is popular worldwide. And it turns out that many of its users don’t like ads — which is very naughty of them in an ad-based universe. But now there are rumours that Google plans to incorporate some kind of blocking of “unacceptable” ads in its browser. Which of course might be welcome to many users. But it would also make Google the arbiter of what is “unacceptable”.