Extracting the moral signal from the populist noise

Apropos that earlier post, I was struck by this essay by danah boyd, and particularly by this passage:

If we don’t account for how people feel, we’re not going to achieve a more just world — we’re going to stoke the fires of a new cultural war as society becomes increasingly polarized.

The disconnect between statistical data and perception is astounding. I can’t help but shake my head when I listen to folks talk about how life is better today than it ever has been in history. They point to increased lifespan, new types of medicine, decline in infant mortality, and decline in poverty around the world. And they shake their heads in dismay about how people don’t seem to get it, don’t seem to get that today is better than yesterday. But perception isn’t about statistics. It’s about a feeling of security, a confidence in one’s ecosystem, a belief that through personal effort and God’s will, each day will be better than the last. That’s not where the vast majority of people are at right now. To the contrary, they’re feeling massively insecure, as though their world is very precarious.

I am deeply concerned that the people whose values and ideals I share are achieving solidarity through righteous rhetoric that also produces condescending and antagonistic norms. I don’t fully understand my discomfort, but I’m scared that what I’m seeing around me is making things worse.

There’s no technological fix for the mess we’re in

Amanda Hess has a thoughtful piece in the NYT about proposed tech fixes for the ‘filter bubble’ problem that has supposedly fractured democratic discourse in the US and elsewhere. She lists numerous well-meaning attempts — from browser plug-ins to iPhone apps — to use technology to help Internet users escape their personal bubbles.

It goes without saying that the motives behind these initiatives are good. (Ms Hess calls them “kumbaya vibes”.) The question is whether they really address the problem, which is rooted in human psychology — confirmation bias, homophily, etc.

The same social media networks that helped build the bubbles are now being framed as the solution, with just a few surface tweaks. On the internet, the “echo chambers” of old media — the ’90s buzzword for partisan talk radio shows and political paperbacks — have been amplified and automated. We no longer need to channel-surf to Fox News or MSNBC; unseen algorithms on Facebook learn to satisfy our existing preferences, so it doesn’t feel like we’re choosing an ideological filter at all.

But now no entity is playing up the filter bubble crisis more than Facebook itself. The company’s leader, Mark Zuckerberg, has published a manifesto of sorts, “Building Global Community,” which positions Facebook to seize a central role in opening our minds by exposing us to new ideas.

Just last summer, the company was whistling a different tune. In a blog post called “Building a Better News Feed for You,” Facebook declared that the information it serves up is “subjective, personal, and unique — and defines the spirit of what we hope to achieve.” That all seemed harmless when the network was a site for reconnecting with old high school friends, but now Facebook is a major driver of news. (A Pew study from last year found that 62 percent of Americans get news on social media.) And as Mr. Trump rose, Facebook found itself assailed by critics blaming it for eroding the social fabric and contributing to the downfall of democracy. Facebook gave people what they wanted, they said, but not what they needed. So now it talks of building the “social infrastructure” for a “civically-engaged community.” Mr. Zuckerberg quoted Abraham Lincoln as inspiration for Facebook’s next phase.

Ms Hess also astutely points out that some of these ideas have partisan roots. The new tools for providing liberals with an insight into how other people think have a whiff of utilitarianism. The philosophy is that to win next time — and restore the old neoliberal order — we just need to know what the hoi polloi are thinking. Which is a neat way of avoiding what really needs to happen, namely for ruling elites to hear the signal in the populist noise and accept the need to revise the way they think about politics and the world. As Carleigh Morgan perceptively puts it, “exposure to new ideas and a commitment to listening are not the same”. Or, as Ms Hess puts it,

President Trump’s critics feel the practical need to break down these ideological cocoons, so they can win next time. Charlie Sykes, a former conservative radio talk show host who was blindsided by Mr. Trump’s win, now writes of the need to dismantle the “tribal bubble” of modern American politics, where citizens are informed through partisan media and bullied into submission by Twitter mobs. And Sam Altman, the president of the start-up incubator Y Combinator, recently set out from the liberal Silicon Valley and traveled across America to better understand the perspectives of Trump voters. His final question to them: “What would convince you not to vote for him again?”

It will be more difficult to entice Trump supporters to consider alternative perspectives, and not just because the president himself has declared the mainstream media the “opposition party.” As members of the winning team, Trump supporters have no urgent need to understand the other side.

Very good, thought-provoking piece. And repeat after me: there is no purely technological fix for the mess we’ve got ourselves into.

Understanding Snapchat

Now that the IPO has valued Snap at $30B, perhaps the adult world will grasp that there’s something interesting here. Joel Stein has had a pretty good go at explaining it for them. Here’s the gist from his piece in Time:

Snapchat makes visual communication so frictionless that, according to Nielsen, it is used by roughly half of 18-to-34-year-olds, which is about seven times better than any TV network. Those who use it daily open the app 18 times a day for a total of nearly 30 minutes. Last fall, Snapchat passed Instagram and Facebook as the most important social network in the semiannual Taking Stock With Teens poll by the investment bank Piper Jaffray. Tweens used to count the days until they turned 13 so they could open a Facebook account; now they often don’t bother. And just as Facebook matured years ago, Snapchat is starting to be used by adults. The company says the app is now used by 158 million people daily, though that growth has slowed a bit lately.

Snapchat’s ethos is largely about the seemingly contrary values of control and fun: the company prospectus is one of the few in Wall Street history to use the word poop, employing it to explain just how often people use their smartphones. Snapchat gives users such tight control of their disappearing messages that they feel safe taking an imperfect photo or video, and then layering information on top of it in the form of text, devil horns you can draw with your finger, a sticker that says “U Jelly?” or a filter that turns your face into a corncob that spits popcorn from your mouth when you talk. Snapchat is aware that most of our conversations are stupid.

But we want to keep our dumb conversations private. When Snapchat first launched, adults assumed it was merely a safe way for teens to send nude pictures, because adults are pervs. But what Spiegel understood is that teens wanted a safe way to express themselves.

Many teens are so worried about projecting perfection on Instagram that they create Finstagram (fake Instagram) profiles that only their friends know about. “Teens are very, very interested in safety, including something they call ‘emotional safety,’” says San Diego State psychology professor Jean Twenge, author of the forthcoming iGen: The 10 Trends Shaping Today’s Young People–and the Nation. “They know on Snapchat, ‘If I make a funny face or use one of the filters and make myself look like a dog, it’s going to disappear. It won’t be something permanent my enemies at school can troll me about.’”

The Economist also has a kindly explanation for baffled oldies.

I’m a student – don’t stress me out

Lovely passage in David French’s review of Tyler Cowen’s new book — The Complacent Class:

This weekend, my wife and oldest daughter visited her first-choice college, the University of Tennessee. There was one curious moment in an otherwise wonderful weekend. The tour guide noted that the university was there to help students get through the trauma of exams. It brought in masseuses to massage away the stress. It rolls out a sheet of paper, passes out crayons, and lets the students express their rage against algebra. Oh, and it vowed to bring in puppies, so students could cuddle something cute to take the edge off their anxiety.

Politeness, not willpower.

From an essay on disconnecting by Philip Reid:

Of course, it’s true that cellphones can be used responsibly. We can shut them off or simply ignore the incoming text. But this takes extraordinary willpower. According to a recent Pew survey, 82% of Americans believe that cellphone use in social situations more often hurts than helps conversation, yet 89% of cell owners still use their phones in those situations.

Not me, though. Which is why people who are trying to get in touch with me during the day sometimes find me infuriating. I very rarely take a call or reply to a text when I’m with people. It’s not so much a matter of high principle: I just think it’s incredibly rude to privilege a device over another human being.

Should robots be taxed?

This morning’s Observer column:

The problem with the future is that it’s unknowable. But of course that doesn’t stop us trying to second-guess it. At the moment, many people – and not just in the tech industry – are wondering about the impact of automation on employment. And not just blue-collar employment – the kind of jobs that were eliminated in the early phase of automating car production, for instance – but also the white-collar jobs that hitherto seemed secure…

Read on

At the end of the piece I mentioned (and applauded) Bill Gates’s suggestion that robots should be taxed — just as human workers are — to enable the social and human costs of automation to be mitigated. There’s a thoughtful Schumpeter column in this week’s Economist arguing that this might not be such a good idea.

“A robot is a capital investment”, writes the Schumpeter columnist,

like a blast furnace or a computer. Economists typically advise against taxing such things, which allow an economy to produce more. Taxation that deters investment is thought to make people poorer without raising much money. But Mr Gates seems to suggest that investment in robots is a little like investing in a coal-fired generator: it boosts economic output but also imposes a social cost, what economists call a negative externality. Perhaps rapid automation threatens to dislodge workers from old jobs faster than new sectors can absorb them. That could lead to socially costly long-term unemployment, and potentially to support for destructive government policy. A tax on robots that reduced those costs might well be worth implementing, just as a tax on harmful blast-furnace emissions can discourage pollution and leave society better off.

The biggest problem with the Gates proposal, he goes on, is not that automation is happening too fast but that it is not happening fast enough.

Mr Gates worries, understandably, about a looming era of automation in which machines take over driving or managing warehouses. Yet in an economy already awash with abundant, cheap labour, it may be that firms face too little pressure to invest in labour-saving technologies. Why refit a warehouse when people queue up to do the work at the minimum wage? Mr Gates’s proposal, by increasing the expense of robots relative to human labour, might further delay an already overdue productivity boom.

And even if automation speeds up, the share of income attributable to the machines might also decline quickly — or at any rate follow the historical trend.

A new working paper by Simcha Barkai, of the University of Chicago, concludes that, although the share of income flowing to workers has declined in recent decades, the share flowing to capital (ie, including robots) has shrunk faster. What has grown is the markup firms can charge over their production costs, ie, their profits. Similarly, an NBER working paper published in January argues that the decline in the labour share is linked to the rise of “superstar firms”. A growing number of markets are “winner takes most”, in which the dominant firm earns hefty profits.

Large and growing profits are an indicator of market power. That power might stem from network effects (the value, in a networked world, of being on the same platform as everyone else), the superior productive cultures of leading firms, government protection, or something else. Waves of automation might necessitate sharing the wealth of superstar firms: through distributed share-ownership when they are public, or by taxing their profits when they are not. Robots are a convenient villain, but Mr Gates might reconsider his target; when firms enjoy unassailable market positions, workers and machines alike lose out.

Thus by a roundabout route the Economist columnist reaches the right conclusion: the owners of robots have to be taxed so that the increases in productivity (and profits) that they enable are redistributed. Even then, though, it’s a rather weaselly concession: waves of automation might necessitate sharing the wealth of superstar firms. Might??? Gates’s proposal may have been motivated by a shrewd conviction that, in this neoliberal world, redistributive taxation of that kind is never going to happen. Taxing robots like workers is, in contrast, something that even the dumbest government can organise.
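A footnote on the arithmetic behind the Barkai result quoted above (with purely illustrative numbers, not the paper’s estimates): national income splits into a labour share, a capital share and a pure-profit share, and the three must sum to one, so if the first two both shrink the third has to grow.

\[
s_L + s_K + s_\Pi = 1 \quad\Longrightarrow\quad s_\Pi = 1 - s_L - s_K
\]

If, say, the labour share falls from 0.65 to 0.59 while the capital share falls from 0.30 to 0.22, the profit share rises from 0.05 to 0.19.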

LATER Yanis Varoufakis isn’t impressed by the Gates proposal.

Common sense on AI

Interesting responses from Stuart Russell in a World Economic Forum interview:

Are robots taking over the world?

SR: There are three timescales and three versions of this question, and the answers are “Not if I can help it”, “Quite possibly, but hopefully in a good way” and “We would be crazy to be complacent on this issue”. In the near term, autonomous weapons in the hands of unpleasant humans are a real threat, the UN is working (slowly) towards a treaty banning them, and our council has been active in building support for a treaty within the profession and in the media. In the medium term, will robots take away all of our jobs? Some experts say yes, and economists recommend more unemployment insurance as the solution. Better ideas wanted!

But the real world-changing questions are further off, when, after several intrinsically unpredictable breakthroughs, we have human-level or superhuman AI. See, for example, Elon Musk’s comment that superintelligent AI poses the greatest existential threat to the survival of the human race. His point was that regulatory oversight at a national and international level is needed to responsibly develop technology. In my view it’s too soon to start designing regulations – on equations?? – but not too soon to start solving the technical questions of how to maintain absolute control over increasingly intelligent machines.

Yep.

Kenneth Arrow, RIP

The great economist has passed away, at the age of 95. I liked this story from the NYT obituary:

Professor Arrow was widely hailed as a polymath, possessing prodigious knowledge of subjects far removed from economics. Eric Maskin, a Harvard economist and fellow Nobel winner, told of a good-natured conspiracy waged by junior faculty to get the better of Professor Arrow, even if artificially. They all agreed to study the breeding habits of gray whales — a suitably abstruse topic — and gathered at an appointed date at a place where Professor Arrow would be sure to visit.

When, as expected, he showed up, they were talking out loud about the theory by a marine biologist — last name, Turner — which purported to explain how gray whales found the same breeding spot year after year. As Professor Maskin recounted the story, “Ken was silent,” and his junior colleagues amused themselves that they had for once bested their formidable professor.

Well, not so fast.

Before leaving, Professor Arrow muttered, “But I thought that Turner’s theory was entirely discredited by Spencer, who showed that the hypothesized homing mechanism couldn’t possibly work.”