The intellectual benefits of bullet trains

From “The Role of Transportation Speed in Facilitating High Skilled Teamwork”

High-skilled workers gain from face-to-face interactions. If the skilled can move at higher speeds, then knowledge diffusion and idea spillovers are likely to reach greater distances. This paper uses the construction of China’s high-speed rail (HSR) network as a natural experiment to test this claim. HSR connects major cities, which feature the nation’s best universities, to secondary cities. Since bullet trains reduce cross-city commute times, they reduce the cost of face-to-face interactions between skilled workers who work in different cities. Using a database listing research paper publications and citations, we document a complementarity effect between knowledge production and the transportation network. Co-authors’ productivity rises, and more new co-author pairs emerge, when secondary cities are connected by bullet train to China’s major cities.
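The natural-experiment logic here is essentially a difference-in-differences comparison: the change in research output in secondary cities that gain an HSR connection, measured against the change in comparable cities that do not. Here is a toy sketch of that logic; the numbers and the two-group setup are invented for illustration and are not the paper’s data or specification.

```python
# A toy illustration (not the paper's actual data or method) of the
# natural-experiment logic in the abstract: compare research output in
# secondary cities before vs. after an HSR connection, against secondary
# cities not yet connected. All numbers below are made up.

import statistics

# publications per co-author pair, as (before, after) the HSR rollout
connected     = [(2.0, 3.1), (1.8, 2.9), (2.2, 3.4)]   # cities that got HSR
not_connected = [(2.1, 2.3), (1.9, 2.0), (2.0, 2.2)]   # cities that did not

def mean_change(cities):
    """Average before-to-after change in output across a group of cities."""
    return statistics.mean(after - before for before, after in cities)

# Difference-in-differences: the change in connected cities minus the
# change in control cities isolates the effect attributable to HSR,
# under the usual parallel-trends assumption.
did = mean_change(connected) - mean_change(not_connected)
print(f"estimated HSR effect: {did:.2f} extra publications per pair")
```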

The paper attracted this comment from someone using the handle “Pedantic Blithering Idiot”:

In the famous paraphrase of Max Planck, science advances funeral by funeral. To overturn old ideas, it is often necessary for new ideas to have an incubation period among a relatively isolated group of highly talented people. If all the universities of the world were to relocate to Amsterdam, the initial effect might be positive, but it seems probable that a kind of group-think consensus would form around old ideas and stagnate. (Is that finally happening in Silicon Valley?) The balance between concentration and dispersal of talent is complex, involving many factors on a case-by-case basis. Many have tried to recreate the Silicon Valley success in some form or another; none has quite succeeded as well. In cultures where there is more conformity, where the nail that stands out gets hammered down, the tendency toward group-think stagnation is likely to be greater, which suggests that a balance favoring dispersal (small clumps of isolated groups) might work better for scientific advancement. In the short run I’d expect an increase in technical expertise as China finishes playing catch-up in technology (if it hasn’t already) and distributes technical knowledge more thoroughly throughout its regions, but it wouldn’t surprise me terribly if the long-term effect of high-speed rail in China is negative for science production, and then for innovation and patents.

HT to Tyler Cowen

Why fake news will be hard to fix — it’s the users, stoopid

Here’s a telling excerpt from a fine piece about Facebook by Farhad Manjoo:

The people who work on News Feed aren’t making decisions that turn on fuzzy human ideas like ethics, judgment, intuition or seniority. They are concerned only with quantifiable outcomes about people’s actions on the site. That data, at Facebook, is the only real truth. And it is a particular kind of truth: The News Feed team’s ultimate mission is to figure out what users want — what they find “meaningful,” to use Cox and Zuckerberg’s preferred term — and to give them more of that.

This ideal runs so deep that the people who make News Feed often have to put aside their own notions of what’s best. “One of the things we’ve all learned over the years is that our intuition can be wrong a fair amount of the time,” John Hegeman, the vice president of product management and a News Feed team member, told me. “There are things you don’t expect will happen. And we learn a lot from that process: Why didn’t that happen, and what might that mean?” But it is precisely this ideal that conflicts with attempts to wrangle the feed in the way press critics have called for. The whole purpose of editorial guidelines and ethics is often to suppress individual instincts in favor of some larger social goal. Facebook finds it very hard to suppress anything that its users’ actions say they want. In some cases, it has been easier for the company to seek out evidence that, in fact, users don’t want these things at all.

Facebook’s two-year-long battle against “clickbait” is a telling example. Early this decade, the internet’s headline writers discovered the power of stories that trick you into clicking on them, like those that teasingly withhold information from their headlines: “Dustin Hoffman Breaks Down Crying Explaining Something That Every Woman Sadly Already Experienced.” By the fall of 2013, clickbait had overrun News Feed. Upworthy, a progressive activism site co-founded by Eli Pariser, the author of “The Filter Bubble,” that relied heavily on teasing headlines, was attracting 90 million readers a month to its feel-good viral posts.

If a human editor ran News Feed, she would look at the clickbait scourge and make simple, intuitive fixes: Turn down the Upworthy knob. But Facebook approaches the feed as an engineering project rather than an editorial one. When it makes alterations in the code that powers News Feed, it’s often only because it has found some clear signal in its data that users are demanding the change. In this sense, clickbait was a riddle. In surveys, people kept telling Facebook that they hated teasing headlines. But if that was true, why were they clicking on them? Was there something Facebook’s algorithm was missing, some signal that would show that despite the clicks, clickbait was really sickening users?

If you want to understand why fake news will be a hard problem to crack, this is a good place to start.
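The riddle in that excerpt has a recognizable engineering shape: find a quantifiable proxy for the dissatisfaction that surveys report but raw clicks conceal. Here is a minimal sketch of one such counter-signal, assuming a hypothetical post-click dwell-time measure; the field names, threshold, and scoring rule are illustrative assumptions, not Facebook’s actual News Feed logic.

```python
# A minimal, hypothetical sketch of the kind of counter-signal the excerpt
# describes: clicks alone say users "want" clickbait, so the ranking system
# needs another quantifiable proxy for regret. Names and thresholds here
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class PostStats:
    impressions: int            # times the post was shown in the feed
    clicks: int                 # times users clicked through
    seconds_after_click: float  # average time spent on the linked page

def clickbait_penalty(stats: PostStats) -> float:
    """Return a demotion score in [0, 1].

    A high click-through rate combined with a very short visit is treated
    as a proxy for "I clicked, but the headline tricked me": the signal
    the excerpt suggests surveys capture but raw clicks hide.
    """
    if stats.impressions == 0:
        return 0.0
    ctr = stats.clicks / stats.impressions
    # Posts people click eagerly but abandon almost immediately score high.
    bounce = 1.0 if stats.seconds_after_click < 10 else 0.0
    return ctr * bounce

teaser = PostStats(impressions=10_000, clicks=3_000, seconds_after_click=4.0)
essay  = PostStats(impressions=10_000, clicks=500,   seconds_after_click=95.0)
print(clickbait_penalty(teaser))  # 0.3 -> demoted
print(clickbait_penalty(essay))   # 0.0 -> untouched
```

A real ranking system would learn weights over many such signals rather than hard-code one threshold, but the toy version makes the structural point: within this engineering culture, the fix has to arrive as a measurable signal, not as an editor’s judgment.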

The technocratic delusion

“This may upset some of my students at MIT, but one of my concerns is that it’s been a predominantly male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.”

Joi Ito, Director of the MIT Media Lab