What’s significant about Wikipedia

This morning’s Observer column:

Since its inception, it’s been the butt of jokes, a focus for academic ire and a victim of epistemological snobbery. I remember one moment when the vice-chancellor of a top university made a dismissive remark about Wikipedia, only to have a world-leading chemist in the audience icily retort that the pages on his particular arcane speciality were the most up-to-date summary currently available anywhere – because he wrote them. And this has been my experience; in specialist areas, Wikipedia pages are often curated by experts and are usually the best places to gain an informed and up-to-date overview.

Because Wikipedia is so vast and varied (in both range and quality), the controversies it engenders have traditionally been about its content and rarely about its modus operandi and its governance. Which is a pity, because in some ways these are the most significant aspects of the project. The political events of the last two years should have alerted us to the fact that Wikipedia had to invent a way of tackling the problem that now confronts us at a global level: how to get at some approximation to the truth…

Read on

Deep-fat data frying

This morning’s Observer column:

The tech craze du jour is machine learning (ML). Billions of dollars of venture capital are being poured into it. All the big tech companies are deep into it. Every computer science student doing a PhD on it is assured of lucrative employment after graduation at his or her pick of technology companies. One of the most popular courses at Stanford is CS229: Machine Learning. Newspapers and magazines extol the wonders of the technology. ML is the magic sauce that enables Amazon to know what you might want to buy next, and Netflix to guess which films might interest you, given your recent viewing history.

To non-geeks, ML is impenetrable, and therefore intimidating…
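The recommendation systems the column mentions are, at their core, surprisingly simple in principle. What follows is a deliberately toy sketch of the "people who bought X also bought Y" idea — invented example data, and nothing like the scale or sophistication of Amazon's or Netflix's actual systems:

```python
# Toy illustration of the idea behind "people who bought X also bought Y"
# recommendations. A deliberately simplified sketch with made-up data,
# not how Amazon or Netflix actually work.
from math import sqrt

# Hypothetical purchase histories: user -> set of items bought.
purchases = {
    "alice": {"kettle", "teapot", "mug"},
    "bob":   {"kettle", "mug", "toaster"},
    "carol": {"novel", "bookmark", "mug"},
}

def similarity(a, b):
    """Cosine similarity between two users' purchase sets."""
    overlap = len(purchases[a] & purchases[b])
    return overlap / (sqrt(len(purchases[a])) * sqrt(len(purchases[b])))

def recommend(user):
    """Suggest items bought by the most similar other user."""
    others = [u for u in purchases if u != user]
    nearest = max(others, key=lambda u: similarity(user, u))
    return sorted(purchases[nearest] - purchases[user])

print(recommend("alice"))  # bob is most like alice, so: ['toaster']
```

Real systems learn these similarities from billions of transactions rather than three hand-written ones, but the underlying intuition — find people like you, suggest what they liked — is the same.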

Read on

So what was Google smoking when it bought Boston Dynamics?

This morning’s Observer column:

The question on everyone’s mind as Google hoovered up robotics companies was: what the hell was a search company doing getting involved in this business? Now we know: it didn’t have a clue.

Last week, Bloomberg revealed that Google was putting Boston Dynamics up for sale. The official reason for unloading it is that senior executives in Alphabet, Google’s holding company, had concluded (correctly) that Boston Dynamics was years away from producing a marketable product and so was deemed disposable. Two possible buyers have been named so far – Toyota and Amazon. Both make sense for the obvious reason that they are already heavy users of robots and it’s clear that Amazon in particular would dearly love to get rid of humans in its warehouses at the earliest possible opportunity…

Read on

Economics in an age of abundance

Brad de Long is one of my favourite bloggers — and economists. Here he is brooding on a problem that once preoccupied Keynes and is likely to surface again, if we do crack the problem of increasing productivity with robotics — and displacing employment. Sample:

There is no shortage of problems to worry about: the destructive power of our nuclear weapons, the pig-headed nature of our politics, the potentially enormous social disruptions that will be caused by climate change. But the number one priority for economists – indeed, for humankind – is finding ways to spur equitable economic growth.

But job number two – developing economic theories to guide societies in an age of abundance – is no less complicated. Some of the problems that are likely to emerge are already becoming obvious. Today, many people derive their self-esteem from their jobs. As labor becomes a less important part of the economy, and working-age men, in particular, become a smaller proportion of the workforce, problems related to social inclusion are bound to become both more chronic and more acute.

Such a trend could have consequences extending far beyond the personal or the emotional, creating a population that is, to borrow a phrase from the Nobel-laureate economists George Akerlof and Robert Shiller, easily phished for phools. In other words, they will be targeted by those who do not have their wellbeing as their primary goal – scammers like Bernie Madoff, corporate interests like McDonald's or tobacco companies, the guru of the month, or cash-strapped governments running exploitative lotteries.

Problems like these will require a very different type of economics from the one championed by Adam Smith. Instead of working to protect natural liberty where possible, and building institutions to approximate its effects elsewhere, the central challenge will be to help people protect themselves from manipulation.

Read more at

Technology and the future of work

Our Technology and Democracy research project had a terrific talk this afternoon by Mike Osborne of the Oxford Martin School about the research that he and Carl Frey published in “The future of employment: how susceptible are jobs to computerisation?”.

That paper is impressive in lots of ways. Unlike many academic research reports, for example, it’s written in pellucid prose. And it’s historically informed — which is unusual in technology publications: the authors know that the issue of the impact of machinery on jobs goes back a long, long way — at least to Elizabethan times with William Lee and his request for a patent on his stocking frame loom.

But most importantly, the Frey-Osborne study is the best analysis to date of what we in our project regard as one of the most significant puzzles of our time: namely what does the combination of infinite computational power, big data, machine learning and advanced robotics mean for our future? Or, to quote the title of Norbert Wiener’s book, what will constitute “the human use of human beings” in a digital future?

What preoccupies us is the question of whether we now stand on a hinge of history. Are there things about digital technologies which make our situation and prospects different from the disruptions that our ancestors faced when confronted with the seminal general-purpose technologies of the past? Can we say with any confidence that this time it’s different?

Mike’s presentation provoked lots of thoughts…

The first is the objection often made by historians and economists who argue that apocalyptic concerns about digital technology are just outbreaks of a-historical hysteria. Historically, they say, technological progress has always had two conflicting impacts on employment. One is the overtly destructive impact — the leading edge of the Schumpeterian wave, if you like. The other is the capitalisation effect, as companies start to enter industries where productivity is relatively high, leading to the expansion of employment in these new or revitalised industries. So, according to the sceptics, although automation definitely taketh away, it also giveth.

But if I’ve understood Mike and Carl’s work correctly, this time it might be different, for two reasons.

  • One is that whereas automation historically served to eliminate manual and/or highly routinised tasks, the new digital technologies mean that automation is remorselessly moving into work domains that have traditionally been seen as cognitive and non-routine.

  • The second is that what is happening now is what Brian Arthur called “combinatorial innovation”, which is basically the network effect applied to technological innovation. This means that the pace of innovation is increasing exponentially, which in turn means that our traditional capacity to transition into employment in new areas is going to be outrun by the pace of change. In which case, the life-chances of a lot of human beings could be undermined or destroyed.

Which leads to a final thought, namely that in the end this will have to come down to politics. Mike and Carl’s analysis is not a deterministic one — they don’t imply that the job-destruction that they think could happen will happen. Decisions about whether to deploy these technologies will, in the end, be made by people — the owners of capital — not by machines. And if there’s no element of societal control in all this, then the clear implication is that Piketty’s rule about the returns from capital generally outrunning the returns from employment will be turbocharged, with predictable consequences for inequality.

But of course, it doesn’t have to be like that. The economic and productivity gains that result from these technologies could be used for purposes other than giving even more to those who already have. And that brings to mind Keynes’s famous essay on “The Economic Possibilities for our Grandchildren” in which he saw the possibility that, through technology-driven productivity gains, man “could for the first time since his creation … be faced with his real, his permanent problem — how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well”.

Only politics can ensure that that agreeable prospect comes to pass, however. This isn’t just about technology, in other words.

And now here’s the really strange thing: in all the Sturm und Drang of our recent election campaign, the implications of computerisation for employment weren’t mentioned once. Not once.

So can software handle your emotions?

Interesting piece by Zeynep Tufekci in the NYT. She starts with the news that

A robot with emotion-detection software interviews visitors to the United States at the border. In field tests, this eerily named “embodied avatar kiosk” does much better than humans in catching those with invalid documentation. Emotional-processing software has gotten so good that ad companies are looking into “mood-targeted” advertising, and the government of Dubai wants to use it to scan all its closed-circuit TV feeds.

What this means is that

Machines are getting better than humans at figuring out who to hire, who’s in a mood to pay a little more for that sweater, and who needs a coupon to nudge them toward a sale. In applications around the world, software is being used to predict whether people are lying, how they feel and whom they’ll vote for.

To crack these cognitive and emotional puzzles, computers needed not only sophisticated, efficient algorithms, but also vast amounts of human-generated data, which can now be easily harvested from our digitized world. The results are dazzling. Most of what we think of as expertise, knowledge and intuition is being deconstructed and recreated as an algorithmic competency, fueled by big data.

But computers do not just replace humans in the workplace. They shift the balance of power even more in favor of employers. Our normal response to technological innovation that threatens jobs is to encourage workers to acquire more skills, or to trust that the nuances of the human mind or human attention will always be superior in crucial ways. But when machines of this capacity enter the equation, employers have even more leverage, and our standard response is not sufficient for the looming crisis.