Capitalism fails, and yet the Right continues to thrive. Why?

Very good column by Fintan O’Toole in today’s Irish Times.

For the last 30 years, the Finnish president has been a social democrat. This time, the social democratic candidate, a former prime minister, got just 6.7 per cent of the vote. The conservative candidate won in a landslide, with 62 per cent.

As ever, there were specific local factors at work. But the Finnish election was also entirely consistent with a much larger pattern: the eclipse of the traditional mainstream European left. Or, to put it the other way around, the extraordinary dominance of the conservative right in the midst of a profound crisis of neo-liberal capitalism.

Almost everywhere you look, social democrats are being punished for the failure of right-wing orthodoxies. Europe desperately needs a counter-balance to the one-track mind of monetarism. Ireland desperately needs a European Union that rediscovers the old left-of-centre values of solidarity, social justice and the common good. But social democracy is showing all the resilience of a paper hat in a hailstorm.

Since the collapse of Lehman Brothers in 2008, there have been 25 elections in EU countries. By my count, social democratic parties won five of them. But things have been much worse than even this miserable performance suggests. The social democratic victory in Portugal was short-lived. The Greek socialist government has been replaced by a technocratic coalition.

Any sane observer, he continues, “looking at these results without knowing the broader circumstances would conclude that neo-liberal capitalism was thriving and that right-of-centre European parties were proving themselves to be paragons of economic and political management.”

He’s right. So what happened to the Left?

O’Toole blames Tony Blair. Under the spell of his electoral dominance in Britain, the Left came to believe that right-wing economics would create the wealth and left-wing politics would redistribute it.

Deregulated financial markets would generate vast wealth for unproductive individuals – but that was okay because social democratic governments would cream some off the top to invest in health, education and the alleviation of the inevitable poverty. The flaws in the plan are now obvious, even to Mandelson: growing inequality would prove to be economically as well as socially corrosive; many of the filthy rich didn’t actually “pay their taxes”; and deregulated financial markets created giant Ponzi schemes that were certain to collapse.

Yep. But that still leaves unanswered the question of why, in the face of such a crisis, there are so few persuasive alternative ideas about how to move on.

How to handle 15 billion page views a month

Ye Gods! Just looked at the stats for Tumblr.

  • 500 million page views a day
  • 15B+ page views a month
  • ~20 engineers
  • Peak rate of ~40k requests per second
  • 1+ TB/day into the Hadoop cluster
  • Many TB/day into MySQL/HBase/Redis/Memcache
  • Growing at 30% a month
  • ~1,000 hardware nodes in production
  • Billions of page visits per month per engineer

Posts are about 50GB a day. Follower list updates are about 2.7TB a day.

Dashboard runs at a million writes a second, 50K reads a second, and it is growing.

And all this with about twenty engineers!
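Out of curiosity, I ran the arithmetic on those numbers. A quick Python sketch (the stats are as quoted; the 30-day month and the caveat about requests versus page views are mine):

```python
# Back-of-envelope checks on the Tumblr numbers above. The stats are
# as reported; the 30-day month and the arithmetic are mine.

page_views_per_day = 500_000_000
engineers = 20
peak_requests_per_sec = 40_000

views_per_month = page_views_per_day * 30
print(f"Monthly page views: {views_per_month / 1e9:.0f}B")   # ~15B, consistent

avg_views_per_sec = page_views_per_day / 86_400
print(f"Average page views/sec: {avg_views_per_sec:,.0f}")   # ~5,800

# Note: one page view triggers many requests, so the reported 40k
# requests/sec peak is not directly comparable to this average.
ratio = peak_requests_per_sec / avg_views_per_sec
print(f"Peak requests/sec vs average views/sec: {ratio:.1f}x")

per_engineer = views_per_month / engineers
print(f"Page views per engineer per month: {per_engineer / 1e6:.0f}M")  # ~750M
```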

Web design and page obesity

My Observer column last Sunday (headlined “Graphics Designers are Ruining the Web”) caused a modest but predictable stir in the design community. The .Net site published an admirably balanced round-up of comments from designers pointing out where, in their opinion, I had got things wrong. One (Daniel Howells) said that I clearly “had no exposure to the many wonderful sites that leverage super-nimble, lean code that employ almost zero images” and that I was “missing the link between minimalism and beautifully designed interfaces.” Designer and writer Daniel Gray thought that my argument was undermined “by taking a shotgun approach to the web and then highlighting a single favoured alternative, as if the ‘underdesigned’ approach of Peter Norvig is relevant to any of the other sites he discusses”.

There were several more comments in that vein, all reasonable and reasoned — a nice demonstration of online discussion at its best. Reflecting on them brought up several thoughts:

  • The Columnist’s Dilemma: writing a column about technology for a mainstream newspaper means that one is always trying to balance the temptation to go into technical detail against the risk of losing the non-technical reader. Sometimes I get the balance wrong. In this particular case I thought that getting into the pros and cons of, say, using JavaScript to enhance usability would obscure the main point I was trying to make: that there is an epidemic of obesity in web pages, and that it has some downsides.
  • Then there’s the question of what columnists are for. I remember something that Alan Rusbridger, the Editor of the Guardian, said when asked why he employed columnists like Simon Jenkins who annoyed the (mainly left-of-centre) readers of the paper. The essence of Rusbridger’s response, as I remember it, was that he needed to avoid creating an echo-chamber — a publication in which readers only received views with which they agreed. Grit in the oyster, if you like. So perhaps one of the responsibilities of a columnist is to be provocative.
  • One thing I wish I had mentioned is that it isn’t just designers who are responsible for data-intensive web pages: slot-in advertising is often the culprit, and there the responsibility for obesity lies with e-commerce. That links the column to an earlier one, which picked up Evgeny Morozov’s point about the way in which the Web has moved from being a cabinet of curiosities to an endless shopping mall.
  • The most common response to the column, though, was a casual shrug. So what if web pages are getting bigger and bigger? Network bandwidth will increase to meet the demand — and this may be a good thing: look at the way the demands of desktop publishing and, later, image and video editing pushed the development of personal computing technology. And of course there’s something in that argument: without the constant pressure to push the envelope, technology stagnates. The problem with that argument, however, is that for many Internet users bandwidth is not infinite. I don’t know what proportion of UK users in rural areas, for example, have a landline broadband connection that generally exceeds 2Mbps, but it sure as hell isn’t 100 per cent. And as more and more people access the Net via mobile connections, bandwidth constraints really matter, and will continue to do so for the foreseeable future. (There’s a back-of-envelope sum on this after the list.)
  • Thanks to Seb Schmoller for the .Net link.
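Here, for what it’s worth, is the back-of-envelope sum referred to above: what the 679kB average page weight quoted in the column (see below) implies for various connections. The link speeds are illustrative assumptions, not measurements.

```python
# How long does a 679kB page take just to transfer? Page weight is the
# 2011 average cited in the column; link speeds are my assumptions.

PAGE_KB = 679

links_mbps = {
    "rural ADSL": 2.0,
    "typical 3G mobile": 1.0,
    "fast urban broadband": 50.0,
}

for name, mbps in links_mbps.items():
    seconds = (PAGE_KB * 8) / (mbps * 1000)   # kilobits over kilobits/sec
    print(f"{name:>20}: {seconds:4.1f}s before any rendering or latency")

# rural ADSL comes out at ~2.7s and 3G at ~5.4s, per page, just to
# move the bytes.
```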

    Want to pay lower interest rates on bonds? Get connected.

    Sitting next to me at last week’s lecture by Simon Hampton of Google was David Cleevely who, in addition to being a successful entrepreneur, is also the Founding Director of the Cambridge Centre for Science and Policy. At one point, Simon put up a slide showing the percentage of GDP contributed by the Internet in a wide range of European countries. As the animation flashed by, David whispered to me: “wonder if there’s a correlation between those percentages and bond yields?” After the lecture, he cycled off purposefully into the gathering gloom. I suspected that he was Up To Something.

    He was. Yesterday he and his son Matthew (who’s currently doing a PhD at Imperial College Business School) published this chart on their blog.

    They plotted 10-year government bond rates against the Boston Consulting Group’s measure of “e-Intensity” for a group of European countries.

    Member states which have not had the capacity to adopt and develop the internet and ecommerce are those with skyrocketing risk premiums on their governments’ debt. These states form a distinct group: Portugal, Italy, Spain, Ireland and Greece (which is literally off the scale) – all with high debt premiums and an e-Intensity score of well under 100.

    There are two other groups: those with high e-Intensity and low government bond rates such as Germany, Denmark and the UK, and a middle group (Belgium, Austria and France) with lower e-Intensity (100-120) and raised bond rates of 3-4%.
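    Their exercise is easy to sketch in code. The numbers below are invented placeholders that merely mimic the pattern they describe (they are not the Cleevelys’ data):

```python
# A sketch of the Cleevelys' exercise: correlate BCG e-Intensity scores
# with 10-year government bond yields. The figures below are invented
# placeholders that mimic the pattern described, NOT their data.

from statistics import correlation  # stdlib, Python 3.10+

data = {
    # country: (e_intensity_score, bond_yield_percent), illustrative only
    "Greece":   (70, 30.0),
    "Portugal": (80, 13.0),
    "Ireland":  (90, 7.0),
    "Italy":    (95, 5.5),
    "Spain":    (95, 5.0),
    "France":   (110, 3.5),
    "Belgium":  (115, 3.8),
    "Germany":  (140, 1.9),
    "UK":       (150, 2.1),
    "Denmark":  (160, 1.8),
}

scores = [s for s, _ in data.values()]
yields = [y for _, y in data.values()]
print(f"Pearson r: {correlation(scores, yields):.2f}")  # strongly negative
```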

    David and Matthew see two possible (and possibly non-exclusive) explanations for this:

    1. Low e-Intensity indicates underlying structural problems: countries with high e-Intensity are those which have invested in modern processes, improved productivity and benefit from strong institutions. These are the countries that have lower borrowing costs, as they are best placed to grow their economies in the future.

    and/or

    2. e-Intensity (or what it represents) is a fundamental capability: countries which use the internet intensively can respond more flexibly to shocks and crises, instead of being weighed down by cumbersome 20th century processes and institutions.

    So…

    if you can get a country to invest in, use, and compete on the internet, then you must have either eliminated or minimised any underlying structural problems, or created a flexible and robust economy, or both.

    Policymakers, please note.

    From web pages to bloatware

    This morning’s Observer column.

    In the beginning, webpages were simple pages of text marked up with some tags that would enable a browser to display them correctly. But that meant that the browser, not the designer, controlled how a page would look to the user, and there’s nothing that infuriates designers more than having someone (or something) determine the appearance of their work. So they embarked on a long, vigorous and ultimately successful campaign to exert the same kind of detailed control over the appearance of webpages as they did on their print counterparts – right down to the last pixel.

    This had several consequences. Webpages began to look more attractive and, in some cases, became more user-friendly. They had pictures, video components, animations and colourful type in attractive fonts, and were easier on the eye than the staid, unimaginative pages of the early web. They began to resemble, in fact, pages in print magazines. And in order to make this possible, webpages ceased to be static text-objects fetched from a file store; instead, the server assembled each page on the fly, collecting its various graphic and other components from their various locations, and dispatching the whole caboodle in a stream to your browser, which then assembled them for your delectation.

    All of which was nice and dandy. But there was a downside: webpages began to put on weight. Over the last decade, the size of web pages (measured in kilobytes) has more than septupled. From 2003 to 2011, the average web page grew from 93.7kB to over 679kB.

    Quite a few good comments disagreeing with me. In the piece I mention how much I like Peter Norvig’s home page. Other favourite pages of mine include Aaron Sloman’s, Ross Anderson’s and Jon Crowcroft’s. In each case, what I like is the high signal-to-noise ratio.
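    For anyone who wants to weigh their own favourite pages, a crude audit takes a few lines of Python. This sketch counts only the HTML plus its <img> and <script> assets, so it understates true weight (no CSS-referenced images, fonts, or assets fetched by scripts); norvig.com is just the default example:

```python
# A crude page-weight audit: fetch a page, then add up the bytes in its
# <img> and <script> assets. A sketch, not a measurement tool.

import sys
from html.parser import HTMLParser
from urllib.parse import urljoin

import requests  # third-party: pip install requests

class AssetCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        src = dict(attrs).get("src")
        if tag in ("img", "script") and src:
            self.urls.append(src)

def page_weight(url):
    page = requests.get(url, timeout=10)
    total = len(page.content)
    collector = AssetCollector()
    collector.feed(page.text)
    for asset in collector.urls:
        try:
            total += len(requests.get(urljoin(url, asset), timeout=10).content)
        except requests.RequestException:
            pass  # skip assets that won't load
    return total

if __name__ == "__main__":
    url = sys.argv[1] if len(sys.argv) > 1 else "http://norvig.com"
    print(f"{url}: {page_weight(url) / 1024:.0f}kB")
```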

    How things change…

    From Quentin’s blog.

    There’s a piece in Business Insider based on an interesting fact first noted by MG Siegler. It’s this:

    Apple’s iPhone business is bigger than Microsoft

    Note, not Microsoft’s phone business. Not Windows. Not Office. But Microsoft’s entire business. Gosh.

    As the article puts it:

    The iPhone did not exist five years ago. And now it’s bigger than a company that, 15 years ago, was dragged into court and threatened with forcible break-up because it had amassed an unassailable and unthinkably profitable monopoly.

    Wow! It seems only yesterday that Microsoft was the Evil Empire.

    Regulating the Net

    This evening I went to an interesting lecture on “The challenges of regulating the Internet” given by Simon Hampton, who is Google’s Director of Public Policy for Northern Europe. He was, he insisted, speaking only “in a personal capacity” and the audience, for the most part, took him at his word. His thesis was that in the technology business the greatest challenge is “the transition from scarcity to abundance” and that policy-makers haven’t taken this transition on board.

    The metaphor he used to communicate abundance was the hoary story of the guy who invents the game of chess and who, when asked by the Emperor what he would like as a reward, requests one grain of rice on the first square, two on the second, four on the third, and so on. You know the story: according to Hampton, by the time you get to the 64th square the amount of rice needed is the size of Everest. At which point my neighbour, David Cleevely, nudged me and said that he had worked it out and it was enough rice to cover the entire surface of the earth to a depth of nine inches! But it turned out that, for Hampton, the metaphor also has a chronological dimension: we’re just about half-way through the chessboard.
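    The doubling arithmetic, at least, is easy to check. The grain count below is exact; the tonnage rests on an assumed 25mg grain, so treat it as order-of-magnitude:

```python
# Checking the chessboard arithmetic. The grain count is exact; the
# tonnage assumes a 25mg grain, so treat it as order-of-magnitude.

GRAIN_MG = 25                    # assumed mass of one rice grain

last_square = 2 ** 63            # grains on the 64th square alone
total = 2 ** 64 - 1              # grains on the whole board
first_half = 2 ** 32 - 1         # grains on the first 32 squares

tonnes = total * GRAIN_MG / 1e9  # 1 tonne = 1e9 mg
print(f"64th square: {last_square:.2e} grains")
print(f"Whole board: {total:.2e} grains, ~{tonnes:.0e} tonnes")
print(f"Second half dwarfs the first: {(total - first_half) / first_half:.1e}x")
```

    (Whether the result is “the size of Everest” depends entirely on the grain you assume, which is presumably why estimates at the lecture varied.)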

    As befits someone who spends a lot of time in Brussels arguing with the anti-trust division of the European Commission, he has a pretty jaundiced view of what goes on there. But he made a good point, namely that the EU finds it easiest to regulate in new areas, simply because regulating in existing areas means dealing with the reluctance of member states to change their existing laws. The result is that the Commission tends to move too quickly to legislation (in the form of Directives) in emerging areas. Examples: the e-Money Directive; the e-Signature Directive; and, now, the Privacy Directive. Plus, of course, the idiocy of applying to the Net concepts designed for regulating TV broadcasters.

    He then went on to list the four big companies that have “embraced abundance” — Apple, Amazon, Google and Facebook. They all operate at massive scale and have an interest in a bigger, freer Internet — though they have only recently woken up to the necessity of lobbying for this in the corridors of power (which is why the campaign against SOPA was so significant). In this context, Hampton claimed, size really matters — which is why moving into the second half of the chessboard is so important. “The larger the haystack, the easier it is to find the needle.” (Not sure I followed his logic here.)

    He was interesting on the subject of lobbying — which is, of course, a large part of his job. He defined it as trying to persuade legislators and policy-makers that their concept of the public interest should be realigned with the interests of industrial lobbies. People in declining industries tend to scream, and thereby gain public — and media and legislators’ — attention; in contrast, people in new, growing industries are too busy getting on with it — which probably explains why the big Internet companies were so slow to get their lobbying act together in Washington.

    Finally, Hampton moved to the question of how to rethink policy-making in the context of abundance. In his opinion, there’s no need to change the broad objectives of public policy. The key switch that is needed is for policy-makers to “embrace abundance”.

    What would that mean in practice? He suggested three general principles:

  • Don’t protect industries based on scarcity. This means accepting the two sides of the Schumpeterian wave of “creative destruction”. To date, our legislatures have been hopeless at this: they appear to be mainly preoccupied with enabling the interests of the past to constrain the future.
  • Work with crowdsourcing. This is an interesting idea. The example Hampton cited contrasted the way broadcasting regulators deal with unacceptable content with the way YouTube effectively crowdsources its detection by inviting users to flag clips that they think are offensive. Also the way eBay engenders trust with its reputation system; or the way TripAdvisor invites its users to “rate the raters”. (There’s a toy sketch of the flagging idea after this list.)
  • Reappraise the power of data. Mining the avalanche of data is, Hampton thinks, the beginning of wisdom in an age of abundance. (Yes, but it raises privacy nightmares.)
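    As promised, a toy sketch of the flagging idea. The threshold and names are invented for illustration; this is not a description of YouTube’s actual system:

```python
# A toy version of crowdsourced moderation: users flag items, and an
# item goes to human review once enough distinct users have flagged it.

from collections import defaultdict

REVIEW_THRESHOLD = 5  # distinct flags before a human looks (assumed)

class FlagQueue:
    def __init__(self):
        self.flags = defaultdict(set)   # item_id -> set of user_ids
        self.review_queue = []

    def flag(self, item_id, user_id):
        self.flags[item_id].add(user_id)  # repeat flags by one user don't count
        if len(self.flags[item_id]) == REVIEW_THRESHOLD:
            self.review_queue.append(item_id)

q = FlagQueue()
for n in range(7):
    q.flag("clip-42", f"user-{n}")
print(q.review_queue)  # ['clip-42'], queued when the 5th distinct flag landed
```

    The point, in regulatory terms, is that the filtering effort scales with the audience rather than with a fixed inspectorate.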
    In the Q&A the main topics were (a) the cluelessness of legislators in relation to the Internet, and (b) concerns about privacy (hardly surprising given this). I made the point that legislators have always been clueless, so there’s nothing new here. (I quoted Bertrand Russell’s argument about why an MP couldn’t be more stupid than his constituents — “because the more stupid he is, the more stupid they are to elect him”.) The point is that in a representative democracy, Members of Parliament should not be expected to be knowledgeable about everything. It’s Parliament, as an institution, that needs to be knowledgeable — which is why we need evidence-based policy-making in relation to Internet regulation, intellectual property, etc. And, to date, we’ve had very little of that.

    So the battle for an open Internet continues.

    Form vs function in journalism

    Just watched, despairingly, Newsnight on BBC2 grappling with the “death of the newspaper”. The peg for this feeble item was the arrival of the Digger at Luton airport.  He has, it seems, flown the Atlantic in order to reassure his serfs and placemen at the Sun (a newspaper) that he is not going to close it down.** The Newsnight item followed the usual recipe: a short film report followed by a studio ‘discussion’ with three guests: a former tabloid editor, a doughty female hack (Joan Smith) and a young gel in impossibly high heels who is the UK head of the Huffington Post, a parasitic online creation that feeds on proper journalism.

    The really annoying thing about the discussion was the way it failed to distinguish between form and function. The thing we need to preserve is not the newspaper (a form which was the product of a technological accident and a particular set of historical circumstances) but the function (provision of free, independent and responsible journalism). Once upon a time, publishing in print was the only way to ensure that the product of the function reached a public audience. But those days may be ending. The organisations traditionally known as “newspapers” need to transform themselves into journalistic outfits that produce a range of outputs, one of which — but only one of which — may be a printed paper.

    The other thing that those who run newspapers need to realise is that digital technology implies businesses that earn much lower margins than analogue businesses did. In the old days, newspapers were often licences to print money. (The Digger’s Sun still is.) But in every industry where digital technology has taken hold, margins have shrunk. Could you support a newsroom of 120 journalists — plus all the material and distribution expenses that go with producing a print product — on the revenues that a newspaper website currently earns? Answer: no. But could you support a newsroom with 80 journalists and a purely online offering with the same revenues? Answer: possibly — provided you were prepared to settle for a modest return (say 5%) on investment.
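    For what it’s worth, here is that sum as a back-of-envelope calculation. Only the headcounts (120 versus 80) come from the argument above; every pound figure is an invented assumption:

```python
# Back-of-envelope newsroom economics, following the thought experiment
# above. Only the headcounts come from the argument; every pound figure
# is an assumption invented to make the sum runnable.

COST_PER_JOURNALIST = 80_000   # salary plus overheads, assumed (GBP/year)
PRINT_COSTS = 20_000_000       # printing and distribution, assumed (GBP/year)
WEB_REVENUE = 12_000_000       # website revenue, assumed (GBP/year)

def surplus(journalists, print_product):
    costs = journalists * COST_PER_JOURNALIST
    if print_product:
        costs += PRINT_COSTS   # the print product drags in material costs
    return WEB_REVENUE - costs

print(f"120 journalists + print, web revenue only: {surplus(120, True):>12,} GBP")
print(f"80 journalists, online-only:               {surplus(80, False):>12,} GBP")
# The first is a deep loss; the second a surplus, though one that has to
# fund everything else in the business: "possibly", as argued above.
```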

    **Later: Murdoch announced that he would be setting up a Sunday Sun — a continuation of the Screws of the World by other means. This was hailed by the mainstream media as a bold, defiant and possibly inspired tactic. I’m not so sure: it’s just possible that the Murdoch brand is now so toxic that the new gamble won’t wash. An alternative reading is that the old guy is finally losing his marbles. If the US laws against corrupt payments are triggered by the most recent developments (the arrest of Sun journalists on suspicion of making such payments to public officials) then the supine directors and shareholders of News International may finally be moved to, er, move.