Internet Explorer RIP

This morning’s Observer column:

Let’s spool back a bit – to 1993. By then, the internet was roughly 10 years old, but for its first decade had been largely unknown to anyone other than geeks and computer science researchers. Two years earlier, Tim Berners-Lee had created and released the world wide web onto the internet, but initially no one noticed. Then in the spring of 1993, Marc Andreessen and Eric Bina released Mosaic – the first graphical browser – and suddenly the “real world” realised what the internet was for, and clamoured to get aboard.

But here’s the strange thing: Microsoft – by then the overwhelmingly dominant force in the computing world – failed to notice the internet. One of Bill Gates’s biographers, James Wallace, claimed that Microsoft didn’t even have an internet server until early in 1993, and that the only reason the company set one up was because Steve Ballmer, Gates’s second-in-command, had discovered on a sales trip that most of his big corporate customers were complaining that Windows didn’t have a “TCP/IP stack” – ie, a way of connecting to the internet. Ballmer had never heard of TCP/IP. “I don’t know what it is,” he shouted at subordinates on his return to Seattle. “I don’t want to know what it is. But my customers are screaming about it. Make the pain go away.”

But even when Microsoft engineers built a TCP/IP stack into Windows, the pain continued…

Read on

Osborne’s real plan

Nice blog post by Simon Wren-Lewis arguing that the obsession with deficit-reduction is actually just a smokescreen for shrinking the state. Sample:

However, perhaps we should have taken the chancellor at his word when he says that there has been no change to his long-term plan. The mistake was to misunderstand what this plan was. In reality it may have had nothing to do with the deficit, but instead was all about shrinking the size of the state over a ten-year period.

The problem the Conservatives faced in 2010 was that there was no public appetite for a smaller state. Surveys continued to show many more people wanted higher government spending and taxes than wanted the opposite (with a large percentage wanting neither). A focus on the government deficit presented them with an ideal opportunity to achieve a smaller state by the back door.

So…

The only problem with this strategy is that, as we saw in the coalition’s first two years, it would seriously damage the economy, just as Keynes would have predicted. The chancellor has never rejected Keynesian analysis, so perhaps he was well aware of this. So the plan may have always included a temporary pause to austerity before the election, giving the economy time to recover and the chancellor scope for what he hoped would be election-winning tax giveaways.

So the real long-term plan was an initial two years of sharp cuts to public spending and the deficit, to be followed by budgets involving tax cuts that would allow growth to resume but rather less deficit reduction. If this combination was enough to win the subsequent election, the recipe could be repeated all over again. Indeed, this is what George Osborne’s post-2015 plans look like. All done in the name of deficit reduction, when the real aim is to reduce the size of the state.

For this plan to work, you need one extra ingredient: a compliant media that buys into the idea that deficit reduction is all important, and that recent growth somehow vindicates the earlier austerity.

Well, we certainly have that ingredient — a compliant media.

Privacy: who needs it? Er, Zuckerberg & Co

Who said irony was dead? The tech zillionaires are strikingly blasé about their users’ privacy and about what is quaintly called “sharing”. But they are not at all blasé when it comes to sharing information about themselves. Google’s Exec Chairman, Eric Schmidt, for example, believes that “privacy is dead”, but went apeshit when an enterprising journalist dug up lots of personal information about him simply by using, er, Google.

And then there’s young Zuckerberg, the Facebook boss, who is likewise relaxed about other people’s privacy, but paranoid about his own. See, for example, this Forbes report on his need to buy up an entire neighbourhood block in Palo Alto to ensure that he isn’t overlooked:

So much for Zuckerberg only making a big digital footprint. Now the online empire maker owns nearly an entire neighborhood block, just because he can.

According to property records, the Facebook CEO has spent $30 million over the past year buying the pricy homes of four of his neighbors. It’s within his right, and within his budget, especially with Facebook stock finally starting to march up in value after its controversial and lackluster IPO.

Now the NYT is reporting that he’s updating a house in San Francisco, where even he might not be able to persuade his neighbours to clear out. But builders and tradesmen working on this nouveau palace find that they have to sign Non-Disclosure Agreements lest the world should know which kind of bidet the infant zillionaire favours.

So what kind of time will you get from the iWatch?

This morning’s Observer column:

A few months ago I bought a “smartwatch”. I did so because there was increasing media hype about these devices and I don’t write about kit that I haven’t owned and used in anger. The model I chose was a Pebble Steel, for several reasons: it was originally funded by a Kickstarter campaign; a geek friend already had one; and, well, it looked interesting. Now, several months on, I am back to wearing my old analogue watch. The Pebble experiment turned out to be instructive. The watch was well made and well presented. It had reasonable battery life and the software was easy to install on my iPhone. The Bluetooth link was reliable. Its timekeeping was accurate, and it could display the time in a variety of ways, some of them humorous. One could download a variety of virtual watch-faces, and so on.

So why is it not still on my wrist? Well, basically most of its “features” were of little or no actual use to me; and for much of the time, even apps that I would have found useful – such as having the watch vibrate when a text message arrived – turned out to be flaky: sometimes they worked; more often they didn’t. Which of course led to the thought that if anybody could make the smartwatch into a successful consumer product that “just works”, it would be Apple. And indeed it was amusing to note how many people, upon seeing the Pebble on my wrist, would ask me: “Is that the new Apple Watch?”

Well, now the Apple Watch is here and we will find out if the world really was waiting for a proper smartwatch to arrive…

Read on

How politics gets hollowed out

From Brewster Kahle’s blog:

A recent paper from Princeton evaluated over 1,700 federal government policy decisions made in the last 30 years and found “economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.” Therefore, according to this research, the vast majority of the population has little or no say in how the federal government makes policy decisions. Similarly discouraging is the economic analysis of the last 30 years, which found that the increase in American wealth went only to the wealthiest 1% of the population, with the other 99% essentially staying even. Therefore, there has not been equal opportunity for economic success in the United States for the vast majority of the population.

Getting to bedrock

This morning’s Observer column:

The implication of these latest revelations is stark: the capabilities and ambitions of the intelligence services mean that no electronic communications device can now be regarded as trustworthy. It’s not only your mobile phone that might betray you: your hard disk could harbour a snake in the grass, too.

No wonder Andy Grove, the former boss of Intel, used to say that “only the paranoid survive” in the technology business. Given that we have become totally dependent on his industry’s products, that knowledge may not provide much consolation. But we now know where we stand. And we have Edward Snowden to thank for that.

Read on

An algorithmic approach to truth?

Apropos our research project’s recent symposium on virality, and in particular the relative speeds of online dissemination of truths and untruths, this paper from Google researchers is interesting. At the moment, Google ranks search results using a proprietary algorithm (or, more likely, a set of algorithms) which performs some kind of ‘peer review’ of web pages. The essence of it seems to be that pages that are linked to extensively are ranked more highly than pages with fewer inbound links. This has obvious drawbacks in some cases, particularly when conspiracist thinking is involved. A web page or site which proposes a sensationalist interpretation of a major newsworthy event, for example, may be extensively quoted across the internet, even though it might be full of misinformation or falsehoods.
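To make the link-based ranking idea concrete, here is a minimal toy sketch in Python of PageRank-style scoring. This is my illustration of the general principle only; Google’s actual ranking is proprietary and uses many more signals:

```python
# Toy PageRank-style scoring: a page's score depends on how many pages
# link to it, weighted by the linkers' own scores. Illustrative only.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# "c" is linked to by both other pages, so it ends up ranked highest,
# regardless of whether anything it says is actually true.
print(pagerank({"a": ["c"], "b": ["c"], "c": ["a"]}))
```

Note that nothing in the calculation looks at a page’s content, which is exactly the weakness the Google researchers are trying to address.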

The Google researchers have been exploring a method of evaluating web pages on the basis of factual accuracy. “A source that has few false facts is considered to be trustworthy”, they write. “The facts are automatically extracted from each source by information extraction methods commonly used to construct knowledge bases.” They propose a way to compute a “trustworthiness score” – Knowledge-Based Trust (KBT) – using fairly abstruse probabilistic modelling.
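The paper’s probabilistic machinery is well beyond a blog post, but the underlying intuition can be sketched as a simple iterative loop. What follows is my own drastic simplification with invented example data, not the authors’ algorithm: treat the consensus among currently trusted sources as provisional truth, then re-score each source by how often it agrees with that consensus, and repeat:

```python
# Crude sketch of the KBT intuition: sources earn trust by agreeing with
# the estimated truth, and trusted sources get more say in what counts
# as true. A drastic simplification of the paper's probabilistic model.
from collections import defaultdict

def estimate_trust(claims, rounds=10):
    """claims: a list of (source, fact, value) triples extracted from pages."""
    trust = {source: 0.5 for source, _, _ in claims}
    for _ in range(rounds):
        # 1. Trust-weighted vote: the best-supported value of each fact
        #    becomes the current provisional "truth".
        votes = defaultdict(float)
        for source, fact, value in claims:
            votes[(fact, value)] += trust[source]
        truth, best = {}, {}
        for (fact, value), weight in votes.items():
            if weight > best.get(fact, 0.0):
                best[fact], truth[fact] = weight, value
        # 2. Re-score each source by the fraction of its claims that
        #    match the provisional truth.
        right, total = defaultdict(int), defaultdict(int)
        for source, fact, value in claims:
            total[source] += 1
            right[source] += (truth.get(fact) == value)
        trust = {s: right[s] / total[s] for s in total}
    return trust

# Invented example: site_c disagrees with the consensus and scores low.
claims = [
    ("site_a", "capital_of_france", "paris"),
    ("site_b", "capital_of_france", "paris"),
    ("site_c", "capital_of_france", "lyon"),
    ("site_a", "boiling_point_c", "100"),
    ("site_c", "boiling_point_c", "150"),
]
print(estimate_trust(claims))
```

Even this toy version shows where the circularity lies: ‘truth’ is whatever the trusted sources agree on, which is precisely the kind of worry Shrimsley raises below.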

The paper reports that they tested the model on a test database and concluded that it enabled them to compute “the true trustworthiness levels of the sources”. They then ran the model on a database of 2.8B facts extracted from the web, and thereby estimated the trustworthiness of 119M webpages. They claim that “manual evaluation of a subset of the results confirms the effectiveness of the method”.

If this finding turns out to be replicable, then it’s an interesting result. The idea that ‘truth’ might be computable will keep philosophers amused and occupied for ages. The idea of a ‘fact’ is itself a contested notion in many fields, because treating something as a fact involves believing a whole set of ‘touchstone theories’. (Believing the reading on a voltmeter, for example, means believing a set of theories which link the movement of the needle on the dial to the underlying electrical phenomenon that is being measured.) And of course the Google approach would not be applicable to many of the pages on the web, because they don’t make factual assertions or claims. It might, however, be useful in studying online sources which discuss or advocate conspiracy theories.

Even so, it won’t be without its problems. In an interesting article in Friday’s Financial Times, Robert Shrimsley points out that the Google approach is essentially using “fidelity to proved facts as a proxy for trust[worthiness]”. This works fine with single facts, he thinks, but runs into trouble with more complex networks of factual information.

And what about propositions that were originally regarded as ‘facts’ but were later invalidated? “In 1976,” Shrimsley writes,

“the so-called Birmingham Six were officially guilty of bombings that killed 21 people. Fifteen years later their convictions were quashed and they were officially innocent. This took place in a pre-internet world but campaigns to overturn established truths take time and do not always start on sober, respected news sites. The trust score could make it harder for such campaigns to bubble up.”

And of course we’re still left with the question of what counts as established truth anyway.