So what kind of time will you get from the iWatch?

This morning’s Observer column:

A few months ago I bought a “smartwatch”. I did so because there was increasing media hype about these devices and I don’t write about kit that I haven’t owned and used in anger. The model I chose was a Pebble Steel, for several reasons: it was originally funded by a Kickstarter campaign; a geek friend already had one; and, well, it looked interesting. Now, several months on, I am back to wearing my old analogue watch. The Pebble experiment turned out to be instructive. The watch was well made and well presented. It had reasonable battery life and the software was easy to install on my iPhone. The Bluetooth link was reliable. Its timekeeping was accurate, and it could display the time in a variety of ways, some of them humorous. One could download a variety of virtual watch-faces, and so on.

So why is it not still on my wrist? Well, basically most of its “features” were of little or no actual use to me; and for much of the time, even apps that I would have found useful – such as having the watch vibrate when a text message arrived – turned out to be flaky: sometimes they worked; more often they didn’t. Which of course led to the thought that if anybody could make the smartwatch into a successful consumer product that “just works” it would be Apple. And indeed it was amusing to note how many people, upon seeing the Pebble on my wrist, would ask me: “Is that the new Apple Watch?”

Well, now the Apple Watch is here and we will find out if the world really was waiting for a proper smartwatch to arrive…

Read on

How politics gets hollowed out

From Brewster Kahle’s blog:

A recent paper from Princeton evaluated over 1,700 federal government policy decisions made in the last 30 years and found “economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.” Therefore, according to this research, the vast majority of the population has little or no say in how the federal government makes policy decisions. Similarly discouraging is the economic analysis over the last 30 years that found that the increase in American wealth went to only the wealthiest 1% of the population, with the other 99% essentially staying even. Therefore, there has not been equal opportunity for economic success in the United States for the vast majority of the population.

Getting to bedrock

This morning’s Observer column:

The implication of these latest revelations is stark: the capabilities and ambitions of the intelligence services mean that no electronic communications device can now be regarded as trustworthy. It’s not only your mobile phone that might betray you: your hard disk could harbour a snake in the grass, too.

No wonder Andy Grove, the former boss of Intel, used to say that “only the paranoid survive” in the technology business. Given that we have become totally dependent on his industry’s products, that knowledge may not provide much consolation. But we now know where we stand. And we have Edward Snowden to thank for that.

Read on

An algorithmic approach to truth?

Apropos our research project’s recent symposium on virality, and in particular the relative speeds of online dissemination of truths and untruths, this paper from Google researchers is interesting. At the moment, Google ranks search results using a proprietary algorithm (or, more likely, set of algorithms) which performs some kind of ‘peer review’ of web pages. The essence of it seems to be that pages that are linked to extensively are ranked more highly than pages with fewer inbound links. This has obvious drawbacks in some cases, particularly when conspiracist thinking is involved. A web page or site which proposes a sensationalist interpretation of a major newsworthy event, for example, may be extensively quoted across the Internet, even though it might be full of misinformation or falsehoods.
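The drawback described above can be illustrated with a toy sketch. (The page names and link graph below are invented for illustration; Google’s actual ranking algorithm is proprietary and vastly more sophisticated than simple inbound-link counting.)

```python
from collections import Counter

# Hypothetical link graph: each page lists the pages it links to.
links = {
    "sober-news.example": ["official-report.example"],
    "blog-a.example": ["conspiracy-site.example"],
    "blog-b.example": ["conspiracy-site.example"],
    "forum.example": ["conspiracy-site.example", "official-report.example"],
}

# Count inbound links: the crude 'peer review' signal.
inbound = Counter(target for targets in links.values() for target in targets)

# Rank pages by inbound links, most-linked first. Note that the
# widely-quoted sensationalist page outranks the accurate one.
ranking = sorted(inbound, key=inbound.get, reverse=True)
```

Here the much-quoted “conspiracy-site” comes out on top purely because more pages point to it, regardless of whether anything it says is true.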

The Google researchers have been exploring a method of evaluating web pages on the basis of factual accuracy. “A source that has few false facts is considered to be trustworthy”, they write. “The facts are automatically extracted from each source by information extraction methods commonly used to construct knowledge bases.” They propose a way to compute a “trustworthiness score” – Knowledge-Based Trust (KBT) – using fairly abstruse probabilistic modelling.
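The core intuition – sources with fewer false facts score higher – can be sketched very crudely. (The triples and knowledge base below are invented; the paper’s actual KBT model does joint probabilistic inference over extraction errors and source reliability, not the naive fraction computed here.)

```python
# Toy knowledge base of (subject, predicate, object) triples held to be true.
knowledge_base = {
    ("obama", "born_in", "hawaii"),
    ("earth", "shape", "oblate spheroid"),
}

def kbt_score(extracted_facts):
    """Fraction of a source's extracted facts that the knowledge base
    confirms -- a drastic simplification of Knowledge-Based Trust."""
    if not extracted_facts:
        return 0.0
    correct = sum(1 for fact in extracted_facts if fact in knowledge_base)
    return correct / len(extracted_facts)

reliable = [("obama", "born_in", "hawaii"), ("earth", "shape", "oblate spheroid")]
dubious = [("obama", "born_in", "kenya"), ("earth", "shape", "oblate spheroid")]
```

A page asserting only facts the knowledge base confirms scores 1.0; one mixing in a falsehood scores lower, and would be ranked down accordingly.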

The paper reports that they tested the model on a test database and concluded that it enabled them to compute “the true trustworthiness levels of the sources”. They then ran the model on a database of 2.8B facts extracted from the web, and thereby estimated the trustworthiness of 119M webpages. They claim that “manual evaluation of a subset of the results confirms the effectiveness of the method”.

If this finding turns out to be replicable, then it’s an interesting result. The idea that ‘truth’ might be computable will keep philosophers amused and occupied for ages. The idea of a ‘fact’ is itself a contested notion in many fields, because treating something as a fact involves believing a whole set of ‘touchstone theories’. (Believing the reading on a voltmeter, for example, means believing a set of theories which link the movement of the needle on the dial to the underlying electrical phenomenon that is being measured.) And of course the Google approach would not be applicable to many of the pages on the Web, because they don’t make factual assertions or claims. It might, however, be useful in studying online sources which discuss or advocate conspiracy theories.

Even so, it won’t be without its problems. In an interesting article in Friday’s Financial Times, Robert Shrimsley points out that the Google approach is essentially using “fidelity to proved facts as a proxy for trust[worthiness]”. This works fine with single facts, he thinks, but runs into trouble with more complex networks of factual information.

And what about propositions that were originally regarded as ‘facts’ but were later invalidated? “In 1976,” Shrimsley writes,

“the so-called Birmingham Six were officially guilty of bombings that killed 21 people. Fifteen years later their convictions were quashed and they were officially innocent. This took place in a pre-internet world but campaigns to overturn established truths take time and do not always start on sober, respected news sites. The trust score could make it harder for such campaigns to bubble up.”

And of course we’re still left with the question of what counts as established truth anyway.

Technology and the election

My colleague David Runciman — who is Professor of Politics in Cambridge — had the great idea of doing a weekly podcast from now until the UK has a new government, with the aim of holding different kinds of discussions from those possible in mainstream media in the run-up to an election. This week he and I had a long conversation about whether Facebook could conceivably influence the outcome; why the current campaign seems so dated (it still seems to be entirely focussed on ‘old’ media); why surveillance doesn’t figure as an issue in the campaign; whether UKIP could be regarded as disruptive in the way that Uber is; and lots of other stuff.

Straw and Rifkind had nothing to hide, but…

This morning’s Observer column:

The really sinister thing about the nothing-to-hide argument is its underlying assumption that privacy is really about hiding bad things. As the computer-security guru Bruce Schneier once observed, the nothing-to-hide mantra stems from “a faulty premise that privacy is about hiding a wrong”. But surveillance can have a chilling effect by inhibiting perfectly lawful activities (lawful in democracies anyway) such as free speech, anonymous reading and having confidential conversations.

So the long-term message for citizens of democracies is: if you don’t want to be a potential object of attention by the authorities, then make sure you don’t do anything that might make them – or their algorithms – want to take a second look at you. Like encrypting your email, for example; or using Tor for anonymous browsing. Which essentially means that only people who don’t want to question or oppose those in power are the ones who should be entirely relaxed about surveillance.

We need to reboot the discourse about democracy and surveillance. And we should start by jettisoning the cant about nothing-to-hide. The truth is that we all have things to hide – perfectly legitimately. Just as our disgraced former foreign secretaries had.

Read on

ISC Chairman had “nothing to hide” but still got into trouble

So Sir Malcolm Rifkind has fallen on his sword after a journalistic sting operation recorded him apparently touting for work from a fake Chinese company that supposedly wanted him to join its advisory board. The other former Foreign Secretary, Jack Straw, was similarly embarrassed after he was surreptitiously recorded bragging about the access that his status as a former senior minister granted him. Both men protested vigorously that they had done nothing wrong, which may well be true, at least in the sense that they were adhering to the letter of the rules for public representatives.

What’s interesting about Rifkind’s fall is that he used to be an exponent of the standard “if you have nothing to hide then you have nothing to fear” mantra about bulk surveillance. Both men claim that they had done nothing wrong, but at the same time it’s clear that they have been grievously embarrassed by public exposure of activities that they wanted to keep private. In that sense, they are in the same boat as most citizens. We all do harmless things that we nevertheless regard as private matters which are none of the government’s business. That’s what privacy is all about.