How politics gets hollowed out

March 15th, 2015 [link]

From Brewster Kahle’s blog:

A recent paper from Princeton evaluated over 1,700 federal government policy decisions made in the last 30 years and found “economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.” Therefore, according to this research, the vast majority of the population has little or no say in how the federal government makes policy decisions. Similarly discouraging is the economic analysis over the last 30 years that found that the increase in American wealth went to only the wealthiest 1% of the population, with the other 99% essentially staying even. Therefore, there has not been equal opportunity for economic success in the United States for the vast majority of the population.

Waiting for punters

March 13th, 2015 [link]


Foster’s Liner

March 12th, 2015 [link]


Norman Foster’s Law Faculty, looming out of the dark like a huge ocean liner. Reminded me of the liner scene in Fellini’s Amarcord.

Getting to bedrock

March 8th, 2015 [link]

This morning’s Observer column:

The implication of these latest revelations is stark: the capabilities and ambitions of the intelligence services mean that no electronic communications device can now be regarded as trustworthy. It’s not only your mobile phone that might betray you: your hard disk could harbour a snake in the grass, too.

No wonder Andy Grove, the former boss of Intel, used to say that “only the paranoid survive” in the technology business. Given that we have become totally dependent on his industry’s products, that knowledge may not provide much consolation. But we now know where we stand. And we have Edward Snowden to thank for that.

Read on

An algorithmic approach to truth?

March 7th, 2015 [link]

Apropos our research project’s recent symposium on virality, and in particular the relative speeds of online dissemination of truths and untruths, this paper from Google researchers is interesting. At the moment, Google ranks search results using a proprietary algorithm (or, more likely, set of algorithms) which performs some kind of ‘peer review’ of web pages. The essence of it seems to be that pages that are linked to extensively are ranked more highly than pages with fewer inbound links. This has obvious drawbacks in some cases, particularly when conspiracist thinking is involved. A web page or site which proposes a sensationalist interpretation of a major newsworthy event, for example, may be extensively quoted across the Internet, even though it might be full of misinformation or falsehoods.
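The essential logic of link-based ranking can be caricatured in a few lines. (This is a toy illustration of the general PageRank idea, not Google’s actual proprietary algorithm.) The point to notice is that the score depends entirely on who links where, not on whether anything a page says is true:

```python
# Toy link-based ranking: a page's score is redistributed to the pages it
# links to, so heavily-linked-to pages rank highest -- regardless of accuracy.
# Not Google's real algorithm; just the basic PageRank intuition.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
        rank = new_rank
    return rank

# A sensationalist page that everyone quotes ("hub") ends up ranked highest,
# whether or not its content is accurate.
ranks = pagerank({"a": ["hub"], "b": ["hub"], "c": ["hub"], "hub": ["a"]})
```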

The Google researchers have been exploring a method of evaluating web pages on the basis of factual accuracy. “A source that has few false facts is considered to be trustworthy”, they write. “The facts are automatically extracted from each source by information extraction methods commonly used to construct knowledge bases.” They propose a way to compute a “trustworthiness score” – Knowledge-Based Trust (KBT) — using fairly abstruse probabilistic modelling.

The paper reports that they tested the model on a test database and concluded that it enabled them to compute “the true trustworthiness levels of the sources”. They then ran the model on a database of 2.8B facts extracted from the web, and thereby estimated the trustworthiness of 119M webpages. They claim that “manual evaluation of a subset of the results confirms the effectiveness of the method”.
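The intuition behind fact-based scoring can be sketched very crudely: rate each source by the fraction of its extracted facts that agree with a reference knowledge base. (The actual KBT model is far more sophisticated — it jointly estimates extraction errors and source errors probabilistically — so this is only a caricature of the idea, with made-up example facts:)

```python
# Naive caricature of fact-based trust: score a source by the fraction of
# its extracted (subject, predicate, object) facts that appear in a
# reference knowledge base. The real KBT model is probabilistic and also
# accounts for errors made by the extraction step itself.
def trust_score(source_facts, knowledge_base):
    if not source_facts:
        return None  # no facts extracted: no evidence either way
    correct = sum(1 for fact in source_facts if fact in knowledge_base)
    return correct / len(source_facts)

# Hypothetical example data, purely for illustration.
kb = {("Paris", "capital_of", "France"), ("Obama", "born_in", "USA")}
tabloid = [("Obama", "born_in", "Kenya"), ("Paris", "capital_of", "France")]
print(trust_score(tabloid, kb))  # 0.5
```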

If this finding turns out to be replicable, then it’s an interesting result. The idea that ‘truth’ might be computable will keep philosophers amused and occupied for ages. The idea of a ‘fact’ is itself a contested notion in many fields, because treating something as a fact involves believing a whole set of ‘touchstone theories’. (Believing the reading on a voltmeter, for example, means believing a set of theories which link the movement of the needle on the dial to the underlying electrical phenomenon that is being measured.) And of course the Google approach would not be applicable to many of the pages on the Web, because they don’t make factual assertions or claims. It might, however, be useful in studying online sources which discuss or advocate conspiracy theories.

Even so, it won’t be without its problems. In an interesting article in Friday’s Financial Times, Robert Shrimsley points out that the Google approach is essentially using “fidelity to proved facts as a proxy for trust[worthiness]”. This works fine with single facts, he thinks, but runs into trouble with more complex networks of factual information.

And what about propositions that were originally regarded as ‘facts’ but were later invalidated? “In 1976,” Shrimsley writes,

“the so-called Birmingham Six were officially guilty of bombings that killed 21 people. Fifteen years later their convictions were quashed and they were officially innocent. This took place in a pre-internet world but campaigns to overturn established truths take time and do not always start on sober, respected news sites. The trust score could make it harder for such campaigns to bubble up.”

And of course we’re still left with the question of what is established truth anyway.

Technology and the election

March 5th, 2015 [link]

My colleague David Runciman — who is Professor of Politics in Cambridge — had the great idea of doing a weekly podcast from now until the UK has a new government, with the aim of holding different kinds of discussions than are possible on mainstream media in the run-up to an election. This week he and I had a long conversation about whether Facebook could conceivably influence the outcome; why the current campaign seems so dated (it still seems entirely focussed on ‘old’ media); why surveillance doesn’t figure as an issue in the campaign; whether UKIP could be regarded as disruptive in the way that Uber is; and lots of other stuff.

Spring… honestly!

March 2nd, 2015 [link]


In the garden, this morning.

Straw and Rifkind had nothing to hide, but…

March 1st, 2015 [link]

This morning’s Observer column:

The really sinister thing about the nothing-to-hide argument is its underlying assumption that privacy is really about hiding bad things. As the computer-security guru Bruce Schneier once observed, the nothing-to-hide mantra stems from “a faulty premise that privacy is about hiding a wrong”. But surveillance can have a chilling effect by inhibiting perfectly lawful activities (lawful in democracies anyway) such as free speech, anonymous reading and having confidential conversations.

So the long-term message for citizens of democracies is: if you don’t want to be a potential object of attention by the authorities, then make sure you don’t do anything that might make them – or their algorithms – want to take a second look at you. Like encrypting your email, for example; or using Tor for anonymous browsing. Which essentially means that only people who don’t want to question or oppose those in power are the ones who should be entirely relaxed about surveillance.

We need to reboot the discourse about democracy and surveillance. And we should start by jettisoning the cant about nothing-to-hide. The truth is that we all have things to hide – perfectly legitimately. Just as our disgraced former foreign secretaries had.

Read on

ISC Chairman had “nothing to hide” but still got into trouble

February 25th, 2015 [link]

So Sir Malcolm Rifkind has fallen on his sword after a journalistic sting operation recorded him apparently touting for work from a fake Chinese company that was supposedly wanting him to join its advisory board. The other former Foreign Secretary, Jack Straw, was similarly embarrassed after he was surreptitiously recorded bragging about the access that his status as a former senior minister granted him. Both men protested vigorously that they had done nothing wrong, which may well be true, at least in the sense that they were adhering to the letter of the rules for public representatives.

What’s interesting about Rifkind’s fall is that he used to be an exponent of the standard mantra about bulk surveillance — “if you have nothing to hide then you have nothing to fear”. Both men claim that they had done nothing wrong, but at the same time it’s clear that they have been grievously embarrassed by public exposure of activities that they wanted to keep private. In that sense, they are in the same boat as most citizens. We all do harmless things that we nevertheless regard as private matters which are none of the government’s business. That’s what privacy is all about.

Thinking of Googling for health information? Think again.

February 24th, 2015 [link]

Interesting video by Tim Libert, summarising the results of some research he did on the way health information sites (including those run by government agencies) covertly pass information about health-related searches to a host of commercial companies. Libert is a researcher at the University of Pennsylvania. He built a program called webXray to analyze the top 50 search results for nearly 2,000 common diseases (over 80,000 pages total). He found that no fewer than 91% of the pages made third-party requests to outside companies. So if you search for “cold sores,” for instance, and click the WebMD “Cold Sores Topic Overview” link, the site is passing your request for information about the disease along to “one or more (and often many, many more) other corporations”.
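The basic idea — working out which third-party domains a page would contact — can be sketched in a few lines of standard-library Python. (Libert’s actual tool loads pages and records live network requests; this static parse of embedded resource URLs is only a rough approximation, and the domain names below are invented for illustration:)

```python
# Much-simplified sketch of the idea behind a tool like webXray: parse a
# page's HTML and collect the third-party domains referenced by embedded
# resources (scripts, images, etc.). The real tool observes actual network
# requests in a browser; this static approximation misses dynamic loading.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyFinder(HTMLParser):
    def __init__(self, first_party):
        super().__init__()
        self.first_party = first_party
        self.third_parties = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http"):
                domain = urlparse(value).netloc
                if domain and not domain.endswith(self.first_party):
                    self.third_parties.add(domain)

# Hypothetical page for a fictional health site.
page = """<html><body>
  <img src="https://tracker.example-ads.com/pixel.gif">
  <script src="https://cdn.health-site.org/app.js"></script>
  <a href="https://health-site.org/contact">Contact</a>
</body></html>"""

finder = ThirdPartyFinder("health-site.org")
finder.feed(page)
print(sorted(finder.third_parties))  # ['tracker.example-ads.com']
```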

According to Libert’s research (Communications of the ACM, Vol. 58 No. 3, Pages 68-77), about 70% of the time, the data transmitted “contained information exposing specific conditions, treatments, and diseases.”

So think twice before consulting Dr Google. Especially if you think you might have a condition that could affect your insurance rating.