Wildcat Currency review

My Observer review of Edward Castronova’s book, Wildcat Currency: How the Virtual Money Revolution Is Transforming the Economy.

We think of money as being a factual, straightforward thing. But actually it’s very mysterious. I have a piece of paper before me as I write. Printed on it are some images, lots of hieroglyphics and the words “Twenty Pounds”. If I wave it in front of a shopkeeper, it produces magical effects: in return for it, he gives me a newspaper and other pieces of paper and some bits of metal. But actually my £20 note is just that: a note. A piece of paper. What gives it its magical properties is, Professor Castronova explains, “a social process that enshrines a good as a unique artefact called money; once enshrined, that artefact serves money’s three functions, well or poorly”.

What are these functions? A medium of exchange, a unit of account and a store of value. As it happens, my £20 note fulfils all three functions quite well. But so did cigarettes in prisoner-of-war camps and, in days gone by, the shell of Cypraea moneta, aka the cowrie. For most of recorded history, money took almost as many forms as there were societies, or at any rate rulers, and it’s only in relatively recent times that we have converged on a relatively small number of currencies together with a very small number of super-currencies, chief among them the mighty US dollar and its enfeebled fiscal cousins, the pound sterling and the euro.

Even as this process of monetary consolidation continued, however, strange new kinds of currencies were bubbling up…

Read on

Bletchley Park and the erosion of the freedoms it was set up to defend

This morning’s Observer column.

It’s terrific that Bletchley Park has not only been rescued from the decay into which the site had fallen, but brilliantly restored, thanks to funding from the National Lottery (£5m), Google (which donated £500,000) and the internet security firm McAfee. I’ve been to the Park many times, and for years going there was a melancholy experience, as one saw the depredations of time and weather inexorably outpacing the valiant efforts of the squads of volunteers who were trying to keep the place going.

Even at its lowest ebb, Bletchley had a magical aura. One felt something akin to what Abraham Lincoln tried to express when he visited Gettysburg: that something awe-inspiring had transpired here and that it should never be forgotten. The code-breaking that Bletchley Park achieved was an astonishing demonstration of the power of collective intelligence and determination in a quest to defeat the gravest threat that this country had ever faced.

When I was last there, the restoration was almost complete, and I was given a tour on non-disclosure terms, so I had seen what the duchess saw on Wednesday. The most striking bit is the restoration of Hut 6 exactly as it was, complete with all the accoutrements of the tweedy, pipe-smoking geniuses who worked in it, right down to the ancient typewriters, bound notebooks and the Yard-O-Led mechanical pencil that one of them possessed.

Hut 6 is significant because that was where Gordon Welchman worked…

Read on

The Internet of Things: it’s a really big deal. Oh yeah?

This morning’s Observer column. From the headline I’m not convinced that the sub-editors spotted the irony.

Like I said, everybody who is anybody in the tech business is very turned on by the IoT. It’s going to make lots of money – oh, and it’ll change the world, too. Of course there are some boring old creeps who keep raining on the parade. Spoilsports, I call them. There are, for example, the “security” experts who think that the IoT opens up horrendous vulnerabilities for our networked society. Hackers in Azerbaijan could get control of our “smart” electricity meters and shut down the whole of East Anglia with the click of a mouse. Pshaw! As if the folks in Azerbaijan even knew there was such a place as East Anglia. Or some guy in Anonymous could remotely jam the accelerator in your car so that you drive into your garage at 130mph even when you have your foot firmly on the brake. As if!

That’s why it’s *sooo* annoying when the media publicise scare stories about security lapses involving connected gadgets. I mean to say, how could TRENDnet have known that its “secure” security webcams weren’t really secure at all? It’s not its fault that a hacker broke into the SecurView camera software and told other people how to do it. The result, according to the US Federal Trade Commission, was that “hackers posted links to the live feeds of nearly 700 of the cameras. The feeds displayed babies asleep in their cribs, young children playing and adults going about their daily lives”.

This is *so* unfair. Poor old TRENDnet makes security *cameras*. Why should it know anything about internet security?

Read on

Can Google really keep our email private?

This morning’s Observer column.

So Google has decided to provide end-to-end encryption for any of its Gmail users who wants it. One could ask “what took you so long?” but that would be churlish. (Some of us were unkind enough to suspect that the reluctance might have been due to, er, commercial considerations: after all, if Gmail messages are properly encrypted, then Google’s computers can’t read the content in order to decide what ads to display alongside them.) But let us be charitable and thankful for small mercies. The code for the service is out for testing and won’t be made freely available until it’s passed the scrutiny of the geek community, but still it’s a significant moment, for which we have Edward Snowden to thank.

The technology that Google will use is public key encryption, and it’s been around for a long time and publicly available ever since 1991, when Phil Zimmermann created PGP (which stands for Pretty Good Privacy)…
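By way of illustration only (this is not Google’s End-to-End code, which is a Chrome extension), here is a minimal sketch of the public-key idea that PGP popularised, using Python’s widely available `cryptography` library: anyone can encrypt a message with the recipient’s published public key, but only the holder of the matching private key can decrypt it.

```python
# Minimal sketch of public-key encryption (the idea behind PGP),
# using the Python "cryptography" library -- an illustration of the
# concept, not Google's End-to-End implementation.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient generates a key pair and publishes the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone can encrypt a message to the recipient with the public key...
ciphertext = public_key.encrypt(b"Meet me at Bletchley at noon.", oaep)

# ...but only the holder of the private key can read it.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"Meet me at Bletchley at noon."
```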

Read on

LATER
Email from Cory Doctorow:

Wanted to say that I think it’s a misconception that Goog can’t do targeted ads alongside encrypted email. Google knows an awful lot about Gmail users: location, browsing history, clicking history, search history. It can also derive a lot of information about a given email from the metadata: sender, CC list, and subject line. All of that will give them tons of ways to target advertising to Gmail users – they’re just subtracting one signal from the overall system through which they make their ad-customization calculations.

So the cost of not being evil is even lower than I had supposed!

STILL LATER
This from Business Insider:

Inside the code for Google’s End-to-End email encryption extension for Chrome, there’s a message that should sound very familiar to the NSA: “SSL-added-and-removed-here-;-)”

Followers of this blog will recognise this as a quote from a slide leaked by Edward Snowden.

[Image: the leaked NSA ‘Google Cloud Exploitation’ slide]

This comes from a slide-deck about the ‘Muscular’ program (who thinks up these daft names?), which allowed Britain’s GCHQ intelligence service and the NSA to pull data directly from Google servers outside of the US. The cheeky tone of the slide apparently enraged some Google engineers, which I guess explains why a reference to it resides in the Gmail encryption code.

Cars as services, not possessions?

This morning’s Observer column.

We now know that the implications of the driverless cars’ safety record were not lost on Google either. Last week the company rolled out its latest variation on the autonomous vehicle theme. This is a two-seater, pod-like vehicle which scoots around on small wheels. It looks, in fact, like something out of the Enid Blyton Noddy stories. The promotional video shows a cheery group of baby-boomers summoning these mobile pods using smartphones. The pods whizz up obligingly and stop politely, waiting to be boarded. The folks get in, fasten their seatbelts and look around for steering wheel, gear shift, brake pedals etc.

And then we come to the punchline: none of these things exist on the pod! Instead there are two buttons, one marked “Start” and the other marked “Stop”. There is also a horizontal computer screen, which doubtless enables these brave new motorists to conduct Google searches while on the move. The implications are starkly clear: Google has decided that the safest thing to do is to eliminate the human driver altogether.

At this point it would be only, er, human to bristle at the temerity of these geeks. Who do they think they are?

Read on

Bitter XPerience

This morning’s Observer column.

It was a clear, windless night. All around was a wonderful panorama crowned by the glorious dome of St Paul’s in the distance. Then I started to look at the tall, glass-walled office blocks in my immediate vicinity. Although it was after 10pm, the lights were on in every building, enabling me to see into hundreds of offices. These offices varied in size and decor, but they all had one thing in common. Somewhere in every one of them was a desk on – or under – which stood a PC.

What then came to mind was the memory of a tousle-haired young entrepreneur named Bill Gates, who once articulated a vision of “a computer on every desk, each one running Microsoft software”. What I was looking at that December night was the realisation of that vision. Every one of the machines I could see was running Microsoft software: a software monoculture, if you like.

Microsoft’s dominance was a testimony to the power of network effects and of technological lock-in. It led to a world in which nobody ever got fired for buying Microsoft products and no software innovation gained traction unless it was designed to run under Windows.

For a time, Microsoft was the winner that took all. It would be churlish to pretend that this was all bad news, because the de facto standardisation that Microsoft brought to personal computer technology enabled the vast expansion of the PC industry and accelerated the adoption of computers in offices and homes.

But accompanying these substantial benefits there were some significant downsides…

Read on

Google privacy ruling: the thin end of a censorship wedge?

This morning’s Observer column.

Sooner or later, every argument about regulation of the internet comes down to the same question: is this the thin end of the wedge or not? We saw a dramatic illustration last week when the European court of justice handed down a judgment on a case involving a Spanish lawyer, one Mario Costeja González, who objected that entering his name in Google’s search engine brought up embarrassing information about his past (that one of his properties had been the subject of a repossession)…

Read on

LATER

Three interesting — and usefully diverse — angles on the ECJ decision.

  • Daithí Mac Síthigh points out that the decision highlights the tensions between EU and US law. “This is particularly significant”, he says, “given that most of the major global players in social networking and e-commerce operate out of the US but also do a huge amount of business in Europe.”

Google’s first line of defence was that its activities were not subject to the Data Protection Directive. It argued that its search engine was not a business carried out within the European Union. Google Spain was clearly subject to EU law, but Google argued that it sells advertising rather than running a search engine.

The court was asked to consider whether Google might be subject to the Directive under various circumstances. A possible link was the use of equipment in the EU, through gathering information from EU-based web servers or using relevant domain names (such as google.es). Another suggestion was that a case should be brought at its “centre of gravity”, taking into account where the people making the requests to delete data have their interests.

But the court never reached these points. Instead, it found the overseas-based search engine and the Spain-based seller of advertising were “inextricably linked”. As such, Google was found to be established in Spain and subject to the directive.

The message being sent was an important one. Although this ruling is specific to the field of data protection, it suggests that if you want to do business in the EU, a corporate structure that purports to shield your activities from EU law will not necessarily protect you from having to comply with local legislation. This may explain the panicked tone of some of the reaction to the decision.

  • In an extraordinary piece, “Right to Forget a Genocide”, Zeynep Tufekci muses about how (Belgian) colonial imposition of ID cards on Rwandan citizens was instrumental in facilitating genocide.

It may seem like an extreme jump, from drunken adolescent photos to genocide and ethnic cleansing, but the shape, and filters, of a society’s memory is always more than just about individual embarrassment or advancement. What we know about people, and how easily we can identify or classify them, is consequential far beyond jobs and dates, and in some contexts may make the difference between life and death.

“Practical obscurity”—the legal term for information that was available, but not easily—has died in most rich countries within just about a decade. Court records and criminal histories, which were only accessible to the highly-motivated, are now there at the click of a mouse. Further, what is “less obscure” has greatly expanded: using our online data, algorithms can identify information about a person, such as sexual orientation and political affiliation, even if that person never disclosed them.

In that context, take Rwanda, a country many think about in conjunction with the horrific genocide 20 years ago during which more than 800,000 people were killed—in just about one hundred days. Often, stories of ethnic cleansing and genocide get told in a context of “ancient hatreds,” but the truth of it is often much uglier, and much less ancient. It was the brutal colonizer of Rwanda, Belgium, that imposed strict ethnicity-based divisions in a place where identity tended to be more fluid and mixed. Worse, it imposed a national ID system that identified each person as belonging to Hutu, Tutsi or Twa, forever freezing them in that place. [For a detailed history of the construction of identity in Rwanda read this book, and for the conduct of colonial Belgium, Rwanda’s colonizer, read this one.]

A few years before the genocide, some NGOs had urged that Rwanda “forget” ethnicity, erasing it from ID cards.

They were not listened to.

During the genocide, it was those ID cards that were asked for at each checkpoint, and it was those ID cards that identified the Tutsis, most of whom were slaughtered on the spot. The ID cards closed off any avenue of “passing” a checkpoint. Ethnicity, a concept that did not at all fit neatly into the region’s complex identity configuration, became the deadly division that underlined one of the 20th century’s worst moments. The ID cards doomed and fueled the combustion of mass murder.

  • Finally, there’s a piece in Wired by Julia Powles arguing that “The immediate reaction to the decision has been, on the whole, negative. At best, it is reckoned to be hopelessly unworkable. At worst, critics pan it as censorship. While there is much to deplore, I would argue that there are some important things we can gain from this decision before casting it roughly aside.”

What this case should ideally provoke is an unflinching reflection on our contemporary digital reality of walled gardens, commercial truth engines, and silent stewards of censorship. The CJEU is painfully aware of the impact of search engines (and ‘The’ search engine, in particular). But we as a society should think about the hard sociopolitical problems that they pose. Search engines are catalogues, or maps, of human knowledge, sentiments, joys, sorrows, and venom. Silently, with economic drivers and unofficial sanction, they shape our lives and our interactions.

The fact of the matter here is that if there is anyone that is up to the challenge of respecting this ruling creatively, Google is. But if early indications are anything to go by, there’s a danger that we’ll unwittingly save Google from having to do so, either through rejecting the decision in practical or legal terms; through allowing Google to retreat “within the framework of their responsibilities, powers and capabilities” (which could have other unwanted effects and unchecked power, by contrast with transparent legal mechanisms); or through working the “right to be forgotten” out of law through the revised Data Protection Regulation, all under the appealing but ultimately misguided banner of preventing censorship.

There is, Powles argues, a possible technical fix for this — implementation of a ‘right to reply’ in search engine results.

An all-round better solution than “forgetting”, “erasure”, or “take-down”, with all of the attendant issues with free speech and the rights of other internet users, is a “right to reply” within the notion of “rectification”. This would be a tech-enabled solution: a capacity to associate metadata, perhaps in the form of another link, to any data that is inaccurate, out of date, or incomplete, so that the individual concerned can tell the “other side” of the story.

We have the technology to implement such solutions right now. In fact, we’ve done a mock-up envisaging how such an approach could be implemented.

Search results could be tagged to indicate that a reply has been lodged, much as we see with sponsored content on social media platforms. Something like this, for example:

[Mock-up image: a search result tagged to show that a reply has been lodged]
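To make Powles’s suggestion a little more concrete, here is a purely hypothetical sketch (my own field names and placeholder URLs, not anything Google or Powles has specified) of how a search result might carry “right to reply” metadata alongside the contested link:

```python
# Hypothetical sketch of a search result carrying "right to reply" metadata.
# Field names and URLs are illustrative only, not any real API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SearchResult:
    title: str
    url: str
    snippet: str
    reply_url: Optional[str] = None  # link to the data subject's reply, if one is lodged

    def render(self) -> str:
        text = f"{self.title}\n{self.url}\n{self.snippet}"
        if self.reply_url:
            # Tag the result, much as platforms tag sponsored content.
            text += f"\n[A reply has been lodged: {self.reply_url}]"
        return text

result = SearchResult(
    title="Repossession notice, 1998",
    url="https://example.org/archive/1998",       # placeholder
    snippet="Auction notice concerning a property...",
    reply_url="https://example.org/reply/12345",  # the 'other side' of the story
)
print(result.render())
```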

(Thanks to Charles Arthur for the Tufekci and Powles links.)

Our Kafkaesque world

This morning’s Observer column.

When searching for an adjective to describe our comprehensively surveilled networked world – the one bookended by the NSA at one end and by Google, Facebook, Yahoo and co at the other – “Orwellian” is the word that people generally reach for.

But “Kafkaesque” seems more appropriate. The term is conventionally defined as “having a nightmarishly complex, bizarre, or illogical quality”, but Frederick Karl, Franz Kafka’s most assiduous biographer, regarded that as missing the point. “What’s Kafkaesque,” he once told the New York Times, “is when you enter a surreal world in which all your control patterns, all your plans, the whole way in which you have configured your own behaviour, begins to fall to pieces, when you find yourself against a force that does not lend itself to the way you perceive the world.”

A vivid description of this was provided recently by Janet Vertesi, a sociologist at Princeton University. She gave a talk at a conference describing her experience of trying to keep her pregnancy secret from marketers…

Read on

Metcalfe’s Law Rules OK

This morning’s Observer column:

There are two paradoxical things about Twitter. The first is how so many people apparently can’t get their heads around what seems like a blindingly simple idea – free expression, 140 characters at a time. I long ago lost count of the number of people who would come up to me on social occasions saying that they just couldn’t see the point of Twitter. Why would anyone be interested in knowing what they had for breakfast? I would patiently explain that while some twitterers might indeed be broadcasting details of their eating habits, the significance of the medium was that it enabled one to tap into the “thought-stream” of interesting individuals. The key to it, in other words, lay in choosing whom to “follow”. In that way, Twitter functions as a human-mediated RSS feed, which is why, IMHO, it continues to be one of the most useful services available on the internet.

The second paradox about Twitter is how a service that has become ubiquitous – and enjoys nearly 100% name recognition, at least in industrialised countries – could become the stuff of analysts’ nightmares because they fear it lacks a business model that will one day produce the revenues to justify investors’ hopes for it.

They may be right about the business model – in which case Twitter becomes a perfect case study in the economics of information goods. The key to success in cyberspace is to harness the power of Metcalfe’s Law, which says that the value of a network is proportional to the square of the number of its users…
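As a rough aide-memoire of what “proportional to the square” means here: a network of n users contains n(n−1)/2 possible pairwise connections, so doubling the user base roughly quadruples the number of connections and, on Metcalfe’s assumption, the value.

```latex
V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2},
\qquad
\frac{V(2n)}{V(n)} \;\approx\; \frac{(2n)^{2}}{n^{2}} \;=\; 4 .
```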

Read on