Disrupting ‘disruption’

Yesterday’s Observer column.

The Innovator’s Dilemma and the Big Idea that it spawned – disruptive innovation – have been kind to its author. Professor Christensen is widely revered as a guru in the tech world. The idea of disruptive innovation appeals to the vanity of the start-up culture: it conjures up images of high-IQ geeks subverting the empires of men in suits, or at any rate in chinos. Christensen has extended his analysis to other, non-technological areas and industries. Education, for example, is apparently ripe for disruption. And of course companies such as Uber and Airbnb are supposedly bringing innovative disruption to the taxi and hotel industries respectively. Everybody and his dog wants to be in the disruption business.

And then, a few weeks ago, a Harvard historian had the temerity to ask if Emperor Christensen had any clothes. Writing in the New Yorker, Jill Lepore gave The Innovator’s Dilemma the kind of unsympathetic third degree to which historians regularly subject the books of their professional peers. Her conclusion was unflattering, to say the least…

Read on

Facebook, ethics and us, its hapless (and hypocritical?) users

This morning’s Observer column about the Facebook ’emotional contagion’ experiment.

The arguments about whether the experiment was unethical reveal the extent to which big data is changing our regulatory landscape. Many of the activities that large-scale data analytics now make possible are undoubtedly “legal” simply because our laws are so far behind the curve. Our data-protection regimes protect specific types of personal information, but data analytics enables corporations and governments to build up very revealing information “mosaics” about individuals by assembling large numbers of the digital traces that we all leave in cyberspace. And none of those traces has legal protection at the moment.

Besides, the idea that corporations might behave ethically is as absurd as the proposition that cats should respect the rights of small mammals. Cats do what cats do: kill other creatures. Corporations do what corporations do: maximise revenues and shareholder value and stay within the law. Facebook may be on the extreme end of corporate sociopathy, but really it’s just the exception that proves the rule.
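The “mosaic” effect the column describes – individually innocuous traces combining into something revealing – can be sketched in a few lines of code. This is a toy illustration only: the data sources, identifiers and field names here are all invented.

```python
# Toy illustration of the "mosaic" effect: separate stores of digital
# traces, each unremarkable on its own, joined on a shared identifier.
# All sources, identifiers and entries here are hypothetical.

search_log = {"user42": ["diabetes symptoms", "cheap flights to Warsaw"]}
location_pings = {"user42": ["clinic, Mill Road", "airport, Stansted"]}
purchases = {"user42": ["glucose monitor"]}

def build_mosaic(user_id, *sources):
    """Assemble one profile from many separate trace stores."""
    profile = {}
    for name, store in sources:
        profile[name] = store.get(user_id, [])
    return profile

profile = build_mosaic(
    "user42",
    ("searches", search_log),
    ("locations", location_pings),
    ("purchases", purchases),
)
# No single store says much about the user; the assembled mosaic
# strongly suggests a health condition and travel plans.
```

The point of the sketch is that the join, not any individual record, is where the revealing information lives – which is exactly what current data-protection regimes fail to regulate.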

danah boyd has a typically insightful blog post about this.

She points out that this stuff is riddled with undiscussed contradictions. Most if not all of the media business (off- and online) involves trying to influence people’s emotions, yet we rarely talk about this. When an online company does it, though, and explains why, there’s a row.

Facebook actively alters the content you see. Most people focus on the practice of marketing, but most of what Facebook’s algorithms do involve curating content to provide you with what they think you want to see. Facebook algorithmically determines which of your friends’ posts you see. They don’t do this for marketing reasons. They do this because they want you to want to come back to the site day after day. They want you to be happy. They don’t want you to be overwhelmed. Their everyday algorithms are meant to manipulate your emotions. What factors go into this? We don’t know.

But…

Facebook is not alone in algorithmically predicting what content you wish to see. Any recommendation system or curatorial system is prioritizing some content over others. But let’s compare what we glean from this study with standard practice. Most sites, from major news media to social media, have some algorithm that shows you the content that people click on the most. This is what drives media entities to produce listicles, flashy headlines, and car crash news stories. What do you think garners more traffic – a detailed analysis of what’s happening in Syria or 29 pictures of the cutest members of the animal kingdom? Part of what media learned long ago is that fear and salacious gossip sell papers. 4chan taught us that grotesque imagery and cute kittens work too. What this means online is that stories about child abductions, dangerous islands filled with snakes, and celebrity sex tape scandals are often the most clicked on, retweeted, favorited, etc. So an entire industry has emerged to produce crappy click bait content under the banner of “news.”

Guess what? When people are surrounded by fear-mongering news media, they get anxious. They fear the wrong things. Moral panics emerge. And yet, we as a society believe that it’s totally acceptable for news media – and its click bait brethren – to manipulate people’s emotions through the headlines they produce and the content they cover. And we generally accept that algorithmic curators are perfectly well within their right to prioritize that heavily clicked content over others, regardless of the psychological toll on individuals or the society. What makes their practice different? (Other than the fact that the media wouldn’t hold itself accountable for its own manipulative practices…)

Somehow, shrugging our shoulders and saying that we promoted content because it was popular is acceptable because those actors don’t voice that their intention is to manipulate your emotions so that you keep viewing their reporting and advertisements. And it’s also acceptable to manipulate people for advertising because that’s just business. But when researchers admit that they’re trying to learn if they can manipulate people’s emotions, they’re shunned. What this suggests is that the practice is acceptable, but admitting the intention and being transparent about the process is not.
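The engagement-driven curation boyd describes could be sketched as a toy ranking function. The stories and click counts below are invented for illustration; real curation systems weigh many more signals, but the shape of the logic is the same: whatever attracts clicks rises, regardless of substance.

```python
# Toy sketch of engagement-driven curation: a feed ordered purely by
# predicted clicks, so "click bait" outranks substantive reporting.
# Stories and click counts are invented for illustration.

stories = [
    {"headline": "Detailed analysis of what is happening in Syria", "clicks": 1200},
    {"headline": "29 cutest members of the animal kingdom", "clicks": 98000},
    {"headline": "Celebrity scandal shocks fans", "clicks": 45000},
]

def curate(feed):
    """Order a feed by raw engagement, as a naive curator might."""
    return sorted(feed, key=lambda story: story["clicks"], reverse=True)

for story in curate(stories):
    print(story["headline"])
# The listicle tops the feed; the Syria analysis comes last.
```

Nobody calls this “emotional manipulation”, but as boyd argues, that is precisely what such a ranking does – it simply never states the intention.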

Making algorithms responsible for what they do

This morning’s Observer column:

Just over a year ago, after Edward Snowden’s revelations first hit the headlines, I participated in a debate at the Frontline Club with Sir Malcolm Rifkind, the former foreign secretary who is now MP for Kensington and Chelsea and chairman of the intelligence and security committee. Rifkind is a Scottish lawyer straight out of central casting: urbane, witty, courteous and very smart. He’s good on his feet and a master of repartee. He’s the kind of guy you would be happy to have to dinner. His only drawback is that everything he knows about information technology could be written on the back of a postage stamp in 96-point Helvetica bold…

Read on

Wildcat Currency review

My Observer review of Edward Castronova’s book, Wildcat Currency: How the Virtual Money Revolution Is Transforming the Economy.

We think of money as being a factual, straightforward thing. But actually it’s very mysterious. I have a piece of paper before me as I write. Printed on it are some images, lots of hieroglyphics and the words “Twenty Pounds”. If I wave it in front of a shopkeeper, it produces magical effects: in return for it, he gives me a newspaper and other pieces of paper and some bits of metal. But actually my £20 note is just that: a note. A piece of paper. What gives it its magical properties is, Professor Castronova explains, “a social process that enshrines a good as a unique artefact called money; once enshrined, that artefact serves money’s three functions, well or poorly”.

What are these functions? A medium of exchange, a unit of account and a store of value. As it happens, my £20 note fulfils all three functions quite well. But so did cigarettes in prisoner-of-war camps and, in days gone by, the shell of Cypraea moneta, aka the cowrie. For most of recorded history, money took almost as many forms as there were societies, or at any rate rulers, and it’s only in relatively recent times that we have converged on a relatively small number of currencies together with a very small number of super-currencies, chief among them the mighty US dollar and its enfeebled fiscal cousins, the pound sterling and the euro.

Even as this process of monetary consolidation continued, however, strange new kinds of currencies were bubbling up…

Read on

Bletchley Park and the erosion of the freedoms it was set up to defend

This morning’s Observer column.

It’s terrific that Bletchley Park has not only been rescued from the decay into which the site had fallen, but brilliantly restored, thanks to funding from the National Lottery (£5m), Google (which donated £500,000) and the internet security firm McAfee. I’ve been to the Park many times and for years going there was a melancholy experience, as one saw the depredations of time and weather inexorably outpacing the valiant efforts of the squads of volunteers who were trying to keep the place going.

Even at its lowest ebb, Bletchley had a magical aura. One felt something akin to what Abraham Lincoln tried to express when he visited Gettysburg: that something awe-inspiring had transpired here and that it should never be forgotten. The code-breaking that Bletchley Park achieved was an astonishing demonstration of the power of collective intelligence and determination in a quest to defeat the gravest threat that this country had ever faced.

When I was last there, the restoration was almost complete, and I was given a tour on non-disclosure terms, so I had seen what the duchess saw on Wednesday. The most striking bit is the restoration of Hut 6 exactly as it was, complete with all the accoutrements of the tweedy, pipe-smoking geniuses who worked in it, right down to the ancient typewriters, bound notebooks and the Yard-O-Led mechanical pencil that one of them possessed.

Hut 6 is significant because that was where Gordon Welchman worked…

Read on

The Internet of Things: it’s a really big deal. Oh yeah?

This morning’s Observer column. From the headline I’m not convinced that the sub-editors spotted the irony.

Like I said, everybody who is anybody in the tech business is very turned on by the IoT. It’s going to make lots of money – oh, and it’ll change the world, too. Of course there are some boring old creeps who keep raining on the parade. Spoilsports, I call them. There are, for example, the “security” experts who think that the IoT opens up horrendous vulnerabilities for our networked society. Hackers in Azerbaijan could get control of our “smart” electricity meters and shut down the whole of East Anglia with the click of a mouse. Pshaw! As if the folks in Azerbaijan even knew there was such a place as East Anglia. Or some guy in Anonymous could remotely jam the accelerator in your car so that you drive into your garage at 130mph even when you have your foot firmly on the brake. As if!

That’s why it’s *sooo* annoying when the media publicise scare stories about security lapses involving connected gadgets. I mean to say, how could TRENDnet have known that its “secure” security webcams weren’t really secure at all? It’s not its fault that a hacker broke into the SecurView camera software and told other people how to do it. The result, according to the US Federal Trade Commission, was that “hackers posted links to the live feeds of nearly 700 of the cameras. The feeds displayed babies asleep in their cribs, young children playing and adults going about their daily lives”.

This is *so* unfair. Poor old TRENDnet makes security *cameras*. Why should it know anything about internet security?

Read on

Can Google really keep our email private?

This morning’s Observer column.

So Google has decided to provide end-to-end encryption for any of its Gmail users who wants it. One could ask “what took you so long?” but that would be churlish. (Some of us were unkind enough to suspect that the reluctance might have been due to, er, commercial considerations: after all, if Gmail messages are properly encrypted, then Google’s computers can’t read the content in order to decide what ads to display alongside them.) But let us be charitable and thankful for small mercies. The code for the service is out for testing and won’t be made freely available until it’s passed the scrutiny of the geek community, but still it’s a significant moment, for which we have Edward Snowden to thank.

The technology that Google will use is public key encryption, which has been around for a long time and publicly available ever since 1991, when Phil Zimmermann created PGP (which stands for Pretty Good Privacy)…

Read on
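For readers curious about the public-key idea the column mentions, here is a toy sketch of RSA, one of the classic public-key schemes. The primes below are tiny textbook values chosen purely to make the arithmetic visible; real systems (PGP included) use keys of 2048 bits and more, plus padding schemes, and nothing like this should ever be used in earnest.

```python
# Toy RSA with textbook-sized primes, purely to illustrate the
# public-key principle: anyone may encrypt with the public key (e, n),
# but decryption needs d, derived from the secret primes.
# Never use this for real cryptography.

p, q = 61, 53              # secret primes
n = p * q                  # modulus, part of both keys (3233)
phi = (p - 1) * (q - 1)    # totient, kept secret (3120)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: modular inverse of e

def encrypt(m, e, n):
    return pow(m, e, n)    # anyone holding the public key can do this

def decrypt(c, d, n):
    return pow(c, d, n)    # only the private-key holder can do this

message = 42
ciphertext = encrypt(message, e, n)
assert decrypt(ciphertext, d, n) == message
```

The asymmetry is the whole point: publishing (e, n) lets anyone send you secret messages, while d never leaves your hands – which is why, as the column notes, properly encrypted Gmail would be unreadable even to Google’s own computers.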

LATER Email from Cory Doctorow:

Wanted to say that I think it’s a misconception that Goog can’t do targeted ads alongside encrypted email. Google knows an awful lot about Gmail users: location, browsing history, clicking history, search history. It can also derive a lot of information about a given email from the metadata: sender, CC list and subject line. All of that will give them tons of ways to target advertising to Gmail users – they’re just subtracting one signal from the overall system through which they make their ad-customization calculations.

So the cost of not being evil is even lower than I had supposed!

STILL LATER
This from Business Insider:

Inside the code for Google’s End-to-End email encryption extension for Chrome, there’s a message that should sound very familiar to the NSA: “SSL-added-and-removed-here-;-)”

Followers of this blog will recognise this as a quote from a slide leaked by Edward Snowden.


This comes from a slide-deck about the ‘Muscular’ program (who thinks up these daft names?), which allowed Britain’s GCHQ intelligence service and the NSA to pull data directly from Google servers outside of the U.S. The cheeky tone of the slide apparently enraged some Google engineers, which I guess explains why a reference to it resides in the Gmail encryption code.

Cars as services, not possessions?

This morning’s Observer column.

We now know that the implications of the driverless cars’ safety record were not lost on Google either. Last week the company rolled out its latest variation on the autonomous vehicle theme. This is a two-seater, pod-like vehicle which scoots around on small wheels. It looks, in fact, like something out of the Enid Blyton Noddy stories. The promotional video shows a cheery group of baby-boomers summoning these mobile pods using smartphones. The pods whizz up obligingly and stop politely, waiting to be boarded. The folks get in, fasten their seatbelts and look around for steering wheel, gear shift, brake pedals etc.

And then we come to the punchline: none of these things exists on the pod! Instead there are two buttons, one marked “Start” and the other marked “Stop”. There is also a horizontal computer screen which doubtless enables these brave new motorists to conduct Google searches while on the move. The implications are starkly clear: Google has decided that the safest thing to do is to eliminate the human driver altogether.

At this point it would be only, er, human to bristle at the temerity of these geeks. Who do they think they are?

Read on

Bitter XPerience

This morning’s Observer column.

It was a clear, windless night. All around was a wonderful panorama crowned by the glorious dome of St Paul’s in the distance. Then I started to look at the tall, glass-walled office blocks in my immediate vicinity. Although it was after 10pm, the lights were on in every building, enabling me to see into hundreds of offices. These offices varied in size and decor, but they all had one thing in common. Somewhere in every one of them was a desk on – or under – which stood a PC.

What then came to mind was the memory of a tousle-haired young entrepreneur named Bill Gates, who once articulated a vision of “a computer on every desk, each one running Microsoft software”. What I was looking at that December night was the realisation of that vision. Every one of the machines I could see was running Microsoft software: a software monoculture, if you like.

Microsoft’s dominance was a testimony to the power of network effects and of technological lock-in. It led to a world in which nobody ever got fired for buying Microsoft products and no software innovation gained traction unless it was designed to run under Windows.

For a time, Microsoft was the winner that took all. It would be churlish to pretend that this was all bad news, because the de facto standardisation that Microsoft brought to personal computer technology enabled the vast expansion of the PC industry and accelerated the adoption of computers in offices and homes.

But accompanying these substantial benefits there were some significant downsides…

Read on