They’re Democrats, Hillary, but not as you know them…

This morning’s Observer column:

Good news for Hillary Clinton: there are very few Republican voters in Silicon Valley. Bad news: the Democrats there are not Democrats as you know them. They detest trade unions, for example, and they’re very keen on immigration – so long as the immigrants have PhDs from elite Indian or Chinese universities. They are in favour of government, so long as it’s “smart” government. And they believe that all change is good – especially in the long term.

We know this courtesy of a fascinating piece of opinion polling by Gregory Ferenstein, the guy who runs TechCrunch’s policy channel…

Read on

Remembering the first ‘Killer App’

This morning’s Observer column:

Tidying my office the other day, as one does at this time of year, I came upon a shabby, brown, dust-covered, A5 plastic ring binder. It was the kind of thing one throws into a skip without a moment’s hesitation. Except this wasn’t something to throw away, for embossed on the spine of the binder was “VisiCalc”. Inside was a 5.25in floppy disc and a glossy manual. And as I stood there looking at it I had one of those epiphanies that James Joyce was so keen on. I was suddenly transported back to late November 1979. I had bought an Apple II computer on a research grant – the more expensive 32k model, which had an external disk drive. An academic colleague who was on sabbatical at MIT had sent me a postcard saying that he had seen an Apple II running some weird software for business planning that was driving people wild. So I asked him to get me a copy and it arrived via FedEx.

VisiCalc was the world’s first spreadsheet program. It was written by Dan Bricklin and Bob Frankston and came from an insight Bricklin had one day while attending Harvard Business School….

Read on
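The core idea Bricklin hit on – cells holding values or formulas, with changes rippling automatically through everything that depends on them – can be sketched in a few lines. This is a toy illustration of the spreadsheet concept, not VisiCalc’s actual design:

```python
# A toy "spreadsheet": a dict of cells, where a cell is either a number
# or a formula (a function of the sheet). Changing one input value means
# dependent cells recalculate the next time they are read.

def evaluate(sheet, cell):
    """Resolve a cell to a number, following formula references recursively."""
    value = sheet[cell]
    if callable(value):  # a formula cell
        return value(sheet)
    return value         # a plain value cell

sheet = {
    "A1": 120,  # sales
    "A2": 80,   # costs
    "A3": lambda s: evaluate(s, "A1") - evaluate(s, "A2"),  # profit = A1 - A2
}

print(evaluate(sheet, "A3"))  # 40
sheet["A1"] = 150             # change one input...
print(evaluate(sheet, "A3"))  # ...and the dependent cell updates: 70
```

That automatic ripple-through of recalculation is what made business planners go wild: change one assumption and the whole model updates.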

The new sun in the tech universe

This morning’s Observer column:

The Christmas holidays are the time of year when different generations of the family gather around the dinner table. So it’s a perfect opportunity for a spot of tech anthropology. Here’s how to do it.

At some point, insert into the conversation a contemporary topic about which most people have strong opinions but know relatively little. Jeremy Clarkson, say. There will come a moment when someone decides that the only thing to be done to resolve the ensuing factual disputes is to “Google it”. Watch what happens next…

Read on

The (non)sharing economy

This morning’s Observer column:

A euphemism is a polite way of obscuring an uncomfortable or unpleasant truth. So pornography becomes adult entertainment, genocide becomes ethnic cleansing, sacking someone becomes letting him (or her) go. People pass away rather than die; toilets become rest rooms; CCTV cameras monitor public and private spaces for our comfort and safety; and shell shock evolved into battle fatigue before finally winding up as post-traumatic stress disorder – which is really a way of disguising the awkward fact that killing people in cold blood can do very bad things to your psyche.

The tech industry is also addicted to euphemism. Thus the ubiquitous, unfair, grotesquely unbalanced contract which gives an internet corporation all the rights and the user almost none is called an end-user licence agreement. Computers that have been illegally hacked are “pwned”. The wholesale hoovering-up of personal data by internet companies (and the state) is never called by its real name, which is surveillance. And so on.

But the word that is most subverted by the tech industry is share…

Read on

Hunting the Qubit

Today’s Observer column:

We live in a universe that virtually none of us will ever understand. Physicists, however, refuse to be daunted by this and have been casting about for ways of putting the quantum world’s strange properties to practical use. In the process, their gaze alighted on that fundamental building block of digital technology, the humble binary digit (or “bit”) in which all digital information is encoded. In the Newtonian (ie non-quantum) world, a bit can take only one of two values – one or zero. But at the quantum level, superposition means that a quantum bit – a qubit – could have multiple values (one, zero and a superposition of one and zero) at the same time. Which means that a computer based on quantum principles would be much, much faster than a conventional, silicon-based one. Various outfits have been trying to build one.

The results are controversial but intriguing…

Read on
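The superposition idea in the column can be made a little more concrete. In the standard formalism, a qubit is a pair of complex amplitudes over the basis states |0⟩ and |1⟩, and measurement probabilities are the squared magnitudes of those amplitudes. A minimal sketch, using only the standard library:

```python
import math

# A qubit as a pair of amplitudes (alpha, beta) for |0> and |1>,
# normalised so that |alpha|^2 + |beta|^2 = 1.

def probabilities(alpha, beta):
    """Return the probability of measuring 0 and of measuring 1."""
    return abs(alpha) ** 2, abs(beta) ** 2

# Classical-style bits put all the amplitude on one basis state.
zero = (1, 0)  # always measures as 0
one = (0, 1)   # always measures as 1

# An equal superposition: neither 0 nor 1 until measured.
h = 1 / math.sqrt(2)
superposed = (h, h)

print(probabilities(*zero))        # all probability on 0
print(probabilities(*superposed))  # a 50/50 split between 0 and 1
```

The speed-up claim in the column rests on this: n qubits span 2^n amplitudes at once, which certain algorithms can exploit in ways no classical bit-string can.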

Algorithmic power and programmers’ ethics

We know that power corrupts. But what about algorithmic power? This morning’s Observer column:

There is a direction of travel here – one that is taking us towards what an American legal scholar, Frank Pasquale, has christened the “black box society”. You might think that the subtitle – “the secret algorithms that control money and information” – says it all, except that it’s not just about money and information but increasingly about most aspects of contemporary life, at least in industrialised countries. For example, we know that Facebook algorithms can influence the moods and the voting behaviour of the service’s users. And we also know that Google’s search algorithms can effectively render people invisible. In some US cities, algorithms determine whether you are likely to be stopped and searched in the street. For the most part, it’s an algorithm that decides whether a bank will seriously consider your application for a mortgage or a loan. And the chances are that it’s a machine-learning or network-analysis algorithm that flags internet or smartphone users as being worthy of further examination. Uber drivers may think that they are working for themselves, but in reality they are managed by an algorithm. And so on.

Without us noticing it, therefore, a new kind of power – algorithmic power – has arrived in our societies. And for most citizens, these algorithms are black boxes – their inner logic is opaque to us. But they have values and priorities embedded in them, and those values are likewise opaque to us: we cannot interrogate them.

This poses two questions. First of all, who has legal responsibility for the decisions made by algorithms? The company that runs the services that are enabled by them? Maybe – depending on how smart their lawyers are.

But what about the programmers who wrote the code? Don’t they also have some responsibilities? Pasquale reports that some micro-targeting algorithms (the programs that decide what is shown on your browser screen, such as advertising) sort web users into categories that include “probably bipolar”, “daughter killed in car crash”, “rape victim”, and “gullible elderly”. A programmer wrote that code. Did he (for it was almost certainly a male) not have some ethical qualms about his handiwork?

Read on
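The point about values being embedded in code can be shown with a deliberately crude toy. Everything below – the weights, the threshold, the very choice of inputs – is a hypothetical value judgement made by whoever wrote it, and entirely opaque to the person being scored:

```python
# A hypothetical loan-screening scorer. The numbers here are not from any
# real lender; they exist to show how priorities get baked into code.

def loan_score(income, postcode_risk, years_at_address):
    # Each weight is a value judgement fixed by the programmer:
    # income helps, a "risky" postcode hurts, stability helps.
    return 0.5 * income / 1000 - 30 * postcode_risk + 2 * years_at_address

def decision(applicant):
    THRESHOLD = 25  # chosen by the lender; the applicant never sees it
    return "consider" if loan_score(**applicant) >= THRESHOLD else "reject"

applicant = {"income": 40000, "postcode_risk": 0.2, "years_at_address": 6}
print(decision(applicant))  # 'consider': a score of 26 clears the threshold of 25
```

Nothing in the applicant’s experience reveals why the penalty for a postcode is 30 points rather than 3, which is precisely the black-box problem.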

‘Smart’ homes? Not yet

My Observer comment piece on what the Internet of Things looks like when it’s at home:

There is a technological juggernaut heading our way. It’s called the Internet of Things (IoT). For the tech industry, it’s the Next Big Thing, alongside big data, though in fact that pair are often just two sides of the same coin. The basic idea is that since computing devices are getting smaller and cheaper, and wireless network technology is becoming ubiquitous, it will soon be feasible to have trillions of tiny, networked computers embedded in everything. They can sense changes, turn things on and off, and make decisions about whether to open a door or close a valve or order fresh supplies of milk – you name it – with the computers communicating with one another and shipping data to server farms all over the place.

As ever with digital technology, there’s an underlying rationality to lots of this. The IoT could make our lives easier and our societies more efficient. If parking bays could signal to nearby cars that they are empty, then the nightmarish task of finding a parking place in crowded cities would be eased. If every river in the UK could tweet its level every few minutes, then we could have advance warning of downstream floods in time to alert those living in their paths. And so on.

But that kind of networking infrastructure takes time to build, so the IoT boys (and they are mostly boys, still) have set their sights closer to home, which is why we are beginning to hear a lot about “smart” homes. On further examination, this turns out mostly to mean houses stuffed with networked kit…

Read on
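The tweeting-river example boils down to a very simple sense-and-alert loop. A minimal sketch, with hypothetical river names and an invented flood threshold:

```python
# A toy version of the river-level idea: a sensor reports readings,
# and a monitor raises an alert when a level crosses a flood threshold.

FLOOD_THRESHOLD_M = 2.5  # invented figure for illustration

def check_reading(river, level_m):
    """Return an alert message if the level warrants one, else None."""
    if level_m >= FLOOD_THRESHOLD_M:
        return f"ALERT: {river} at {level_m:.2f} m - warn downstream residents"
    return None

# Simulated readings arriving every few minutes from one sensor.
readings = [("Cam", 1.9), ("Cam", 2.3), ("Cam", 2.7)]
for river, level in readings:
    alert = check_reading(river, level)
    if alert:
        print(alert)
```

The hard part, of course, is not this logic but the infrastructure the column mentions: powering, networking and maintaining millions of such sensors in the field.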

In the bleak midwinter, droning on

This morning’s Observer column:

Well, Black Friday has come and gone and this columnist has missed the boat – again. But if the marketing mythology is to be believed, countless millions of our better-organised fellow citizens have been dutifully clicking and purchasing.

This year, however, is slightly different because something new will have appeared on the wishlists of tech-savvy shoppers: drones. A quick search for them on Amazon.co.uk brought up 46 different models before I got tired of scrolling, ranging in price from under £20 to over £1,200. And over at the Apple store, they’re selling the Parrot AR.Drone 2.0 Power Edition Quadricopter, a snip at £299.95.

And that’s just the amateur/hobbyist end of the market. At the “serious” end, things rapidly get expensive…

Read on

LATER And you thought I was joking.

Well, see here:

Uber, disruption and Clayton Christensen

This morning’s Observer column:

Over the decades, “disruptive innovation” evolved into Silicon Valley’s highest aspiration. (It also fitted nicely with the valley’s attachment to Joseph Schumpeter’s idea about capitalism renewing itself in waves of “creative destruction”.) And, as often happens with soi-disant Big Ideas, Christensen’s insight has been debased by overuse. This, of course, does not please the Master, who is offended by ignorant jerks miming profundity by plagiarising his ideas.

Which brings us to an interesting article by Christensen and two of his academic colleagues in the current issue of the Harvard Business Review. It’s entitled “What Is Disruptive Innovation?” and in it the authors explain, in the soothing tones used by great minds when dealing with those of inferior intelligence, the essence of Christensen’s original concept. The article is eminently readable and cogent, but contains nothing new, so one begins to wonder what could be the peg for going over this particular piece of ground. And why now?

And then comes the answer: Uber. Christensen & co are obviously irritated by the valley’s conviction that the car-hailing service is a paradigm of disruptive innovation and so they devote a chunk of their article to arguing that while Uber might be disruptive – in the sense of being intensely annoying to the incumbents of the traditional taxi-cab industry – it is not a disruptive innovation in the Christensen sense…

Read on

Let’s turn the TalkTalk hacking scandal into a crisis

Yesterday’s Observer column:

The political theorist David Runciman draws a useful distinction between scandals and crises. Scandals happen all the time in society; they create a good deal of noise and heat, but in the end nothing much happens. Things go back to normal. Crises, on the other hand, do eventually lead to structural change, and in that sense play an important role in democracies.

So a good question to ask whenever something bad happens is whether it heralds a scandal or a crisis. When the phone-hacking story eventually broke, for example, many people (me included) thought that it represented a crisis. Now, several years – and a judicial enquiry – later, nothing much seems to have changed. Sure, there was a lot of sound and fury, but it signified little. The tabloids are still doing their disgraceful thing, and Rebekah Brooks is back in the saddle. So it was just a scandal, after all.

When the TalkTalk hacking story broke and I heard the company’s chief executive say in a live radio interview that she couldn’t say whether the customer data that had allegedly been stolen had been stored in encrypted form, the Runciman question sprang immediately to mind. That the boss of a communications firm should be so ignorant about something so central to her business certainly sounded like a scandal…

Read on

LATER Interesting blog post by Bruce Schneier. He opens with an account of how the CIA’s Director and the software developer Grant Blakeman had their email accounts hacked. Then,

Neither of them should have been put through this. None of us should have to worry about this.

The problem is a system that makes this possible, and companies that don’t care because they don’t suffer the losses. It’s a classic market failure, and government intervention is how we have to fix the problem.

It’s only when the costs of insecurity exceed the costs of doing it right that companies will invest properly in our security. Companies need to be responsible for the personal information they store about us. They need to secure it better, and they need to suffer penalties if they improperly release it. This means regulatory security standards.

The government should not mandate how a company secures our data; that will move the responsibility to the government and stifle innovation. Instead, government should establish minimum standards for results, and let the market figure out how to do it most effectively. It should allow individuals whose information has been exposed to sue for damages. This is a model that has worked in all other aspects of public safety, and it needs to be applied here as well.

He’s right. Only when the costs of insecurity exceed the costs of doing it right will companies invest properly in it. And governments can fix that, quickly, by changing the law. For once, this is something that’s not difficult to do, even in a democracy.