The (non)sharing economy

This morning’s Observer column:

A euphemism is a polite way of obscuring an uncomfortable or unpleasant truth. So pornography becomes adult entertainment, genocide becomes ethnic cleansing, sacking someone becomes letting him (or her) go. People pass away rather than die; toilets become rest rooms; CCTV cameras monitor public and private spaces for our comfort and safety; and shell shock evolved into battle fatigue before finally winding up as post-traumatic stress disorder – which is really a way of disguising the awkward fact that killing people in cold blood can do very bad things to your psyche.

The tech industry is also addicted to euphemism. Thus the ubiquitous, unfair, grotesquely unbalanced contract which gives an internet corporation all the rights and the user almost none is called an end-user licence agreement. Computers that have been illegally hacked are “pwned”. The wholesale hoovering-up of personal data by internet companies (and the state) is never called by its real name, which is surveillance. And so on.

But the word that is most subverted by the tech industry is share…

Read on

Hunting the Qubit

Today’s Observer column:

We live in a universe that virtually none of us will ever understand. Physicists, however, refuse to be daunted by this and have been casting about for ways of putting these quantum properties to practical use. In the process, their gaze alighted on that fundamental building block of digital technology, the humble binary digit (or “bit”) in which all digital information is encoded. In the Newtonian (ie non-quantum) world, a bit can take only one of two values – one or zero. But at the quantum level, superposition means that a quantum bit – a qubit – can be not just one or zero but any superposition of the two at the same time. Which means that a computer based on quantum principles would be much, much faster than a conventional, silicon-based one at certain kinds of problem. Various outfits have been trying to build one.

The results are controversial but intriguing…

Read on
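
For readers who'd like to see what superposition amounts to arithmetically, here is a minimal sketch in Python (my own illustration, not anything from the column): a single qubit simulated as a pair of complex amplitudes, with an equal superposition measuring as zero or one about half the time each.

```python
import numpy as np

# A classical bit is 0 or 1. A simulated qubit is a pair of complex
# amplitudes (a, b) with |a|^2 + |b|^2 = 1; a measurement yields 0 with
# probability |a|^2 and 1 with probability |b|^2.

zero = np.array([1, 0], dtype=complex)  # the state |0>
one = np.array([0, 1], dtype=complex)   # the state |1>

# An equal superposition of |0> and |1>.
plus = (zero + one) / np.sqrt(2)

def measure(state, shots=10_000):
    """Simulate repeated measurement of a single-qubit state."""
    p1 = abs(state[1]) ** 2                # probability of reading 1
    rng = np.random.default_rng(0)
    return rng.random(shots) < p1          # True where the outcome is 1

outcomes = measure(plus)
print(f"fraction of 1s: {outcomes.mean():.3f}")  # ~0.5 for this state
```

The promised speed-ups come not from anything this little simulation captures, of course, but from the fact that n real qubits can hold a superposition across 2^n amplitudes at once.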

Algorithmic power and programmers’ ethics

We know that power corrupts. But what about algorithmic power? This morning’s Observer column:

There is a direction of travel here – one that is taking us towards what an American legal scholar, Frank Pasquale, has christened the “black box society”. You might think that the subtitle – “the secret algorithms that control money and information” – says it all, except that it’s not just about money and information but increasingly about most aspects of contemporary life, at least in industrialised countries. For example, we know that Facebook algorithms can influence the moods and the voting behaviour of the service’s users. And we also know that Google’s search algorithms can effectively render people invisible. In some US cities, algorithms determine whether you are likely to be stopped and searched in the street. For the most part, it’s an algorithm that decides whether a bank will seriously consider your application for a mortgage or a loan. And the chances are that it’s a machine-learning or network-analysis algorithm that flags internet or smartphone users as being worthy of further examination. Uber drivers may think that they are working for themselves, but in reality they are managed by an algorithm. And so on.

Without us noticing it, therefore, a new kind of power – algorithmic power – has arrived in our societies. And for most citizens, these algorithms are black boxes – their inner logic is opaque to us. But they have values and priorities embedded in them, and those values are likewise opaque to us: we cannot interrogate them.

This poses two questions. First of all, who has legal responsibility for the decisions made by algorithms? The company that runs the services that are enabled by them? Maybe – depending on how smart their lawyers are.

But what about the programmers who wrote the code? Don’t they also have some responsibilities? Pasquale reports that some micro-targeting algorithms (the programs that decide what is shown on your browser screen, such as advertising) sort web users into categories which include “probably bipolar”, “daughter killed in car crash”, “rape victim”, and “gullible elderly”. A programmer wrote that code. Did he (for it was almost certainly a male) not have some ethical qualms about his handiwork?

Read on
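
To make the mechanism a little less abstract, here is a deliberately crude, entirely hypothetical sketch of how such categorisation can work at its simplest: keyword rules applied to a user’s search history. The category names echo those Pasquale reports; the rules and the data are invented, and real ad-tech systems are statistical and far more elaborate.

```python
# A toy, entirely hypothetical illustration of rule-based micro-targeting.
# The ethical point stands even at this level of crudity.

RULES = {
    "probably bipolar": ["mood swings", "mania symptoms"],
    "gullible elderly": ["miracle cure", "pension windfall"],
}

def categorise(search_history):
    """Return the ad categories a user's searches appear to match."""
    text = " ".join(search_history).lower()
    return [label for label, keywords in RULES.items()
            if any(kw in text for kw in keywords)]

print(categorise(["miracle cure for arthritis", "weather tomorrow"]))
# -> ['gullible elderly']
```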

‘Smart’ homes? Not yet

My Observer comment piece on what the Internet of Things looks like when it’s at home:

There is a technological juggernaut heading our way. It’s called the Internet of Things (IoT). For the tech industry, it’s the Next Big Thing, alongside big data, though in fact that pair are often just two sides of the same coin. The basic idea is that since computing devices are getting smaller and cheaper, and wireless network technology is becoming ubiquitous, it will soon be feasible to have trillions of tiny, networked computers embedded in everything. They can sense changes and act on them – turning things on and off, deciding whether to open a door, close a valve or order fresh supplies of milk – all the while communicating with one another and shipping data to server farms all over the place.

As ever with digital technology, there’s an underlying rationality to lots of this. The IoT could make our lives easier and our societies more efficient. If parking bays could signal to nearby cars that they are empty, then the nightmarish task of finding a parking place in crowded cities would be eased. If every river in the UK could tweet its level every few minutes, then we could have advance warning of downstream floods in time to alert those living in their paths. And so on.

But that kind of networking infrastructure takes time to build, so the IoT boys (and they are mostly boys, still) have set their sights closer to home, which is why we are beginning to hear a lot about “smart” homes. On further examination, this turns out mostly to mean houses stuffed with networked kit…

Read on
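
The software side of this, incidentally, is not exotic. Here is a minimal sketch of what a networked river-level sensor might run. The endpoint URL and the read_level() function are invented for illustration; a real device would use a driver for its gauge and most likely a lightweight protocol such as MQTT rather than plain HTTP.

```python
import json
import time
import urllib.request

ENDPOINT = "https://example.org/river-levels"   # hypothetical collector

def read_level():
    """Stand-in for reading the water-level gauge (metres)."""
    return 1.42

while True:
    # Read a value, ship it to a server, sleep, repeat.
    reading = {"sensor_id": "cam-001", "level_m": read_level(),
               "ts": time.time()}
    req = urllib.request.Request(
        ENDPOINT, data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)                 # ship the reading
    time.sleep(300)                             # every five minutes
```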

In the bleak midwinter, droning on

This morning’s Observer column:

Well, Black Friday has come and gone and this columnist has missed the boat – again. But if the marketing mythology is to be believed, countless millions of our better-organised fellow citizens have been dutifully clicking and purchasing.

This year, however, is slightly different because something new will have appeared on the wishlists of tech-savvy shoppers: drones. A quick search for them on Amazon.co.uk brought up 46 different models before I got tired of scrolling, ranging in price from under £20 to over £1,200. And over at the Apple store, they’re selling the Parrot AR.Drone 2.0 Power Edition Quadricopter, a snip at £299.95.

And that’s just the amateur/hobbyist end of the market. At the “serious” end, things rapidly get expensive…

Read on

LATER And you thought I was joking.

Well, see here:

Uber, disruption and Clayton Christensen

This morning’s Observer column:

Over the decades, “disruptive innovation” evolved into Silicon Valley’s highest aspiration. (It also fitted nicely with the valley’s attachment to Joseph Schumpeter’s idea about capitalism renewing itself in waves of “creative destruction”.) And, as often happens with soi-disant Big Ideas, Christensen’s insight has been debased by overuse. This, of course, does not please the Master, who is offended by ignorant jerks miming profundity by plagiarising his ideas.

Which brings us to an interesting article by Christensen and two of his academic colleagues in the current issue of the Harvard Business Review. It’s entitled “What Is Disruptive Innovation?” and in it the authors explain, in the soothing tones used by great minds when dealing with those of inferior intelligence, the essence of Christensen’s original concept. The article is eminently readable and cogent, but contains nothing new, so one begins to wonder what could be the peg for going over this particular piece of ground. And why now?

And then comes the answer: Uber. Christensen & co are obviously irritated by the valley’s conviction that the car-hailing service is a paradigm of disruptive innovation and so they devote a chunk of their article to arguing that while Uber might be disruptive – in the sense of being intensely annoying to the incumbents of the traditional taxi-cab industry – it is not a disruptive innovation in the Christensen sense…

Read on

Let’s turn the TalkTalk hacking scandal into a crisis

Yesterday’s Observer column:

The political theorist David Runciman draws a useful distinction between scandals and crises. Scandals happen all the time in society; they create a good deal of noise and heat, but in the end nothing much happens. Things go back to normal. Crises, on the other hand, do eventually lead to structural change, and in that sense play an important role in democracies.

So a good question to ask whenever something bad happens is whether it heralds a scandal or a crisis. When the phone-hacking story eventually broke, for example, many people (me included) thought that it represented a crisis. Now, several years – and a judicial enquiry – later, nothing much seems to have changed. Sure, there was a lot of sound and fury, but it signified little. The tabloids are still doing their disgraceful thing, and Rebekah Brooks is back in the saddle. So it was just a scandal, after all.

When the TalkTalk hacking story broke and I heard the company’s chief executive say in a live radio interview that she couldn’t say whether the customer data that had allegedly been stolen had been stored in encrypted form, the Runciman question sprang immediately to mind. That the boss of a communications firm should be so ignorant about something so central to her business certainly sounded like a scandal…

Read on

LATER Interesting blog post by Bruce Schneier. He opens with an account of how the CIA’s Director and the software developer Grant Blakeman had their email accounts hacked. Then,

Neither of them should have been put through this. None of us should have to worry about this.

The problem is a system that makes this possible, and companies that don’t care because they don’t suffer the losses. It’s a classic market failure, and government intervention is how we have to fix the problem.

It’s only when the costs of insecurity exceed the costs of doing it right that companies will invest properly in our security. Companies need to be responsible for the personal information they store about us. They need to secure it better, and they need to suffer penalties if they improperly release it. This means regulatory security standards.

The government should not mandate how a company secures our data; that will move the responsibility to the government and stifle innovation. Instead, government should establish minimum standards for results, and let the market figure out how to do it most effectively. It should allow individuals whose information has been exposed to sue for damages. This is a model that has worked in all other aspects of public safety, and it needs to be applied here as well.

He’s right. Only when the costs of insecurity exceed the costs of doing it right will companies invest properly in it. And governments can fix that, quickly, by changing the law. For once, this is something that’s not difficult to do, even in a democracy.

The end of private reading is nigh

This morning’s Observer column about the Investigatory Powers bill:

The draft bill proposes that henceforth everyone’s clickstream – the URLs of every website one visits – is to be collected and stored for 12 months and may be inspected by agents of the state under certain arrangements. But collecting the stream will be done without any warrant. To civil libertarians who are upset by this new power, the government’s response boils down to this: “Don’t worry, because we’re just collecting the part of the URL that specifies the web server and that’s just ‘communications data’ (aka metadata); we’re not reading the content of the pages you visit, except under due authorisation.”

This is the purest cant, for two reasons…

Read on
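
It’s worth spelling out what “the part of the URL that specifies the web server” actually means: it’s the hostname, and extracting it is trivial. A few lines of Python (the example URL is invented) show the distinction the government is leaning on.

```python
from urllib.parse import urlsplit

# The government's distinction: keep the host ("communications data"),
# not the path ("content"). Example URL invented for illustration.
url = "https://www.example-health-site.org/conditions/depression/treatment"

host = urlsplit(url).hostname
print(host)   # -> www.example-health-site.org
```

The trouble, of course, is that for specialist sites the hostname alone is already revealing: knowing that someone visited such a site tells you what they were reading about, path or no path.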

Amazon’s Cloud Nine

This morning’s Observer column:

In 1999, Andy Grove, then the CEO of Intel, was widely ridiculed for declaring that “in five years’ time there won’t be any internet companies. All companies will be internet companies or they will be dead.” What he meant was that anybody who aspired to be in business in 2004 would have to deal with the internet in one way or another, just as they relied on electricity. And he was right; that’s what a GPT (a general-purpose technology) is like: it’s pervasive.

But digital technology differs in four significant ways from earlier GPTs. First of all, it is characterised by zero – or near-zero – marginal costs: once you’ve made the investment needed to create a digital good, it costs next to nothing to roll out and distribute a million (or indeed a billion) copies. Second, digital technology can exploit network effects at much greater speeds than the GPTs of the past. Third, almost everything that goes on in digital networks is governed by so-called power law distributions, in which a small number of actors (sites, companies, publishers…) get most of the action, while everyone else languishes in a “long tail”. Finally, digital technology sometimes gives rise to technological “lock-in”, where the proprietary standards of one company become the de facto standards for an entire industry. Thus, Microsoft once had that kind of lock-in on the desktop computer market: if you wanted to be in business you could have any kind of computer you wanted – so long as it ran Windows…

Read on
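
The power-law point can be made concrete in a few lines (illustrative numbers only, not real traffic data): give the kth-ranked site a weight proportional to 1/k, as Zipf’s law does, and see how top-heavy the result is.

```python
# Illustrative only: a Zipf-style power law, where the k-th ranked site
# gets traffic proportional to 1/k.

N = 10_000                                  # number of sites
weights = [1 / k for k in range(1, N + 1)]  # Zipf with exponent 1
total = sum(weights)

top10 = sum(weights[:10]) / total
print(f"share of all traffic taken by the top 10 sites: {top10:.1%}")
# With N = 10,000 this comes to roughly 30%: ten actors out of ten
# thousand take about a third of the action; the rest are the long tail.
```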

LATER Just came on this — which makes the same point about Amazon’s AWS, only more forcefully.

Just a pin-prick? Or a big deal?

This morning’s Observer column:

If you have ever been a hospital patient, then you will know the drill: before anything else happens, you have to have your “bloods done”. You roll up your sleeve, a phlebotomist searches your lower arm for a suitable vein, inserts a sterilised needle and extracts a blood sample that is then labelled and sent off to a lab for analysis.

Depending on your condition, this can happen a lot. If you are a cancer sufferer on chemotherapy, for example, you may come to think of your arms as pincushions and you sometimes have to watch in dismay as the phlebotomist hunts up and down for a suitable vein. Although the analysis of blood samples is now highly automated and efficient, at the sample-collection end it’s very time consuming and resource intensive.

The mind boggles at the amount the National Health Service must spend on it every year. And yet it is an absolutely central part of modern healthcare: blood tests are on the critical path of a very large number of diagnostic and treatment regimes.

Enter Theranos, a California startup that has (or claims to have) developed novel approaches to laboratory-based diagnostic blood tests using the science of microfluidics, which concerns the manipulation of tiny amounts of fluids (think ink-jet printers, for example)…

Read on