Q: What do we want? A: mobility, not cars.

This morning’s Observer column:

I’m looking at two photographs of the main street of the small town in which I was born. Both are taken from the same vantage point – looking up the hill to the T-junction at the top. The two photographs are separated by nearly a century: the first was taken in the 1930s, the second sometime in the last few years.

Topographically, the street remains largely unchanged: it’s a straight road with two- or three-storey shops and houses on either side. But the two photographs show completely different streets. The 1930s one shows a spacious thoroughfare, with people walking on the pavements on both sides of the street: here and there, two or three individuals stand in the road, possibly engaged in conversation. The contemporary photograph shows a narrow, congested gorge. The pavements are crowded with pedestrians, but there are no people on the road. In fact, in some places, one cannot even see its surface.

Why the difference between the two photographs? You know the answer: cars, vans and traffic. Both sides of the contemporary street are lined with parked vehicles, effectively reducing the width of the road by 12ft. And there’s a traffic jam, which means that even the vehicles that aren’t parked are stationary.

This picture is repeated in millions of towns and cities worldwide…

Read on

Don’t let WhatsApp nudge you into sharing your data with Facebook

This morning’s Observer column:

When WhatsApp, the messaging app, launched in 2009, it struck me as one of the most interesting innovations I’d seen in ages – for two reasons. The first was that it seemed beautifully designed from the outset: clean, minimalist and efficient. The second was that it had a business model that did not depend on advertising. Instead, users got a year free, after which they paid a modest annual subscription.

Better still, its co-founder, Jan Koum, seemed to have a very healthy aversion to the surveillance capitalism that underpins the vast revenues of Google, Facebook and co, in which they extract users’ personal data without paying for it, and then refine and sell it to advertisers…

Ah yes. That was then. But now…

Read on

How the Internet is changing our politics
My longish opinion piece on the US election in today’s Observer. (Hint: it’s not all good news.)

Ever since the internet went mainstream in the 1990s, people have wondered how it would affect democratic politics. In seeking an answer to the question, we made the mistake that people have traditionally made when thinking about new communications technology: we overestimated the short-term impacts while grievously underestimating the longer-term ones.

The first-order effects appeared in 2004 when Howard Dean, then governor of Vermont, entered the Democratic primaries to seek the party’s nomination for president. What made his campaign distinctive was that he used the internet for fundraising. Instead of the traditional method of tapping wealthy donors, Dean and his online guru, Larry Biddle, turned to the internet and raised about $50m, mostly in the form of small individual donations from 350,000 supporters. By the standards of the time, it was an eye-opening achievement.

In the event, Dean’s campaign imploded when he made an over-excited speech after coming third in the Iowa caucuses – the so-called “Dean scream” which, according to the conventional wisdom of the day, showed that he was too unstable a character to be commander-in-chief. Looked at in the light of the Trump campaign, this is truly weird, for compared with the current Republican candidate, Dean looks like a combination of Spinoza and St Francis of Assisi…

Read on

The long history of ‘cyber’

My Observer piece on Thomas Rid’s alternative history of computing, The Rise of the Machines: The Lost History of Cybernetics:

Where did the “cyber” in “cyberspace” come from? Most people, when asked, will probably credit William Gibson, who famously introduced the term in his celebrated 1984 novel, Neuromancer. It came to him while watching some kids play early video games. Searching for a name for the virtual space in which they seemed immersed, he wrote “cyberspace” in his notepad. “As I stared at it in red Sharpie on a yellow legal pad,” he later recalled, “my whole delight was that it meant absolutely nothing.”

How wrong can you be? Cyberspace turned out to be the space that somehow morphed into the networked world we now inhabit, and which might ultimately prove our undoing by making us totally dependent on a system that is both unfathomably complex and fundamentally insecure. But the cyber- prefix actually goes back a long way before Gibson – to the late 1940s and Norbert Wiener’s book, Cybernetics, Or Control and Communication in the Animal and the Machine, which was published in 1948.

Cybernetics was the term Wiener, an MIT mathematician and polymath, coined for the scientific study of feedback control and communication in animals and machines. As a “transdiscipline” that cuts across traditional fields such as physics, chemistry and biology, cybernetics had a brief and largely unsuccessful existence: few of the world’s universities now have departments of cybernetics. But as Thomas Rid’s absorbing new book, The Rise of the Machines: The Lost History of Cybernetics shows, it has had a long afterglow as a source of mythic inspiration that endures to the present day…

Read on

Time to stop firms sailing under the ‘tech’ flag of convenience

This morning’s Observer column:

The rise and precipitous fall of Theranos is a cautionary tale for our times and is beautifully told by Nick Bilton of Vanity Fair in a fascinating article that is worth reading in full. For me, though, it has a wider significance, because it illustrates a more general problem with corporations that sail under the tech banner, namely their loud insistence that any attempt to regulate them constitutes an attempt by the analogue world to stifle innovation and hold back the digital future.

At the moment, most governments and almost all mainstream media are so dazzled by digital technology that they seem unable to appreciate what’s really going on. What’s happening is that the internet and its associated technologies have morphed from exotic novelties into a general purpose technology (GPT) like mains electricity. That has two implications. The first is that the companies that have mastered the technology are moving out of the tech compound and into the wider world. This is why Apple is planning to move into the automobile business, Tesla is heading for trucking, Google is moving into healthcare, Uber is aiming to eliminate car ownership altogether and Airbnb has the global hotel business in its sights.

The second implication is that, as Anil Dash puts it in an insightful essay, there is no “tech” industry any more…

Read on

Collateral damage and the NSA’s stash of cyberweapons

This morning’s Observer column:

All software has bugs and all networked systems have security holes in them. If you wanted to build a model of our online world out of cheese, you’d need emmental to make it realistic. These holes (vulnerabilities) are constantly being discovered and patched, but the process by which this happens is, inevitably, reactive. Someone discovers a vulnerability and reports it either to the software company that wrote the code or to US-CERT, the United States Computer Emergency Readiness Team. A fix for the vulnerability is then devised and a “patch” is issued by computer security companies such as Kaspersky and/or by software and computer companies. At the receiving end, it is hoped that computer users and network administrators will then install the patch. Some do, but many don’t, alas.

It’s a lousy system, but it’s the only one we’ve got. It has two obvious flaws. The first is that the response always lags behind the threat by days, weeks or months, during which the malicious software that exploits the vulnerability is doing its ghastly work. The second is that it is completely dependent on people reporting the vulnerabilities that they have discovered.

Zero-day vulnerabilities are the unreported ones…

Read on

Hypocrisy on stilts: Facebook (closed) celebrating the Web (open)

This morning’s Observer column:

If there were a Nobel prize for hypocrisy, then its first recipient ought to be Mark Zuckerberg, the Facebook boss. On 23 August, all his 1.7 billion users were greeted by this message: “Celebrating 25 years of connecting people. The web opened up to the world 25 years ago today! We thank Sir Tim Berners-Lee and other internet pioneers for making the world more open and connected.”

Aw, isn’t that nice? From one “pioneer” to another. What a pity, then, that it is a combination of bullshit and hypocrisy. In relation to the former, the guy who invented the web, Tim Berners-Lee, is as mystified by this “anniversary” as everyone else. “Who on earth made up 23 August?” he asked on Twitter. Good question. In fact, as the Guardian pointed out: “If Facebook had asked Berners-Lee, he’d probably have told them what he’s been telling people for years: the web’s 25th birthday already happened, two years ago.”

“In 1989, I delivered a proposal to Cern for the system that went on to become the worldwide web,” he wrote in 2014. It was that year, not this one, that he said we should celebrate as the web’s 25th birthday.

It’s not the inaccuracy that grates, however, but the hypocrisy. Zuckerberg thanks Berners-Lee for “making the world more open and connected”. So do I. What Zuck conveniently omits to mention, though, is that he is embarked upon a commercial project whose sole aim is to make the world more “connected” but less open. Facebook is what we used to call a “walled garden” and now call a silo: a controlled space in which people are allowed to do things that will amuse them while enabling Facebook to monetise their data trails. One network to rule them all. If you wanted a vision of the opposite of the open web, then Facebook is it…

Read on

Phones, photography and the Snapchat factor

This morning’s Observer column:

Living and working, as I do, in a historic city that is swamped by tourists in the summer, I regularly get the opportunity to do some photo-ethnography. You can tell someone’s age by the kind of camera they are using. Elderly folks are still using point-and-shoot compacts. Middle-aged folks are sporting “prosumer” digital single-lens reflex cameras (DSLRs) from Canon, Nikon, Fuji and Panasonic. But as far as I can see, everyone under the age of 25 is using a smartphone, possibly with the assistance of a selfie stick.

This is partly because the main reason young people take photographs is to post them on social media, and smartphones make that easy to do. But that’s not the whole story. Those who are more serious about photography tend to upload their pictures to photo-hosting services such as Flickr. Guess what the most popular camera for Flickr members is? Apple’s iPhone – by a mile…

Read on

Review of ‘The Cyber Effect’

My Observer review of Mary Aiken’s new book.

Note the doctorate after the author’s name; and the subtitle: A Pioneering Cyberpsychologist Explains How Human Behaviour Changes Online; and the potted bio, informing us that “Dr Mary Aiken is the world’s foremost forensic cyberpsychologist” – all clues indicating that this is a book targeted at the US market, another addition to that sprawling genre of books by folks with professional qualifications using pop science to frighten the hoi polloi.

This is a pity, because The Cyber Effect is really rather good and doesn’t need its prevailing tone of relentless self-promotion to achieve its desired effect, which is to make one think about what digital technology is doing to us…

Read on