Reflections on the revolution in automobiles

As readers of my newspaper column know, I think that it would be hard to overestimate the significance of Google’s self-driving car. This is not because I expect to find autonomous vehicles on our roads any time soon, but because it signals an urgent need to revise our assumptions about what machines can and cannot do.

As we consider the future of transportation, the implications of innovations like Google’s self-driving car become increasingly apparent. This advancement challenges us to rethink the capabilities of machines and their potential to enhance our daily lives. One area that stands to benefit from this technological evolution is automobile upgrades. With features such as advanced safety systems, connectivity, and enhanced fuel efficiency becoming standard, consumers are encouraged to explore how their vehicles can be optimized. For example, adding a roof rack from Silverado Roof Rack not only increases storage capacity but also enables drivers to carry recreational gear, enhancing their overall driving experience.

Moreover, the push for innovation extends beyond personal vehicles. Manufacturers are responding to consumer demands for more efficient and versatile options, leading to a surge in aftermarket upgrades. These enhancements can range from performance improvements to aesthetic modifications, enabling drivers to customize their vehicles to suit their lifestyles. As we embrace the possibilities of automotive upgrades, it’s crucial to stay informed about the latest trends and technologies. With advancements like autonomous driving on the horizon, the automotive industry is poised for a significant transformation that will redefine our relationship with machines.

If you’d asked me ten years ago what tasks would lie beyond the capacity of computers I would confidently have included driving safely in a crowded urban environment in my list. Brooding on this over the course of the last few months I was coming to think that perhaps this judgement might have been a reflection of my ignorance of robotics at the time. But then, reading Erik Brynjolfsson’s and Andrew McAfee’s new book, The Second Machine Age, I was pointed to a book by Frank Levy and Richard Murnane published in 2004 and entitled The New Division of Labor: How Computers Are Creating the Next Job Market, in which they focussed on the division between human and machine labour.

Levy and Murnane put information processing tasks on a spectrum

At one end are tasks like arithmetic that require only the application of well-understood rules. Since computers are really good at following rules, it follows that they should arithmetic and similar tasks. And not just arithmetic.

For example, a person’s credit score is a good general predictor of whether they’ll pay pack their mortgage as promised… So the decision about whether or not to give a mortgage can be effectively boiled down to a rule.

But Levy and Murnane thought that tasks involving pattern recognition would be beyond computers. And they cite driving a car as a paradigmatic example:

As the driver makes his left turn against traffic, he confronts a wall of images and sounds generated by oncoming cars, traffic lights, storefronts, billboards, trees, and a traffic policeman. Using his knowledge, he must estimate the size and position of each of these objects and the likelihood that they pose a hazard… Articulating this knowledge and embedding it in computer software for all but highly structures situations are at present enormously difficult talks… Computers cannot easily substitute for humans in [jobs like driving].

So I wasn’t the only person a decade ago who doubted that computers could drive.

This is the conjecture that the Google self-driving car refutes. There’s a terrific piece in the New Yorker about the genesis and execution of the Google project which, among other things, illuminates the height of the mountain that the Google team had to climb.

In the beginning, [Sergey] Brin and [Larry] Page presented Thrun’s team with a series of darpa-like challenges. They managed the first in less than a year: to drive a hundred thousand miles on public roads. Then the stakes went up. Like boys plotting a scavenger hunt, Brin and Page pieced together ten itineraries of a hundred miles each. The roads wound through every part of the Bay Area—from the leafy lanes of Menlo Park to the switchbacks of Lombard Street. If the driver took the wheel or tapped the brakes even once, the trip was disqualified. “I remember thinking, How can you possibly do that?” Urmson told me. “It’s hard to game driving through the middle of San Francisco.”

It took the team a year and a half to master Page and Brin’s ten hundred-mile road trips.

The first one ran from Monterey to Cambria, along the cliffs of Highway 1. “I was in the back seat, screaming like a little girl,” Levandowski told me. One of the last started in Mountain View, went east across the Dumbarton Bridge to Union City, back west across the bay to San Mateo, north on 101, east over the Bay Bridge to Oakland, north through Berkeley and Richmond, back west across the bay to San Rafael, south to the mazy streets of the Tiburon Peninsula, so narrow that they had to tuck in the side mirrors, and over the Golden Gate Bridge to downtown San Francisco. When they finally arrived, past midnight, they celebrated with a bottle of champagne. Now they just had to design a system that could do the same thing in any city, in all kinds of weather, with no chance of a do-over. Really, they’d just begun.

The Google car has now driven more than half a million miles without causing an accident, which is, says the New Yorker writer, Burkhard Bilger, about twice as far as the average American driver goes before crashing.

Of course, the computer has always had a human driver to take over in tight spots. Left to its own devices, Thrun says, it could go only about fifty thousand miles on freeways without a major mistake. Google calls this the dog-food stage: not quite fit for human consumption. “The risk is too high,” [Sebastian] Thrun says. “You would never accept it.” The car has trouble in the rain, for instance, when its lasers bounce off shiny surfaces.

Just for the record, this (human) driver also has trouble in the rain. I’ve been driving for over 40 years, and in that time have only had one minor accident (I ran into the car in front at about 5mph when disembarking from a car ferry), so on paper I’m a fairly competent driver. But when driving in Cambridge (a town full of cyclists) on wet dark winter’s nights I’m perpetually worried that I will not see a cyclist who’s not wearing reflective gear or a walker who suddenly rushes across a pedestrian crossing.

So one anecdote in the Bilger piece struck home. A Google engineer told him about driving one night on a dark country road when the car suddenly and inexplicably slowed down.

“I was thinking, What the hell? It must be a bug,” he told me. “Then we noticed the deer walking along the shoulder.” The car, unlike its riders, could see in the dark.

The other morning, after a cyclist suddenly appeared apparently from nowhere on a city crossing, I found myself thinking that I could really use a car with that kind of extra-sensory perception.

And of course this is how the fruits of the Google research and development will first appear — as extra sensors designed to alert human drivers. Volvo already do this in some of their models which detect when a car is veering across motorway lanes and infer that the driver may be getting sleepy. We will see a lot more of this before long. And I, for one, will welcome it.

Why workers in neoliberal economies are set up to lose the ‘race against the machine’

As readers of this blog (and my Observer column) will know, Erik Brynjolfsson’s and Andrew McAfee’s Race Against the Machine has influenced the way I think about technology and our networked future. This talk by John Hagel presents an insightful gloss on the book’s analysis. Hagel argues that the reason so many modern jobs are so vulnerable to automation is that they have effectively been designed to be vulnerable. They tend to be “tightly scripted,” “highly standardized,” and leave no room for “individual initiative or creativity.” In short, these are the types of jobs that machines can perform much better at than human beings. So what effectively is going on is companies putting “a giant target sign on the backs of American workers”.

So every time you see a manager or administrator proudly unveiling a new paper or online form for imposing bureaucratic order on an organisational process that hitherto had been entrusted to human judgement, you will know where the targets are being affixed.

Beyond gadgetry lies the real technology

This morning’s Observer column.

Cloud computing is a good illustration of why much media commentary about – and public perceptions of – information technology tends to miss the point. By focusing on tangible things – smartphones, tablets, Google Glass, embedded sensors, wearable devices, social networking services, and so on – it portrays technology as gadgetry, much as earlier generations misrepresented (and misunderstood) the significance of solid state electronics by calling portable radios “transistors”.

What matters, in other words, is not the gadget but the underlying technology that makes it possible. Cloud computing is what turns the tablet and the smartphone into viable devices.

Google’s robotics drive

This morning’s Observer column.

You may not have noticed it, but over the past year Google has bought eight robotics companies. Its most recent acquisition is an outfit called Boston Dynamics, which makes the nearest thing to a mechanical mule that you are ever likely to see. It’s called Big Dog and it walks, runs, climbs and carries heavy loads. It’s the size of a large dog or small mule – about 3ft long, 2ft 6in tall, weighs 240lbs, has four legs that are articulated like an animal’s, runs at 4mph, climbs slopes up to 35 degrees, walks across rubble, climbs muddy hiking trails, walks in snow and water, carries a 340lb load, can toss breeze blocks and can recover its balance when walking on ice after absorbing a hefty sideways kick.

You don’t believe me? Well, just head over to YouTube and search for “Boston Dynamics”. There, you will find not only a fascinating video of Big Dog in action, but also confirmation that its maker has a menagerie of mechanical beasts, some of them humanoid in form, others resembling predatory animals. And you will not be surprised to learn that most have been developed on military contracts, including some issued by Darpa, the Defence Advanced Research Projects Agency, the outfit that originally funded the development of the internet.

Should we be concerned about this? Yes, but not in the way you might first think…

Read on…

Peering into the future

I was very struck by this piece by Zachary M. Seward in my Quartz Weekend Briefing.

(En passant, Quartz has been one of the great discoveries of 2013.)

Half a century ago, author Isaac Asimov peered into the future: “What will the World’s Fair of 2014 be like?” he wrote in the New York Times. “I don’t know, but I can guess.”

With the exception of assuming the World’s Fair would still be around, Asimov was remarkably prescient. His essay forecast everything from self-driving cars (“Much effort will be put into the designing of vehicles with ‘Robot-brains’”) to Keurig machines (“Kitchen units will be devised that will prepare ‘automeals,’ heating water and converting it to coffee”) to photochromic lenses (“The degree of opacity of the glass may even be made to alter automatically in accordance with the intensity of the light falling upon it”).

But Asimov’s most impressive prophecy had less to do with gadgets than perceiving what that progress would mean for society. ”The world of A.D. 2014 will have few routine jobs that cannot be done better by some machine than by any human being,” he wrote. Later, he added, ”The lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.”

Heading into 2014, the so-called disruptive technologies we write about frequently at Quartz—from robotics to 3D printing to drones—are magical, yes, and inevitable, too. They also carry with them a specter of loss. Lost jobs, mostly, but also a sense of being lost. Where do we go from here? What is society’s replacement for factory work, clerical work, retail work? The honest answer is that we have none, at least for now.

The US may never return to full employment. Ravaged economies in Europe are putting an entire generation of youth at risk. China can’t put its college graduates to work. Jobs simply aren’t materializing.

Predictions are a fool’s errand. (Asimov assumed we would have moon colonies.) But if we had to make just one forecast, it would be that, in 2014, the reality of this loss of work will hit the world hard. The bright side is that we may finally start to confront the issue and start working on a new economy with jobs to spare.

Robot or not?

This (from a link sent by Andrew Ingram, for which many thanks) is fascinating.

Recently, Time Washington Bureau Chief Michael Scherer received a phone call from an apparently bright and engaging woman asking him if he wanted a deal on his health insurance. But he soon got the feeling something wasn’t quite right.

After asking the telemarketer point blank if she was a real person or a computer-operated robot, she chuckled charmingly and insisted she was real. Looking to press the issue, Scherer asked her a series of questions, which she promptly failed. Such as, “What vegetable is found in tomato soup?” To which she responded by saying she didn’t understand the question. When asked what day of the week it was yesterday, she complained of a bad connection (ah, the oldest trick in the book).

Here’s the recording:

Google vs Apple: a contrast

In the last year, Google has bought just about every small company (i.e. eight companies) doing interesting work in robotics — including Boston Dynamics, whose creature is shown in this video.

In the same period, Apple has, er, instituted a share-buyback program and brought out some incrementally-improved products.

So here’s my question (which is prompted by something Jason Calcanis said): which company is focussed on the distant future? The obvious inference seems to be that Apple can’t think of anything really radical to do with its mountain of cash.

UPDATE: Charles Arthur points out that, according to Wikipedia, Apple acquired ten companies in 2013, of which three are involved in mapping and two in semiconductors. So maybe they are up to something.

Why do governments screw up IT projects?

This morning’s Observer column:

This is a tale of two cities – Washington and London – and of the governments that rule from them. What links the pair is the puzzling failure of said governments to manage two vital IT projects. In both cases, the projects are critically important for the political credibility of their respective administrations. And yet they are both in trouble for reasons that most engineering and computer science undergraduates could have spotted.

So here’s the puzzle: how is it that governments stuffed with able and conscientious civil servants screw up so spectacularly whenever IT is involved?

Let us start with Obamacare, the US president’s landmark reform of his country’s dysfunctional healthcare system…

Read on…

Beyond the bubble

Yesterday’s Observer column.

The bad news, therefore, is that we’re in a new technology bubble. If you are impolite enough to mention this in Silicon Valley at the moment, however, then people will cut you dead. That’s par for the bubble course. The folks who are caught up in one do not appreciate well-meaning attempts to rain on their parade. When the Celtic tiger was roaring in my beloved homeland, for example, a lone economist named Morgan Kelly dared to say that the tiger had no fur – and was roundly abused for his pains.

The good news is that when the current technology bubble pops there will be less collateral damage than last time. This is largely because it costs so much less to start a technology company nowadays and the funding models (and therefore the investment risks) are different…