As readers of my newspaper column know, I think that it would be hard to overestimate the significance of Google’s self-driving car. This is not because I expect to find autonomous vehicles on our roads any time soon, but because it signals an urgent need to revise our assumptions about what machines can and cannot do.
If you’d asked me ten years ago what tasks would lie beyond the capacity of computers, I would confidently have included driving safely in a crowded urban environment in my list. Brooding on this over the last few months, I had begun to think that this judgement might have been a reflection of my ignorance of robotics at the time. But then, reading Erik Brynjolfsson and Andrew McAfee’s new book, The Second Machine Age, I was pointed to a book by Frank Levy and Richard Murnane published in 2004 and entitled The New Division of Labor: How Computers Are Creating the Next Job Market, in which they focussed on the division between human and machine labour.
Levy and Murnane put information-processing tasks on a spectrum.
At one end are tasks like arithmetic that require only the application of well-understood rules. Since computers are really good at following rules, it follows that they should be able to do arithmetic and similar tasks. And not just arithmetic.
For example, a person’s credit score is a good general predictor of whether they’ll pay back their mortgage as promised… So the decision about whether or not to give a mortgage can be effectively boiled down to a rule.
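To make the point concrete, the kind of rule Levy and Murnane have in mind really can be written in a few lines of code. Here is a minimal sketch (the function name and the threshold of 620 are my own illustrative choices, not figures from their book):

```python
def approve_mortgage(credit_score: int, threshold: int = 620) -> bool:
    """Toy rule-based lending decision: approve the mortgage
    if the applicant's credit score meets an illustrative threshold."""
    return credit_score >= threshold

# A strong score clears the threshold; a weak one does not.
print(approve_mortgage(700))  # True
print(approve_mortgage(550))  # False
```

Once a task is reducible to a rule this explicit, a computer can apply it faster and more consistently than any loan officer.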
But Levy and Murnane thought that tasks involving pattern recognition would be beyond computers. And they cite driving a car as a paradigmatic example:
As the driver makes his left turn against traffic, he confronts a wall of images and sounds generated by oncoming cars, traffic lights, storefronts, billboards, trees, and a traffic policeman. Using his knowledge, he must estimate the size and position of each of these objects and the likelihood that they pose a hazard… Articulating this knowledge and embedding it in computer software for all but highly structured situations are at present enormously difficult tasks… Computers cannot easily substitute for humans in [jobs like driving].
So I wasn’t the only person a decade ago who doubted that computers could drive.
This is the conjecture that the Google self-driving car refutes. There’s a terrific piece in the New Yorker about the genesis and execution of the Google project which, among other things, illuminates the height of the mountain that the Google team had to climb.
In the beginning, [Sergey] Brin and [Larry] Page presented Thrun’s team with a series of DARPA-like challenges. They managed the first in less than a year: to drive a hundred thousand miles on public roads. Then the stakes went up. Like boys plotting a scavenger hunt, Brin and Page pieced together ten itineraries of a hundred miles each. The roads wound through every part of the Bay Area—from the leafy lanes of Menlo Park to the switchbacks of Lombard Street. If the driver took the wheel or tapped the brakes even once, the trip was disqualified. “I remember thinking, How can you possibly do that?” Urmson told me. “It’s hard to game driving through the middle of San Francisco.”
It took the team a year and a half to master Page and Brin’s ten hundred-mile road trips.
The first one ran from Monterey to Cambria, along the cliffs of Highway 1. “I was in the back seat, screaming like a little girl,” Levandowski told me. One of the last started in Mountain View, went east across the Dumbarton Bridge to Union City, back west across the bay to San Mateo, north on 101, east over the Bay Bridge to Oakland, north through Berkeley and Richmond, back west across the bay to San Rafael, south to the mazy streets of the Tiburon Peninsula, so narrow that they had to tuck in the side mirrors, and over the Golden Gate Bridge to downtown San Francisco. When they finally arrived, past midnight, they celebrated with a bottle of champagne. Now they just had to design a system that could do the same thing in any city, in all kinds of weather, with no chance of a do-over. Really, they’d just begun.
The Google car has now driven more than half a million miles without causing an accident, which is, says the New Yorker writer, Burkhard Bilger, about twice as far as the average American driver goes before crashing.
Of course, the computer has always had a human driver to take over in tight spots. Left to its own devices, Thrun says, it could go only about fifty thousand miles on freeways without a major mistake. Google calls this the dog-food stage: not quite fit for human consumption. “The risk is too high,” [Sebastian] Thrun says. “You would never accept it.” The car has trouble in the rain, for instance, when its lasers bounce off shiny surfaces.
Just for the record, this (human) driver also has trouble in the rain. I’ve been driving for over 40 years, and in that time have only had one minor accident (I ran into the car in front at about 5mph when disembarking from a car ferry), so on paper I’m a fairly competent driver. But when driving in Cambridge (a town full of cyclists) on wet dark winter’s nights I’m perpetually worried that I will not see a cyclist who’s not wearing reflective gear or a walker who suddenly rushes across a pedestrian crossing.
So one anecdote in the Bilger piece struck home. A Google engineer told him about driving one night on a dark country road when the car suddenly and inexplicably slowed down.
“I was thinking, What the hell? It must be a bug,” he told me. “Then we noticed the deer walking along the shoulder.” The car, unlike its riders, could see in the dark.
The other morning, after a cyclist suddenly appeared apparently from nowhere on a city crossing, I found myself thinking that I could really use a car with that kind of extra-sensory perception.
And of course this is how the fruits of Google’s research and development will first appear — as extra sensors designed to alert human drivers. Volvo already do this in some of their models, which detect when a car is veering across motorway lanes and infer that the driver may be getting sleepy. We will see a lot more of this before long. And I, for one, will welcome it.