Talking cats and corporate social responsibility

This morning’s Observer column.

It’s 4.30 on a gloomy winter’s afternoon. I’m sitting with my grandson having one of those conversations in which grandsons explain complicated stuff to their grandads. He is four years old, omniscient in the way that four-year-olds are, and tolerant of my ignorance of important matters.

The conversation turns to computing and he inquires whether I have Talking Tom Cat on my iPad. “No,” I say. “What is it?” He explains that it’s a cool game that his grandma has on her iPad. There is a cat called Tom who listens to what you say to him and then repeats it in a funny voice. Also there’s a dog who does funny things.

So I dig out my iPad and we head over to the app store where, sure enough, Talking Tom Cat 2 is available as a free download. A few minutes later it’s running on my iPad…

Read on to find out what happens next.

Are referenda a good way to make national decisions?

Robert Cooper is in no doubt about this.

This is easy to answer: no. It is shameful that few political leaders are ready to say so. Democracy is not just about voting. It is also about debate and about responsibility.

Debate is necessary to understand complex issues. We invented representative democracy because debate is time-consuming and it is not practical in a modern state to assemble the whole population in market squares to debate issues. (In Athens the people were able to do this because citizens were few and they had helots and women to do the work.) Under the system of government “by the people”, the people choose the government and then hold it accountable when they don’t like what it does. If referendums are “more democratic” than decisions by parliament, why not make decisions about taxation or electricity prices by referendum, as has been tried in California (and then the lights went out)? When bad decisions are made in this way, who takes responsibility?

For years, both parties resisted calls for a referendum on capital punishment because they feared there would be a majority in favour of it. Over time and through long debates, parliament became convinced by the evidence that capital punishment had no deterrent value and that innocent people had been hanged. Yet they feared that, in a referendum, the debate would be shallow and voters would follow prejudice rather than the evidence.

The referendum on the Alternative Vote (AV) showed how difficult it can be to generate a serious debate on matters that are important but complicated where mastery of the detail demands time.

Yep.

Dan Ellsberg on Edward Snowden

Hi Reddit,

I am Daniel Ellsberg, the former State and Defense Department official who leaked 7,000 pages of Top Secret documents on the Vietnam War to the New York Times and 19 other papers in 1971. Recently, I co-founded the Freedom of the Press Foundation. Yesterday, we announced Edward Snowden, NSA whistleblower, will be joining our board of directors!

Here’s our website: https://pressfreedomfoundation.org

I believe that Edward Snowden has done more to support and defend the Constitution—in particular, the First and Fourth Amendments—than any member of Congress or any other employee or official of the Executive branch, up to the president: every one of whom took that same oath, which many of them have violated.

Ask me anything.

Here’s proof it’s me: https://twitter.com/DanielEllsberg/status/423520429676826624

If you want to take action against mass surveillance, visit TheDayWeFightBack and demand reform in Congress on February 11th.

Source

Mass surveillance: an “insurance policy”

I was struck by this passage in an admirable blog post by Ray Corrigan.

The latest from the NSA is that they now seem to be admitting (in spite of previous claims that this mass surveillance stopped 54 major terror attacks it didn’t really stop any, but may possibly have provided secondary supportive evidence in relation to one) that the best argument they can come up with is mass data collection might be useful as an “insurance policy”. What?! An insurance policy?! The infrastructure of mass surveillance might be useful in the future, somehow, to someone?

The relevant passage in the NSA testimony reads:

While Inglis conceded in his NPR interview that at most one terrorist attack might have been foiled by NSA’s bulk collection of all American phone data – a case in San Diego that involved a money transfer from four men to al-Shabaab in Somalia – he described it as an “insurance policy” against future acts of terrorism.

“I’m not going to give that insurance policy up, because it’s a necessary component to cover a seam that I can’t otherwise cover,” Inglis said.

Reflections on the revolution in automobiles

As readers of my newspaper column know, I think that it would be hard to overestimate the significance of Google’s self-driving car. This is not because I expect to find autonomous vehicles on our roads any time soon, but because it signals an urgent need to revise our assumptions about what machines can and cannot do.

If you’d asked me ten years ago what tasks would lie beyond the capacity of computers, I would confidently have included driving safely in a crowded urban environment on my list. Brooding on this over the last few months, I had begun to think that this judgement was perhaps just a reflection of my ignorance of robotics at the time. But then, reading Erik Brynjolfsson and Andrew McAfee’s new book, The Second Machine Age, I was pointed to a book by Frank Levy and Richard Murnane published in 2004 and entitled The New Division of Labor: How Computers Are Creating the Next Job Market, in which they focussed on the division between human and machine labour.

Levy and Murnane put information-processing tasks on a spectrum:

At one end are tasks like arithmetic that require only the application of well-understood rules. Since computers are really good at following rules, it follows that they should do arithmetic and similar tasks. And not just arithmetic.

For example, a person’s credit score is a good general predictor of whether they’ll pay back their mortgage as promised… So the decision about whether or not to give a mortgage can be effectively boiled down to a rule.
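Just to make “boiled down to a rule” concrete, here is a minimal sketch of what such a rule might look like in code; the cutoff score, income multiple and field names are hypothetical illustrations of mine, not anything taken from Levy and Murnane:

```python
# Hypothetical illustration of a mortgage decision "boiled down to a rule".
# The threshold values below are invented for the example.
MIN_CREDIT_SCORE = 620
MAX_LOAN_TO_INCOME = 4.5

def approve_mortgage(credit_score: int, loan_amount: float, annual_income: float) -> bool:
    """Apply a fully specified rule: no human judgement required."""
    if credit_score < MIN_CREDIT_SCORE:
        return False
    if loan_amount > MAX_LOAN_TO_INCOME * annual_income:
        return False
    return True

print(approve_mortgage(700, 200_000, 50_000))  # True
print(approve_mortgage(580, 200_000, 50_000))  # False: score below the cutoff
```

Once a decision is expressed this way, a computer can apply it millions of times, perfectly consistently and at essentially no marginal cost.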

But Levy and Murnane thought that tasks involving pattern recognition would be beyond computers. And they cited driving a car as a paradigmatic example:

As the driver makes his left turn against traffic, he confronts a wall of images and sounds generated by oncoming cars, traffic lights, storefronts, billboards, trees, and a traffic policeman. Using his knowledge, he must estimate the size and position of each of these objects and the likelihood that they pose a hazard… Articulating this knowledge and embedding it in computer software for all but highly structured situations are at present enormously difficult tasks… Computers cannot easily substitute for humans in [jobs like driving].

So I wasn’t the only person a decade ago who doubted that computers could drive.

This is the conjecture that the Google self-driving car refutes. There’s a terrific piece in the New Yorker about the genesis and execution of the Google project which, among other things, illuminates the height of the mountain that the Google team had to climb.

In the beginning, [Sergey] Brin and [Larry] Page presented Thrun’s team with a series of DARPA-like challenges. They managed the first in less than a year: to drive a hundred thousand miles on public roads. Then the stakes went up. Like boys plotting a scavenger hunt, Brin and Page pieced together ten itineraries of a hundred miles each. The roads wound through every part of the Bay Area—from the leafy lanes of Menlo Park to the switchbacks of Lombard Street. If the driver took the wheel or tapped the brakes even once, the trip was disqualified. “I remember thinking, How can you possibly do that?” Urmson told me. “It’s hard to game driving through the middle of San Francisco.”

It took the team a year and a half to master Page and Brin’s ten hundred-mile road trips.

The first one ran from Monterey to Cambria, along the cliffs of Highway 1. “I was in the back seat, screaming like a little girl,” Levandowski told me. One of the last started in Mountain View, went east across the Dumbarton Bridge to Union City, back west across the bay to San Mateo, north on 101, east over the Bay Bridge to Oakland, north through Berkeley and Richmond, back west across the bay to San Rafael, south to the mazy streets of the Tiburon Peninsula, so narrow that they had to tuck in the side mirrors, and over the Golden Gate Bridge to downtown San Francisco. When they finally arrived, past midnight, they celebrated with a bottle of champagne. Now they just had to design a system that could do the same thing in any city, in all kinds of weather, with no chance of a do-over. Really, they’d just begun.

The Google car has now driven more than half a million miles without causing an accident, which is, says the New Yorker writer Burkhard Bilger, about twice as far as the average American driver goes before crashing.

Of course, the computer has always had a human driver to take over in tight spots. Left to its own devices, Thrun says, it could go only about fifty thousand miles on freeways without a major mistake. Google calls this the dog-food stage: not quite fit for human consumption. “The risk is too high,” [Sebastian] Thrun says. “You would never accept it.” The car has trouble in the rain, for instance, when its lasers bounce off shiny surfaces.

Just for the record, this (human) driver also has trouble in the rain. I’ve been driving for over 40 years, and in that time have only had one minor accident (I ran into the car in front at about 5mph when disembarking from a car ferry), so on paper I’m a fairly competent driver. But when driving in Cambridge (a town full of cyclists) on wet dark winter’s nights I’m perpetually worried that I will not see a cyclist who’s not wearing reflective gear or a walker who suddenly rushes across a pedestrian crossing.

So one anecdote in the Bilger piece struck home. A Google engineer told him about driving one night on a dark country road when the car suddenly and inexplicably slowed down.

“I was thinking, What the hell? It must be a bug,” he told me. “Then we noticed the deer walking along the shoulder.” The car, unlike its riders, could see in the dark.

The other morning, after a cyclist appeared seemingly from nowhere at a city crossing, I found myself thinking that I could really use a car with that kind of extra-sensory perception.

And of course this is how the fruits of the Google research and development will first appear — as extra sensors designed to alert human drivers. Volvo already do this in some of their models, which detect when the car is veering across motorway lanes and infer that the driver may be getting sleepy. We will see a lot more of this before long. And I, for one, will welcome it.

The antisocial side of geek elitism

This morning’s Observer column.

Just under a year ago, Rebecca Solnit, a writer living in San Francisco, wrote a sobering piece in the London Review of Books about the Google Bus, which she viewed as a proxy for the technology industry just down the peninsula in Palo Alto, Mountain View and Cupertino.

“The buses roll up to San Francisco’s bus stops in the morning and evening,” she wrote, “but they are unmarked, or nearly so, and not for the public. They have no signs or have discreet acronyms on the front windshield, and because they also have no rear doors they ingest and disgorge their passengers slowly, while the brightly lit funky orange public buses wait behind them. The luxury coach passengers ride for free and many take out their laptops and begin their work day on board; there is of course Wi-Fi. Most of them are gleaming white, with dark-tinted windows, like limousines, and some days I think of them as the spaceships on which our alien overlords have landed to rule over us.”

The aesthetics of sloooooow motion photography

This astonishing, haunting video is the work of an extraordinary photographic artist, Adam Magyar. There’s a terrific profile of him by Joshua Hammer on Matter. For this video he persuaded the German manufacturer Optronis to lend him one of its $16,000 high-performance industrial video cameras—used in crash tests and robotic-arm studies. The Optronis shoots high-resolution images at staggering speeds: up to 100,000 frames per second, compared to 24 frames per second in a traditional film camera.

Instead of standing on a platform shooting passengers speeding past him, Magyar now positioned himself inside the moving subway car, recording stationary commuters on the platform as train and camera rolled into the station. Again, the ghost of Einstein permeates these images, and again, he was warping time: Magyar shot the footage at 56 times normal speed, turning 12-second blurs into nearly 12-minute films of excruciating slowness.
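The numbers in that passage hang together. As a quick back-of-the-envelope check (assuming standard 24 frames-per-second playback, which is my assumption rather than anything stated in the profile):

```python
# Rough arithmetic behind the slow-motion figures quoted above.
capture_speedup = 56      # footage shot at 56x normal speed
playback_fps = 24         # assumed standard playback rate (not stated in the piece)
real_duration_s = 12      # a roughly 12-second pass through the station

# A 56x slowdown at 24 fps playback implies a capture rate of about 1,344 fps,
# far below the camera's 100,000 fps ceiling.
capture_fps = capture_speedup * playback_fps
print(capture_fps)                   # 1344

# Twelve seconds of real time stretch to a little over 11 minutes on screen,
# which is the "nearly 12-minute" figure in the quote.
playback_minutes = real_duration_s * capture_speedup / 60
print(round(playback_minutes, 1))    # 11.2
```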

Amazing stuff.

Why workers in neoliberal economies are set up to lose the ‘race against the machine’

As readers of this blog (and my Observer column) will know, Erik Brynjolfsson and Andrew McAfee’s Race Against the Machine has influenced the way I think about technology and our networked future. This talk by John Hagel presents an insightful gloss on the book’s analysis. Hagel argues that the reason so many modern jobs are so vulnerable to automation is that they have effectively been designed to be vulnerable. They tend to be “tightly scripted” and “highly standardized”, leaving no room for “individual initiative or creativity”. In short, these are the types of jobs that machines can perform much better than human beings can. So what is effectively going on is that companies are putting “a giant target sign on the backs of American workers”.

So every time you see a manager or administrator proudly unveiling a new paper or online form for imposing bureaucratic order on an organisational process that hitherto had been entrusted to human judgement, you will know where the targets are being affixed.

Nobody’s Son

Beautiful piece in the New Yorker by Mark Slouka about the death of his father. Stopped me in my tracks today. Maybe this will explain why:

It needs to be said: in some strange way, my father’s death has made the thought of dying easier. The door opened, and he walked through it successfully; the land of the dead is a peopled place for me now because he’s there, somewhere. And, because he’s done it, because he’s pulled this thing off, it’s become conceivable for me as well. Hell, if the old man can do it, I can do it.

It’s an unexpected gift, this release from fear—it’s like a gentling touch, a father’s voice. He lifts you onto his lap, presses your head to his chest, pets your hair. You can hear his heart. Sh-h-h, sh-h-h, it’s O.K., it’s O.K., it’s O.K., he says as your sobs begin to slow, then catch, then slow some more. Don’t cry. There’s nothing to be afraid of, nothing at all. We all must die. Accept, accept.

And I just might, except that this is not my father’s voice, which is as alive to me as anything in this world. This is something very different, a flowering as deceptive as cancer, blooming in the light of his loss. A flowering fed on self-pity and orphaned love.

Accept? My father was irritated by death, chafed at and ignored it. It was an annoyance, an inconvenience. He fought it to a standstill, refused the morphine of the ages. Harps and virgins? Please. Oblivion would do fine, thank you. In the meantime, there was injustice and stupidity to perforate, cruelty to expose, the absurd and gorgeous carnival of the world to watch going by.

“What is this sickly sentimentality?” he’d say to me, “this weakening at the knees? I was old. I died. It’s to be regretted—certainly by me—but so what? Think of me when you need to, that’s more than enough. Now pour me another and get out of here—don’t you have somewhere to go?”

Six months in, the heart, the soul, the spine, begin to regenerate. Slowly. In moments of weakness, his voice saves me, which is appropriate. He was my father. Is.