Biased machines may be easier to fix than biased humans

This morning’s Observer column:

One of the things that really annoys AI researchers is how supposedly “intelligent” machines are judged by much higher standards than are humans. Take self-driving cars, they say. So far they’ve driven millions of miles with very few accidents, a tiny number of them fatal. Yet whenever an autonomous vehicle kills someone there’s a huge hoo-ha, while every year in the US nearly 40,000 people die in crashes involving conventional vehicles.

Likewise, the AI evangelists complain, everybody and his dog (this columnist included) is up in arms about algorithmic bias: the way in which automated decision-making systems embody the racial, gender and other prejudices implicit in the data sets on which they were trained. And yet society is apparently content to endure the astonishing irrationality and capriciousness of much human decision-making.

If you are a prisoner applying for parole in some jurisdictions, for example, you had better hope that the (human) judge has just eaten when your case comes up…

Read on

Ideology and tech evangelism

This morning’s Observer column:

For my sins, I get invited to give a few public lectures every year. Mostly, the topic on which I’m asked to speak is the implications for democracy of digital technology as it has been exploited by a number of giant US corporations. My general argument is that those implications are not good, and I try to explain why I think this is the case. When I’ve finished, there is usually some polite applause before the Q&A begins. And always one particular question comes up. “Why are you so pessimistic?”

The interesting thing about that is the way it reveals as much about the questioner as it does about the lecturer. All I have done in my talk, after all, is to lay out the grounds for concern about what networked technology is doing to our democracies. Mostly, my audiences recognise those grounds as genuine – indeed as things about which they themselves have been fretting. So if someone regards a critical examination of these issues as “pessimistic” then it suggests that they have subconsciously imbibed the positive narrative of tech evangelism.

An ideology is what determines how you think even when you don’t know you’re thinking. Tech evangelism is an example. And one of the functions of an ideology is to stop us asking awkward questions…

Read on

When the medium is the message

A couple of weeks ago my Observer column was about podcasting and the pioneering role that Dave Winer played in its evolution. Since Dave often includes a short podcast on his daily blog, I thought I should include an audio version of that particular column. Here it is:

(It’s only five minutes long, but the embed player doesn’t seem to realise that.)

Podcasting: will it succumb to the Wu cycle?

This morning’s Observer column:

I’ve just been listening to what I think of as the first real podcast. The speaker is Dave Winer, the software genius whom I wrote about in October. He pioneered blogging and played a key role in the evolution of the RSS site-syndication technology that enabled users and applications to access updates to websites in a standardised, computer-readable format.
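The column mentions RSS as the standardised, computer-readable format behind all this. What turned an RSS blog feed into a podcast feed was the `<enclosure>` element, which Winer championed: it tells a feed reader where to download an attached audio file. Here is a minimal sketch; the feed content and URL are made-up placeholders, not a real feed.

```python
import xml.etree.ElementTree as ET

# Minimal sketch of an RSS 2.0 item carrying an <enclosure> element --
# the mechanism that lets a feed reader discover and fetch an audio file,
# turning an ordinary blog feed into a podcast feed.
# The channel details and URL below are invented for illustration.
rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Morning Coffee Notes</title>
    <link>http://example.com/</link>
    <description>Audio meditations, first thing in the morning</description>
    <item>
      <title>11 June 2004</title>
      <enclosure url="http://example.com/notes.mp3"
                 length="4859555" type="audio/mpeg"/>
    </item>
  </channel>
</rss>"""

# A podcast client parses the feed and reads the enclosure's attributes.
item = ET.fromstring(rss).find("./channel/item")
enclosure = item.find("enclosure")
print(enclosure.get("url"), enclosure.get("type"))
```

The point is the simplicity: podcasting needed no new protocol, just one extra element in an existing open format.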

And the date of this podcast? 11 June 2004 – 15 years ago; which rather puts into context the contemporary excitement about this supposedly new medium that is now – if you believe the hype – taking the world by storm. With digital technology it always pays to remember that it’s older than you think.

When he started doing it, Winer called it “audioblogging” and if you listen to his early experiments you can see why. They’re relaxed, friendly, digressive, unpretentious and insightful – in other words an accurate reflection of the man himself and of his blog. He thought of them as “morning coffee notes” – audio meditations about what was on his mind first thing in the morning…

Read on

Tech commentary and gender

This morning’s Observer column:

Reading the observations of these three women brought to the surface a thought that’s been lurking at the back of my mind for years. It is that the most trenchant and perceptive critiques of digital technology – and particularly of the ways in which it has been exploited by tech companies – have come from female commentators. The thought originated ages ago as a vague impression, then morphed into an intuitive correlation and eventually surfaced as a conjecture that could be examined.

So I spent a few hours going through a decade’s worth of electronic records – reprints, notes and links. What I found is an impressive history of female commentary and a gallery of more than 20 formidable critics…

Read on

Can the planet afford machine learning as well as Bitcoin?

This morning’s Observer column:

There is, alas, no such thing as a free lunch. This simple and obvious truth is invariably forgotten whenever irrational exuberance teams up with digital technology in the latest quest to “change the world”. A case in point was the bitcoin frenzy, where one could apparently become insanely rich by “mining” for the elusive coins. All you needed was to get a computer to solve a complicated mathematical puzzle and – lo! – you could earn one bitcoin, which at the height of the frenzy was worth $19,783.06. All you had to do was buy a mining kit (or three) from Amazon, plug it in and become part of the crypto future.

The only problem was that mining became progressively more difficult the closer we got to the maximum number of bitcoins set by the scheme and so more and more computing power was required. Which meant that increasing amounts of electrical power were needed to drive the kit. Exactly how much is difficult to calculate, but one estimate published in July by the Judge Business School at the University of Cambridge suggested that the global bitcoin network was then consuming more than seven gigawatts of electricity. Over a year, that’s equal to around 64 terawatt-hours (TWh), which is 8 TWh more than Switzerland uses annually. So each of those magical virtual coins turns out to have a heavy environmental footprint.
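The back-of-the-envelope arithmetic behind those figures is worth making explicit: a sustained power draw, multiplied by the hours in a year, gives annual energy consumption. The 7.3 GW input below is an assumption chosen to match the column’s “more than seven gigawatts” and roughly 64 TWh per year.

```python
# Convert a sustained power draw (GW) into annual energy consumption (TWh).
# 7.3 GW is an assumed figure consistent with the Cambridge estimate quoted
# in the column ("more than seven gigawatts", ~64 TWh/year).
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def gw_to_twh_per_year(gigawatts: float) -> float:
    """Sustained draw in gigawatts -> annual energy in terawatt-hours."""
    return gigawatts * HOURS_PER_YEAR / 1000  # GW*h = GWh; /1000 -> TWh

print(round(gw_to_twh_per_year(7.3)))  # roughly 64
```

The same one-liner reproduces the Switzerland comparison: anything drawing about 6.4 GW around the clock would match that country’s annual consumption of roughly 56 TWh.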

At the moment, much of the tech world is caught up in a new bout of irrational exuberance. This time, it’s about machine learning, another one of those magical technologies that “change the world”…

Read on

DNA databases are special

This morning’s Observer column:

Last week, at a police convention in the US, a Florida police officer revealed he had obtained a warrant to search the GEDmatch database of a million genetic profiles uploaded by users of the genealogy research site. Legal experts said this appeared to be the first time an American judge had approved such a warrant.

“That’s a huge game-changer,” observed Erin Murphy, a law professor at New York University. “The company made a decision to keep law enforcement out and that’s been overridden by a court. It’s a signal that no genetic information can be safe.”

At the end of the cop’s talk, he was approached by many officers from other jurisdictions asking for a copy of the successful warrant.

Apart from medical records, your DNA profile is the most sensitive and personal data imaginable. In some ways, it’s more revealing, because it can reveal secrets you don’t know you’re keeping, such as siblings (and sometimes parents) of whom you were unaware…

Read on

How “Don’t Be Evil” panned out

My Observer review of Rana Foroohar’s new book about the tech giants and their implications for our world.

“Don’t be evil” was the mantra of the co-founders of Google, Sergey Brin and Larry Page, the graduate students who, in the late 1990s, had invented a groundbreaking way of searching the web. At the time, one of the things the duo believed to be evil was advertising. There’s no reason to doubt their initial sincerity on this matter, but when the slogan was included in the prospectus for their company’s flotation in 2004 one began to wonder what they were smoking. Were they really naive enough to believe that one could run a public company on a policy of ethical purity?

The problem was that purity requires a business model to support it and in 2000 the venture capitalists who had invested in Google pointed out to the boys that they didn’t have one. So they invented a model that involved harvesting users’ data to enable targeted advertising. And in the four years between that capitulation to reality and the flotation, Google’s revenues increased by nearly 3,590%. That kind of money talks.

Rana Foroohar has adopted the Google mantra as the title for her masterful critique of the tech giants that now dominate our world…

Read on

What if AI could write like Hemingway?

This morning’s Observer column:

Last February, OpenAI, an artificial intelligence research group based in San Francisco, announced that it had been training an AI language model called GPT-2, and that it now “generates coherent paragraphs of text, achieves state-of-the-art performance on many language-modelling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarisation – all without task-specific training”.

If true, this would be a big deal…

Read on

The Boeing 737 MAX story — and its implications

This morning’s Observer column:

Here’s a question. Well, two questions, actually. One: how could an aircraft manufacturer long celebrated for its commitment to engineering excellence produce an airliner with aerodynamic characteristics that made it unstable under some circumstances – and then release it with remedial computer software that appeared to make it difficult for pilots to take control? And two: why did the government regulator approve the plane – and then dither about grounding the model after it had crashed?

The aircraft in question is the Boeing 737 Max. The regulator is the US Federal Aviation Administration (FAA). The questions are urgent because this model has crashed twice – first in the Java Sea last October with the deaths of 189 people, and then in Ethiopia in March with the deaths of 157 people. Evidence retrieved from the second crash site suggested that the plane had been configured to dive before it came down. And the Ethiopian transport minister was quoted by Al-Jazeera on 4 April as saying that the crew “performed all the procedures repeatedly provided by the manufacturer but was not able to control the aircraft”. The FAA initially reaffirmed the airworthiness of the plane on 11 March but then grounded it on 13 March.

The full story of this catastrophe remains to be told, but we already know the outlines of it…

Read on