Kranzberg’s Law

As a critic of many of the ways in which digital technology is currently being exploited by both corporations and governments, while also being a fervent believer in the positive affordances of the technology, I often find myself stuck in unproductive discussions in which I’m accused of being an incurable “pessimist”. I’m not: a better description would be “recovering Utopian” or “worried optimist”.

Part of the problem is that the public discourse about this stuff tends to be Manichean: it lurches between evangelical enthusiasm and dystopian gloom. And eventually the discussion winds up with a consensus that “it all depends on how the technology is used” — which often leads to Melvin Kranzberg’s Six Laws of Technology — and particularly his First Law, which says that “Technology is neither good nor bad; nor is it neutral.” By which he meant that,

“technology’s interaction with the social ecology is such that technical developments frequently have environmental, social, and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.”

Many of the current discussions revolve around various manifestations of AI, which means machine learning plus Big Data. At the moment image recognition is the topic du jour. The enthusiastic refrain usually involves citing dramatic instances of the technology’s potential for social good. A paradigmatic example is the collaboration between Google’s DeepMind subsidiary and Moorfields Eye Hospital to use machine learning to greatly improve the speed of analysis of anonymized retinal scans and automatically flag ones which warrant specialist investigation. This is a good example of how to use the technology to improve the quality and speed of an important healthcare service. For tech evangelists it is an irrefutable argument for the beneficence of the technology.

On the other hand, critics will often point to facial recognition as a powerful example of the perniciousness of machine-learning technology. One researcher has even likened it to plutonium. Criticisms tend to focus on its well-known weaknesses (false positives and racial or gender bias, for example), on its hasty and ill-considered adoption by police forces and the proprietors of shopping malls, on the lack of effective legal regulation, and on its use by authoritarian or totalitarian regimes, particularly China.

Yet it is likely that even facial recognition has socially beneficial applications. One dramatic illustration is a project by an Indian child labour activist, Bhuwan Ribhu, who works for the Indian NGO Bachpan Bachao Andolan. Some 15 months earlier, he had launched a pilot programme to match a police database containing photos of all of India’s missing children against another comprising photos of all the minors living in the country’s child care institutions.

The results were remarkable. “We were able to match 10,561 missing children with those living in institutions,” he told CNN. “They are currently in the process of being reunited with their families.” Most of them were victims of trafficking, forced to work in the fields, in garment factories or in brothels, according to Ribhu.

This was made possible by facial recognition technology provided by New Delhi’s police. “There are over 300,000 missing children in India and over 100,000 living in institutions,” he explained. “We couldn’t possibly have matched them all manually.”
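For readers curious about the mechanics, the matching step in systems like this usually comes down to comparing numerical “embeddings” of faces rather than the photographs themselves. The sketch below is not a description of the system the Delhi police used; it simply illustrates the general idea using the open-source face_recognition Python library, with the file paths and distance threshold invented for the example.

```python
# Illustrative only: match one set of photos against another by comparing
# 128-dimensional face embeddings. The paths and the 0.6 cut-off are made
# up for this sketch; a real system needs quality checks, human review and
# strong data-protection safeguards.
import face_recognition
import numpy as np

def encode(path):
    """Return the embedding of the first face found in an image, or None."""
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    return encodings[0] if encodings else None

missing = {p: encode(p) for p in ["missing/child_001.jpg", "missing/child_002.jpg"]}
in_care = {p: encode(p) for p in ["institutions/resident_001.jpg"]}

for m_path, m_vec in missing.items():
    for c_path, c_vec in in_care.items():
        if m_vec is None or c_vec is None:
            continue
        distance = np.linalg.norm(m_vec - c_vec)  # smaller means more similar
        if distance < 0.6:  # conventional cut-off for this library
            print(f"Possible match: {m_path} <-> {c_path} (distance {distance:.2f})")
```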

This is clearly a good thing. But does it provide an overwhelming argument for India’s plan to construct one of the world’s largest facial-recognition systems with a unitary database accessible to police forces in 29 states and seven union territories?

I don’t think so. If one takes Kranzberg’s First Law seriously, then each proposed use of a powerful technology like this has to face serious scrutiny. The more important question to ask is the old Latin one: cui bono? Who benefits? And who benefits the most? And who loses? What possible unintended consequences could the deployment have? (Recognising that some will, by definition, be unforeseeable.) What are the business models of the corporations proposing to deploy it? And so on.

At the moment, however, what we mostly have are unasked questions, glib assurances and rash deployments.

What if AI could write like Hemingway?

This morning’s Observer column:

Last February, OpenAI, an artificial intelligence research group based in San Francisco, announced that it had been training an AI language model called GPT-2, and that the model now “generates coherent paragraphs of text, achieves state-of-the-art performance on many language-modelling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarisation – all without task-specific training”.

If true, this would be a big deal…

Read on
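For anyone wondering what “generates coherent paragraphs of text” means in practice, here is a minimal sketch using the smaller GPT-2 model that OpenAI did release publicly, run through the Hugging Face transformers library; the prompt is mine, and the output will differ on every run.

```python
# Minimal text-generation sketch with the publicly released (small) GPT-2 model.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The real significance of machine-generated prose is"
for result in generator(prompt, max_length=60, num_return_sequences=2):
    print(result["generated_text"])
    print("-" * 40)
```

The fluency of the output is striking; whether it amounts to understanding is another matter.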

USB-C solutions

Now that the MacBook and iPad that I use when travelling have only USB-C ports (and now that the iPad Pro can handle external drives), I needed a USB stick that could do the trick. This 128GB one arrived today. It also has a USB 3.0 connector, so it hooks up to older kit as well. Perfect for carrying presentations on the move.

Quantum supremacy?

This morning’s Observer column:

Something intriguing happened last week. A paper about quantum computing by a Google researcher making a startling claim appeared on a Nasa website – and then disappeared shortly afterwards. Conspiracy theorists immediately suspected that something sinister involving the National Security Agency was afoot. Spiritualists thought that it confirmed what they’ve always suspected about quantum phenomena. (It was, as one wag put it to me, a clear case of “Schrödinger’s Paper”.) Adherents of the cock-up theory of history (this columnist included) concluded that someone had just pushed the “publish” button prematurely, a suspicion apparently confirmed later by stories that the paper was intended for a major scientific journal before being published on the web.

Why was the elusive paper’s claim startling? It was because – according to the Financial Times – it asserted that a quantum computer built by Google could perform a calculation “in three minutes and 20 seconds that would take today’s most advanced classical computer … approximately 10,000 years”. As someone once said of the book of Genesis, this would be “important if true”. A more mischievous thought was: how would the researchers check that the quantum machine’s calculation was correct?

A quantum computer is one that harnesses phenomena from quantum physics, the study of the behaviour of subatomic particles, which is one of the most arcane specialisms known to humankind…

Read on
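One way to see why such a claim is even plausible: simulating a quantum processor on a classical machine means storing an amplitude for every possible joint state of its qubits, and that number doubles with every qubit added. The back-of-the-envelope arithmetic below is mine, not the researchers’; the 53-qubit figure is the widely reported size of Google’s chip.

```python
# Rough arithmetic: classical memory needed to store the full state vector
# of an n-qubit register, at 16 bytes per complex amplitude (complex128).
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (20, 30, 40, 53):
    print(f"{n:2d} qubits -> {state_vector_bytes(n) / 2**30:,.2f} GiB")

# Roughly: 20 qubits fit in a laptop's RAM, 30 qubits need about 16 GiB,
# 40 qubits about 16,000 GiB, and 53 qubits around 134 million GiB, that is,
# well over a hundred petabytes. Brute-force simulation stops being an option
# long before a chip reaches that size.
```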

A real quantum leap?

This is from the FT (behind a paywall) so it came to me via Charles Arthur’s invaluable The Overspill:

A paper by Google’s researchers seen by the FT, that was briefly posted earlier this week on a Nasa website before being removed, claimed that their processor was able to perform a calculation in three minutes and 20 seconds that would take today’s most advanced classical computer, known as Summit, approximately 10,000 years.

The researchers said this meant the “quantum supremacy”, when quantum computers carry out calculations that had previously been impossible, had been achieved.

“This dramatic speed-up relative to all known classical algorithms provides an experimental realisation of quantum supremacy on a computational task and heralds the advent of a much-anticipated computing paradigm,” the authors wrote.

“To our knowledge, this experiment marks the first computation that can only be performed on a quantum processor.”

The system can only perform a single, highly technical calculation, according to the researchers, and the use of quantum machines to solve practical problems is still years away.

But the Google researchers called it “a milestone towards full-scale quantum computing”. They also predicted that the power of quantum machines would expand at a “double exponential rate”, compared to the exponential rate of Moore’s Law, which has driven advances in silicon chips in the first era of computing.

Interesting that the paper was withdrawn so precipitately. But really significant if true. After all, widely used public-key encryption methods are based on the proposition that certain computations, such as factoring very large numbers, are beyond the reach of conventional machines.
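To make that encryption point concrete, here is a toy sketch of RSA-style public-key encryption; the tiny numbers are purely illustrative (real keys use primes hundreds of digits long), and the point is simply that the private key falls out as soon as you can factor the public modulus, which is exactly the kind of job a sufficiently large quantum computer running Shor’s algorithm would be good at.

```python
# Toy RSA, for illustration only: security rests on the difficulty of
# factoring n back into p and q. (Requires Python 3.8+ for pow(e, -1, phi).)
p, q = 61, 53            # secret primes
n = p * q                # public modulus (3233)
phi = (p - 1) * (q - 1)  # computable only if you know p and q
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key d
assert recovered == message

# Anyone who could factor n would recover p and q, recompute phi and d,
# and read the traffic. That is the assumption a working large-scale
# quantum computer would undermine.
```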

We can write genetic code. But what about the (inevitable) bugs?

This morning’s Observer column:

A few days ago, on my way to a discussion in the exquisite little McCrum theatre, which is hidden away in the centre of Cambridge, I had to pass through the courtyard of the Eagle pub in Bene’t Street. As I did so, I suddenly remembered that this is the hostelry where, on 28 February 1953, Francis Crick, rushing in from the nearby Cavendish Lab, announced to astonished lunchers that he and James Watson had discovered the secret of life. (They had just unveiled their double-helix model of the DNA molecule to colleagues in the laboratory; there’s now a blue plaque on the wall marking the moment.)

As a graduate student in the late 1960s, I knew the pub well because it was where some of my geeky friends from the Computer Lab, then located in the centre of town, used to gather. We talked about a lot of things then, but one thing that never really crossed our minds was that there might be a connection between what Crick and Watson had done in 1953 and the software that many of us were struggling to write for our experiments or dissertations…

Read on

Why the iPhone upgrade cycle is lengthening

I’ve always lagged behind in the iPhone cycle. Until recently, I had an iPhone 6, which I’d used for years. Because it was slowing down, I bought a used iPhone 7 Plus, largely because of its camera, and expect to run that for years. iPhones — like all smartphones — have reached the top part of the S-curve, and we’re now at the point where improvements are incremental and relatively small.

So this advice from the NYT’s Brian X. Chen makes good sense:

Apple’s newest mobile operating system, iOS 13, will work only on iPhones from 2015 (the iPhone 6S) and later. So if you have an iPhone that is older than that, it is worth upgrading because once you can no longer update the operating system, some of your apps may stop working properly.

For those with younger iPhones, there are ways to get more mileage out of your current device. While the newest iPhones have superb battery life — several hours longer than the last generation — a fresh battery in your existing gadget costs only $50 to $70 and will greatly extend its life.

If you have the iPhone 6S from 2015 or the iPhone 7 from 2016, the iPhone 11s are speedier, with camera improvements and bigger displays. That makes an upgrade nice to have but not a must-have. But if you spent $1,000 on an iPhone X two years ago, then hold off. The iPhone 11s just aren’t enough of an innovation leap to warrant spending $700-plus on a new smartphone.

If you wait another year or two, you will most likely be rewarded with that jump forward. That might be an iPhone that works with fast 5G cellular networks, or a smartphone that can wirelessly charge an Apple Watch.

I don’t believe that stuff about charging the Watch, but otherwise this is spot on.

Traditional cameras are going the way of servers

This is from Om Malik’s blog:

Camera sales are continuing to fall off a cliff. The latest data from the Camera & Imaging Products Association (CIPA) shows them in a swoon befitting a Bollywood roadside Romeo. All four big camera brands — Sony, Fuji, Canon, and Nikon — are reporting rapid declines. And it is not just the point-and-shoot cameras whose sales are collapsing. We also see sales of higher-end DSLR cameras stall. And — wait for it — even mirrorless cameras, which were supposed to be a panacea for all that ails the camera business, are heading south.

Of course, by aggressively introducing newer and newer cameras with marginal improvements, companies like Fuji and Sony are finding that they might have created a headache. There is now a substantial aftermarket for casual photographers looking to save money on the companies’ generation-old products. Even those who can afford to buy the big 60-100 megapixel cameras are pausing. After all, doing so also involves buying a beefier computer. (Hello Mac Pro, cheese grater edition!)

I have seen this movie play out before — but in a different market…

Astute and, I think, accurate. Worth reading in full. The cultural implications of the shift of photography to smartphones are still not understood — though Om has been doing his best. See, for example, his 2016 New Yorker essay “In the future we will photograph everything and look at nothing”.

Where is the understanding we lose in machine learning?

This morning’s Observer column:

Fans of Douglas Adams’s Hitchhiker’s Guide to the Galaxy treasure the bit where a group of hyper-dimensional beings demand that a supercomputer tells them the secret to life, the universe and everything. The machine, which has been constructed specifically for this purpose, takes 7.5m years to compute the answer, which famously comes out as 42. The computer helpfully points out that the answer seems meaningless because the beings who instructed it never knew what the question was. And the name of the supercomputer? Why, Deep Thought, of course.

It’s years since I read Adams’s wonderful novel, but an article published in Nature last month brought it vividly to mind. The article was about the contemporary search for the secret to life and the role of a supercomputer in helping to answer it. The question is how to predict the three-dimensional structures of proteins from their amino-acid sequences. The computer is a machine called AlphaFold. And the company that created it? You guessed it – DeepMind…

Read on