Friday 22 May, 2020

So what day is it, actually?

Seen in a tech company office the other day.


Nearly half of Twitter accounts tweeting about Coronavirus are probably bots

Interesting report from NPR.

Nearly half of the Twitter accounts spreading messages on the social media platform about the coronavirus pandemic are likely bots, researchers at Carnegie Mellon University said Wednesday.

Researchers culled through more than 200 million tweets discussing the virus since January and found that about 45% were sent by accounts that behave more like computerized robots than humans.

It is too early to say conclusively which individuals or groups are behind the bot accounts, but researchers said the tweets appeared aimed at sowing division in America.

This vividly reinforces the message in Phil Howard’s new book, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations and Political Operatives (Yale, 2020), which I’m currently reading.

Also it hardly needs saying (does it?) but nobody should think that what happens on Twitter provides a guide to what is actually going on in the real world. It’d be good if more journalists realised that.


Main Street in America: 62 Photos That Show How COVID-19 Changed the Look of Everyday Life

Lovely set of pics from an Esquire magazine project. Still photography reaches parts of the psyche that video can’t touch.

Lots of interesting photographs. Worth a look. But give it time.


Everybody knows…

A reader (to whom much thanks) was struck by my (corrected) reference to Joni Mitchell the other day and sent me a clip from Leonard Cohen’s song, Everybody Knows. This bit in particular strikes home:

Everybody knows that the dice are loaded
Everybody rolls with their fingers crossed
Everybody knows that the war is over
Everybody knows the good guys lost
Everybody knows the fight was fixed
The poor stay poor, the rich get rich
That’s how it goes
Everybody knows
Everybody knows that the boat is leaking
Everybody knows that the captain lied
Everybody got this broken feeling
Like their father or their dog just died


We need power-steering for the mind, not autonomous vehicles

Following on from yesterday’s discussion of humans being treated as ‘moral crumple zones’ for the errors of so-called autonomous systems, there’s an interesting article in today’s New York Times on Ben Shneiderman, a great computer scientist (and an expert on human-computer interaction), who has been campaigning for years to get the more fanatical wing of the AI industry to recognise that what humanity needs is not so much fully-autonomous systems as ones that augment human capabilities.

This is a debate that goes back at least to the 1960s, when pioneers of networked computing like JCR Licklider and Douglas Engelbart argued that the purpose of computers is to augment human capabilities (to provide “power-steering for the mind”, as someone once put it) rather than to take humans out of the loop. What else, for example, is Google search than a memory prosthesis for humanity? In other words, an augmentation.

This clash of worldviews comes to a head in many fields now — employment, for example. There’s not much argument, I guess, about building machines to do work that is really dangerous or psychologically damaging. Think of bomb disposal, on the one hand, or mindlessly repetitive tasks that in the end sap the humanity out of workers and are very badly paid. These are areas where, if possible, humans should be taken out of the loop.

But autonomous vehicles — aka self-driving cars — represent a moment where the two mindsets really collide. Lots of corporations (Uber, for instance) can’t wait for the moment when they can dispense with those tiresome human drivers. At the moment, they are frustrated by two categories of obstacle.

  1. The first is a lack (still) of technological competence: the kit isn’t up to the job of managing the complexity of edge cases — which is where the usefulness of humans as crumple zones comes in, because they act as ‘responsibility sponges’ for corporations.

  2. The second is the colossal infrastructural changes that society would have to make if autonomous vehicles were to become a reality. AI evangelists will say that these changes are orders of magnitude less than the changes that were made in order to accommodate the traditional automobile. But nobody has yet made an estimate of the costs to society of changing the infrastructure of cities to accommodate the technology. And of course these costs will be borne more by taxpayers than by the corporations that profit from the cost-reductions implicit in not employing drivers. It’ll be the usual scenario: the privatisation of profits, and the socialisation of costs.

Into this debate steps Ben Shneiderman, a University of Maryland computer scientist who has for decades warned against blindly automating tasks with computers. He thinks that the tech industry’s vision of fully-automated cars is misguided and dangerous. Robots should collaborate with humans, he believes, rather than replace them.

Late last year, Dr. Shneiderman embarked on a crusade to convince the artificial intelligence world that it is heading in the wrong direction. In February, he confronted organizers of an industry conference on “Assured Autonomy” in Phoenix, telling them that even the title of their conference was wrong. Instead of trying to create autonomous robots, he said, designers should focus on a new mantra, designing computerized machines that are “reliable, safe and trustworthy.”

There should be the equivalent of a flight data recorder for every robot, Dr. Shneiderman argued.

I can see why the tech industry would like to get rid of human drivers. On balance, roads would be a lot safer. But there is an intermediate stage that is achievable and would greatly improve safety without imposing a lot of the social costs of accommodating fully autonomous vehicles. It’s an evolutionary path involving the steady accumulation of the driver-assist technologies that already exist.

I happen to like driving — at least some kinds of driving, anyway. I’ve been driving since 1971 and have — mercifully — never had a serious accident. But on the other hand, I’ve had a few near-misses where lack of attention on my part, or on the part of another driver, could have had serious consequences.

So what I’d like is far more technology-driven assistance. I’ve found cruise-control very helpful — especially for ensuring that I obey speed-limits. And sensors that ensure that when parking I don’t back into other vehicles. But I’d also like forward-facing radar that, in slow-moving traffic, would detect when I’m too close to a car in front and apply the brakes if necessary — and spot a fox running across the road on a dark rainy night. I’d like lane-assist tech that would spot when I’m wandering on a motorway, and all-round video cameras that would overcome the blind-spots in mirrors and a self-parking system. And so on. All of this kit already exists, and if widely deployed would make driving much safer and more enjoyable. None of it requires the massive breakthroughs that current autonomous systems require. No rocket science required. Just common sense.
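To make the forward-radar idea concrete, here is a minimal sketch (not any manufacturer’s actual algorithm, and with purely illustrative numbers and thresholds) of the sort of logic such an assist might use: estimate the time-to-collision from the gap and the closing speed, and request braking when it falls below a threshold.

```python
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until contact if the closing speed stays constant.
    Returns infinity when the gap is static or opening."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps


def brake_requested(gap_m: float, own_speed_mps: float, lead_speed_mps: float,
                    ttc_threshold_s: float = 2.0) -> bool:
    """Toy forward-collision logic: ask for braking when the estimated
    time-to-collision drops below the (illustrative) threshold."""
    closing = own_speed_mps - lead_speed_mps
    return time_to_collision(gap_m, closing) < ttc_threshold_s


# Crawling traffic: 8 m gap, closing at 3 m/s -> TTC ~2.7 s, no braking yet.
print(brake_requested(gap_m=8.0, own_speed_mps=5.0, lead_speed_mps=2.0))  # False
# 5 m gap, closing at 4 m/s -> TTC ~1.25 s, brake.
print(brake_requested(gap_m=5.0, own_speed_mps=6.0, lead_speed_mps=2.0))  # True
```

A production system would of course fuse radar, camera and speed data and cope with sensor noise, but the point stands: the building blocks are conventional engineering, not research breakthroughs.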

The important thing to remember is that this isn’t just about cars, but about AI-powered automation generally. As the NYT piece points out, the choice between elimination or augmentation is going to become even more important when the world’s economies eventually emerge from the devastation of the pandemic and millions who have lost their jobs try to return to work. A growing number of them will find they are competing with or working side by side with machines. And under the combination of neoliberal obsessions about eliminating as much labour as possible, and punch-drunk acceptance of tech visionary narratives, the danger is that societies will plump for elimination, with all the dangers for democracy that that could imply.


A note from your University about its plans for the next semester

Dear Students, Faculty, and Staff —

After careful deliberation, we are pleased to report we can finally announce that we plan to re-open campus this fall. But with limitations. Unless we do not. Depending on guidance, which we have not yet received.

Please know that we eventually will all come together as a school community again. Possibly virtually. Probably on land. Maybe some students will be here? Perhaps the RAs can be let in to feed the lab rats?

We plan to follow the strictest recommended guidance from public health officials, except in any case where it might possibly limit our major athletic programs, which will proceed as usual…

From McSweeney’s


Quarantine diary — Day 62

Link


This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!


Thursday 21 May, 2020

Quote of the Day

“They would like to have the people come off. I’d rather have the people stay [on the ship]. … I would rather because I like the numbers being where they are. I don’t need to have the numbers double because of one ship that was not our fault.”

  • Donald J. Trump, Acting President of the United States, March 4, while on a visit to the Centers for Disease Control, answering a question about whether passengers on the Grand Princess cruise ship should be allowed to disembark.

5G ‘protection’ in Glastonbury

Glastonbury is possibly the wackiest town in the UK. Maybe it’s something in the water supply. There’s a lovely post on the Quackometer blog about it.

The council published a report that called for an end to the 5G rollout. Several members of the working group that looked into the safety of 5G complained that the group had been taken over by anti-5G activists and “spiritual healers”.

This is not surprising to anyone who has ever visited the town of Glastonbury. There is not a shop, pub, business or chip shop that has not been taken over by “spiritual healers” of one sort or another. You cannot walk down the High Street without being smothered in a fog of incense and patchouli. It is far easier to buy a dozen black candles and a pewter dragon than it is a pint of milk.

Science has no sanctuary in Glastonbury. Homeopaths, healers, hedge-witches and hippies all descend on the town to be at one with the Goddess.

There may be no science there, but there’s a lot of ‘technology’ — as the BBC Technology correspondent Rory Cellan-Jones discovered on a visit — after which he tweeted this:

Further down, there’s a delicious analysis of an electronic device to ‘neutralise radiation’. Taking it apart reveals its innards:

This sophisticated device consists of a switch, a 9-volt battery, a length of standard copper pipe with two endpieces, and an LED bulb.

Not clear how much it sells for, but my guess is £50.

I’m in the wrong business.


Farewell to Beyond the Beyond

This is the title of what is, IMHO, the best essay on blogging ever written. If that seems an extravagant claim, stay tuned. But first, some context.

Bruce Sterling is one of the founders of the cyberpunk movement in science fiction, along with William Gibson, Rudy Rucker, John Shirley, Lewis Shiner, and Pat Cadigan. In addition, he is one of the subgenre’s chief ideological promulgators. But for me he’s always been the consummate blogger. His Beyond the Beyond blog has been running on Wired since 2003, but now — after 17 glorious years — he’s just written a final post.

So, the blog is formally ending this month, May MMXX.

My weblog is a collateral victim of Covid19, which has become a great worldwide excuse to stop whatever you were doing.

You see, this is a WIRED blog — in fact, it is the first ever WIRED blog — and WIRED and other Conde’ Nast publications are facing a planetary crisis. Basically, they’ve got no revenue stream, since the business model for glossy mags is advertisements for events and consumer goods.

If there are no big events due to pandemic, and nobody’s shopping much, either, then it’s mighty hard to keep a magazine empire afloat in midair. Instead, you’ve gotta fire staffers, shut down software, hunt new business models, re-organize and remove loose ends. There is probably no looser-end in the entire WIRED domain than this weblog.

So, in this extensive and self-indulgent conclusion, I’d like to summarize what I think I’ve learned by messing with this weblog for seventeen years.

I’ve been a passionate blogger since the late 1990s. It seemed to me that blogs were the first sign that the Internet was a technology that could finally enable the realisation of Jürgen Habermas’s concept of the ‘public sphere’. It met the three criteria for such a sphere:

  • universal access — anybody could have access to the space;
  • rational discussion on any subject; and
  • disregard of rank or social status.

Initially, my blog was private. It was basically a simple website that I had created, with a very primitive layout. I regarded it as a kind of lab notebook — a place for jotting down ideas where I wouldn’t lose them. As it grew, I discovered that it became even more useful if I put a search engine on it. And then when Dave Winer came up with a blogging platform — Frontier — I switched to that and Memex 1.1 went public. It was named after Vannevar Bush’s concept of the ‘Memex’ — a system for associative linking — which he first articulated in a paper in 1939 and eventually published in 1945, and which eventually led, via an indirect route, to Tim Berners-Lee’s concept of the World Wide Web. If you’re interested, the full story is told in my history of the Net.

And since then Memex 1.1 has been up and running.

I suppose one of the reasons why I like Bruce’s swansong is that his views on blogging resonate with mine — except that he articulates them much more clearly than I ever have. Over the years I’ve encountered puzzlement, suspicion, scepticism and occasionally ridicule for the dogged way I’ve persisted in an activity that many of my friends and colleagues consistently regarded as weird. My journalistic colleagues, in particular, were always bemused by Memex: but that was possibly because (at least until recently) journalists regarded anybody who wrote for no pay as clinically insane. In that, they were at one with Dr Johnson, who famously observed that “No man but a blockhead ever wrote except for money”.

Still, there we are.

Bruce’s post is worth reading in its entirety, but here are a few gems:

…on its origins…

When I first started the “Beyond the Beyond” blog, I was a monthly WIRED columnist and a contributing editor. Wired magazine wanted to explore the newfangled medium of weblogs, and asked me to give that a try. I was doing plenty of Internet research to support my monthly Wired column, so I was nothing loath. I figured I would simply stick my research notes online. How hard could that be?

That wouldn’t cost me much more effort than the duty of writing my column — or so I imagined. Maybe readers would derive some benefit from seeing some odd, tangential stuff that couldn’t fit within a magazine’s paper limits. The stuff that was — you know — less mainstream acceptable, more sci-fi-ish, more far-out and beyond-ish — more Sterlingian.

… on its general remit …

Unlike most WIRED blogs, my blog never had any “beat” — it didn’t cover any subject matter in particular. It wasn’t even “journalism,” but more of a novelist’s “commonplace book,” sometimes almost a designer mood board.

… on its lack of a business model…

It was extremely Sterlingesque in sensibility, but it wasn’t a “Bruce Sterling” celebrity blog, because there was scarcely any Bruce Sterling material in it. I didn’t sell my books on the blog, cultivate the fan-base, plug my literary cronies; no, none of that standard authorly stuff

… on why he blogged…

I keep a lot of paper notebooks in my writerly practice. I’m not a diarist, but I’ve been known to write long screeds for an audience of one, meaning myself. That unpaid, unseen writing work has been some critically important writing for me — although I commonly destroy it. You don’t have creative power over words unless you can delete them.

It’s the writerly act of organizing and assembling inchoate thought that seems to help me. That’s what I did with this blog; if I blogged something for “Beyond the Beyond,” then I had tightened it, I had brightened it. I had summarized it in some medium outside my own head. Posting on the blog was a form of psychic relief, a stream of consciousness that had moved from my eyes to my fingertips; by blogging, I removed things from the fog of vague interest and I oriented them toward possible creative use.

… on not having an ideal reader…

Also, the ideal “Beyond the Beyond” reader was never any fan of mine, or even a steady reader of the blog itself. I envisioned him or her as some nameless, unlikely character who darted in orthogonally, saw a link to some odd phenomenon unheard-of to him or her, and then careened off at a new angle, having made that novelty part of his life. They didn’t have to read the byline, or admire the writer’s literary skill, or pony up any money for enlightenment or entertainment. Maybe they would discover some small yet glimmering birthday-candle to set their life alight.

Blogging is akin to stand-up comedy — it’s not coherent drama, it’s a stream of wisecracks. It’s also like street art — just sort of there, stuck in the by-way, begging attention, then crumbling rapidly.

Lovely stuff. Worth celebrating.


Moral Crumple Zones

Pathbreaking academic paper by Madeleine Clare Elish which addresses the problem of how to assign culpability and responsibility when AI systems cause harm. Example: when a ‘self-driving’ car hits and kills a pedestrian, is the ‘safety driver’ (the human supervisor sitting in the car but not at the controls at the time of the accident) the agent who gets prosecuted for manslaughter? (This is a real case, btw.)

Although published ages ago (2016), the paper still feels fresh. In it, Elish comes up with a striking new concept.

I articulate the concept of a moral crumple zone to describe how responsibility for an action may be misattributed to a human actor who had limited control over the behavior of an automated or autonomous system. Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may become simply a component—accidentally or intentionally—that bears the brunt of the moral and legal responsibilities when the overall system malfunctions.

While the crumple zone in a car is meant to protect the human driver, the moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator. What is unique about the concept of a moral crumple zone is that it highlights how structural features of a system and the media’s portrayal of accidents may inadvertently take advantage of human operators (and their tendency to become “liability sponges”) to fill the gaps in accountability that may arise in the context of new and complex systems.

It’s interesting how the invention of a pithy phrase can help to focus attention and understanding.

Writing the other day in Wired, Tom Simonite picked up on Elish’s insight:

People may find it even harder to clearly see the functions and failings of more sophisticated AI systems that continually adapt to their surroundings and experiences. “What does it mean to understand what a system does if it is dynamic and learning and we can’t count on our previous knowledge?” Elish asks. As we interact with more AI systems, perhaps our own remarkable capacity for learning will help us develop a theory of machine mind, to intuit their motivations and behavior. Or perhaps the solution lies in the machines, not us. Engineers of future AI systems might need to spend as much time testing how well they play with humans as on adding to their electronic IQs.


Robotic Process Automation

Sounds boring, right? Actually for the average web user or business, it’s way more important than machine learning. RPA refers basically to software tools for automating the “long tail” of mundane tasks that are boring, repetitive, and prone to human error. Every office — indeed everyone who uses a computer for work — has tasks like this.

Mac users have lots of these tools available. I use TextExpander, for example, to create a small three-character code which, when activated, can type a signature at the foot of an email, or the top of a letterhead or, for that matter, an entire page of stored boilerplate text. For other tasks there are tools like IFTTT, Apple’s Shortcuts and other automation tools that are built into the OS X operating system.
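By way of illustration, here’s a toy sketch of the snippet-expansion idea. The trigger codes and the snippet table are invented for the example; TextExpander itself is, naturally, far more capable and expands text as you type rather than as a post-processing pass.

```python
# Toy snippet expander: replace short trigger codes with stored boilerplate.
# The triggers and snippets below are made up purely for illustration.
SNIPPETS = {
    ";sig": "Best wishes,\nJ. Bloggs\nSomewhere-on-Sea",
    ";addr": "42 Acacia Avenue\nSomewhere-on-Sea\nSO1 2AB",
}

def expand(text: str, snippets: dict[str, str] = SNIPPETS) -> str:
    """Expand every known trigger code found in the text."""
    for trigger, replacement in snippets.items():
        text = text.replace(trigger, replacement)
    return text

print(expand("Thanks for your note.\n;sig"))
```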

Windows users, however, were not so lucky, which I guess is why the WinAutomation tools provided by the British company Softomotive were so popular. And guess what? Softomotive has just been bought by Microsoft. Smart move by Redmond.


Quarantine diary — Day 61

Link


This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!


Thursday 19 March, 2020

If you might find it more useful to get this blog as a daily email, why not subscribe here? (It’s free, and there’s a 1-click unsubscribe). One email, in your inbox at 07:00 every morning.


England’s green and pleasant land

One part of the country that’s not currently under lockdown: the river Cam at Grantchester, photographed on Tuesday morning.


Apocalypse now

The Wall Street Journal reports from the Papa Giovanni XXIII Hospital, a large, modern medical facility in Bergamo, a prosperous Italian city that has been overwhelmed by the coronavirus disease:

Bergamo shows what happens when things go wrong. In normal times, the ambulance service at the Papa Giovanni hospital runs like a Swiss clock. Calls to 112, Europe’s equivalent of 911, are answered within 15 to 20 seconds. Ambulances from the hospital’s fleet of more than 200 are dispatched within 60 to 90 seconds. Two helicopters stand by at all times. Patients usually reach an operating room within 30 minutes, said Angelo Giupponi, who runs the emergency response operation: “We are fast, in peacetime.”

Now, people wait an hour on the phone to report heart attacks, Dr. Giupponi said, because all the lines are busy. Each day, his team fields 2,500 calls and brings 1,500 people to the hospital. “That’s not counting those the first responders visit but tell to stay home and call again if their condition worsens,” he said.

Ambulance staff weren’t trained for such a contagious virus. Many have become infected and their ambulances contaminated. A dispatcher died of the disease Saturday. Diego Bianco was in his mid-40s and had no prior illnesses.

“He never met patients. He only answered the phone. That shows you the contamination is everywhere,” a colleague said. Mr. Bianco’s co-workers sat Sunday at the operations center with masks on their faces and fear in their eyes…

This is why social-distancing has to be made to work.


MEOW

Our local supermarket announced that the first hour after opening this morning would be reserved for people who will have to ‘self-isolate’ from next weekend. I fall into that category because of my age, but people with particular medical conditions also fall into it. Think of it as voluntary house arrest! The supermarket was fairly busy with customers of retirement age. The atmosphere was cheery and civilised, with a vague feeling of wartime solidarity. On discovering that all the milk had gone and that further stocks were not expected until midday, I reflected that this is, in a sense, the moral equivalent of war.

And then I remembered that during the 1979 energy crisis in the US, the then president Jimmy Carter had used that phrase — I think in the context of making the US independent of oil imports from the Middle East. For Carter, the phrase was a way of signalling how important his campaign was. But of course his Republican opponents resisted it — and found a way of effectively ridiculing it by making an acronym from the initial letters of each word: MEOW. And it worked.


AI is an ideology, not a technology

Nice essay in Wired by Jaron Lanier, arguing that, at its core, “artificial intelligence” is a perilous belief that fails to recognize the agency of humans. “The usual narrative goes like this”, he writes.

Without the constraints on data collection that liberal democracies impose and with the capacity to centrally direct greater resource allocation, the Chinese will outstrip the West. AI is hungry for more and more data, but the West insists on privacy. This is a luxury we cannot afford, it is said, as whichever world power achieves superhuman intelligence via AI first is likely to become dominant.

If you accept this narrative, the logic of the Chinese advantage is powerful. What if it’s wrong? Perhaps the West’s vulnerability stems not from our ideas about privacy, but from the idea of AI itself.

The central point of the essay is that “AI” is best understood as a political and social ideology rather than as a basket of algorithms. And at its core is the belief

that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity. Given that any such replacement is a mirage, this ideology has strong resonances with other historical ideologies, such as technocracy and central-planning-based forms of socialism, which viewed as desirable or inevitable the replacement of most human judgement/agency with systems created by a small technical elite. It is thus not all that surprising that the Chinese Communist Party would find AI to be a welcome technological formulation of its own ideology.

Thoughtful piece. Worth reading in full.


Will the virus enable us to rediscover what the Internet is for?

The wonderful thing about the Net — so we naive techno-utopians used to think — was that it would liberate human creativity because it lowered the barriers to publication and self-expression. The most erudite articulation of this was probably Yochai Benkler’s wonderful The Wealth of Networks, a celebration of the potential of ‘peer production’ and user-generated content. We saw the technology as an enabling, democratising force — a ‘sit-up’ rather than a ‘lie-back’ medium. And we saw in its apparently inexorable rise the end of the era of the couch potato.

What we never expected was that a combination of capitalism and human nature would instead turn the network into million-channel TV, with billions of people passively consuming content created by media corporations: the ultimate lie-back medium. And indeed, if you look at the data traffic on the Net these days, you see the effects of that. According to Sandvine, a network-equipment company, video accounted for 60.6 percent of total downstream volume worldwide in 2019, up 2.9 percentage points from 2018. Web traffic was the next biggest category, with a 13.1 percent share (down 3.8 points year over year), followed by gaming at 8.0 percent, social media at 6.1 percent and file sharing at 4.2 percent. The same report found that Google and its various apps (including YouTube and Android) accounted for 12 percent of overall internet traffic and that Facebook apps took 17 percent of downstream internet traffic in the Asia-Pacific region, as compared with 3 percent worldwide.

One interesting question raised by the COVID-19 crisis is whether people who find themselves isolated in their homes will discover affordances of the network of which they were hitherto unaware. Kevin Roose of the NYT explores this in “The Coronavirus Crisis Is Showing Us How to Live Online”. We’ve always hoped that our digital tools would create connections, not conflict, he says. We now have a chance to make it happen. After a week in self-isolation, he finds himself agreeably surprised:

Last weekend, in between trips to the grocery store, I checked up on some friends using Twitter D.M.s, traded home-cooking recipes on Instagram, and used WhatsApp to join a blockwide support group with my neighbors. I even put on my Oculus virtual reality headset, and spent a few hours playing poker in a V.R. casino with friendly strangers.

I expected my first week of social distancing to feel, well, distant. But I’ve been more connected than ever. My inboxes are full of invitations to digital events — Zoom art classes, Skype book clubs, Periscope jam sessions. Strangers and subject-matter experts are sharing relevant and timely information about the virus on social media, and organizing ways to help struggling people and small businesses. On my feeds, trolls are few and far between, and misinformation is quickly being fact-checked.

Well, well. Reporters should get out more — onto the free and open Internet rather than the walled gardens of social media.


AI for good is possible

This morning’s Observer column:

…As a consequence, a powerful technology with great potential for good is at the moment deployed mainly for privatised gain. In the process, it has been characterised by unregulated premature deployment, algorithmic bias, reinforcing inequality, undermining democratic processes and boosting covert surveillance to toxic levels. That it doesn’t have to be like this was vividly demonstrated last week with a report in the leading biological journal Cell of an extraordinary project, which harnessed machine learning in the public (as compared to the private) interest. The researchers used the technology to tackle the problem of bacterial resistance to conventional antibiotics – a problem that is rising dramatically worldwide, with predictions that, without a solution, resistant infections could kill 10 million people a year by 2050.

Read on

The real test of an AI machine? When it can admit to not knowing something

This morning’s Observer column on the EU’s plans for regulating AI and data:

Once you get beyond the mandatory euro-boosting rhetoric about how the EU’s “technological and industrial strengths”, “high-quality digital infrastructure” and “regulatory framework based on its fundamental values” will enable Europe to become “a global leader in innovation in the data economy and its applications”, the white paper seems quite sensible. But as for all documents dealing with how actually to deal with AI, it falls back on the conventional bromides about human agency and oversight, privacy and governance, diversity, non-discrimination and fairness, societal wellbeing, accountability and that old favourite “transparency”. The only discernible omissions are motherhood and apple pie.

But this is par for the course with AI at the moment: the discourse is invariably three parts generalities, two parts virtue-signalling leavened with a smattering of pious hopes. It’s got to the point where one longs for some plain speaking and common sense.

And, as luck would have it, along it comes in the shape of Sir David Spiegelhalter, an eminent Cambridge statistician and former president of the Royal Statistical Society. He has spent his life trying to teach people how to understand statistical reasoning, and last month published a really helpful article in the Harvard Data Science Review on the question “Should we trust algorithms?”

Read on

The White House’s ten principles for AI

Must be a spoof, surely? Something apparently serious emerging from the Trump administration. Ten principles for government agencies to adhere to when proposing new AI regulations for the private sector. The move is the latest development of the American AI Initiative, launched via executive order by President Trump early last year to create a national strategy for AI. It is also part of an ongoing effort to maintain US leadership in the field.

Here are the ten principles, for what they’re worth:

  1. Public trust in AI. The government must promote reliable, robust, and trustworthy AI applications.

  2. Public participation. The public should have a chance to provide feedback in all stages of the rule-making process.

  3. Scientific integrity and information quality. Policy decisions should be based on science.

  4. Risk assessment and management. Agencies should decide which risks are and aren’t acceptable.

  5. Benefits and costs. Agencies should weigh the societal impacts of all proposed regulations.

  6. Flexibility. Any approach should be able to adapt to rapid changes and updates to AI applications.

  7. Fairness and nondiscrimination. Agencies should make sure AI systems don’t discriminate illegally.

  8. Disclosure and transparency. The public will trust AI only if it knows when and how it is being used.

  9. Safety and security. Agencies should keep all data used by AI systems safe and secure.

  10. Interagency coordination. Agencies should talk to one another to be consistent and predictable in AI-related policies.

Biased machines may be easier to fix than biased humans

This morning’s Observer column:

One of the things that really annoys AI researchers is how supposedly “intelligent” machines are judged by much higher standards than are humans. Take self-driving cars, they say. So far they’ve driven millions of miles with very few accidents, a tiny number of them fatal. Yet whenever an autonomous vehicle kills someone there’s a huge hoo-ha, while every year in the US nearly 40,000 people die in crashes involving conventional vehicles.

Likewise, the AI evangelists complain, everybody and his dog (this columnist included) is up in arms about algorithmic bias: the way in which automated decision-making systems embody the racial, gender and other prejudices implicit in the data sets on which they were trained. And yet society is apparently content to endure the astonishing irrationality and capriciousness of much human decision-making.

If you are a prisoner applying for parole in some jurisdictions, for example, you had better hope that the (human) judge has just eaten when your case comes up…

Read on

Bias in machine learning

Nice example from Daphne Koller of Google:

Another notion of bias, one that is highly relevant to my work, are cases in which an algorithm is latching onto something that is meaningless and could potentially give you very poor results. For example, imagine that you’re trying to predict fractures from X-ray images in data from multiple hospitals. If you’re not careful, the algorithm will learn to recognize which hospital generated the image. Some X-ray machines have different characteristics in the image they produce than other machines, and some hospitals have a much larger percentage of fractures than others. And so, you could actually learn to predict fractures pretty well on the data set that you were given simply by recognizing which hospital did the scan, without actually ever looking at the bone. The algorithm is doing something that appears to be good but is actually doing it for the wrong reasons. The causes are the same in the sense that these are all about how the algorithm latches onto things that it shouldn’t latch onto in making its prediction.


To recognize and address these situations, you have to make sure that you test the algorithm in a regime that is similar to how it will be used in the real world. So, if your machine-learning algorithm is one that is trained on the data from a given set of hospitals, and you will only use it in those same set of hospitals, then latching onto which hospital did the scan could well be a reasonable approach. It’s effectively letting the algorithm incorporate prior knowledge about the patient population in different hospitals. The problem really arises if you’re going to use that algorithm in the context of another hospital that wasn’t in your data set to begin with. Then, you’re asking the algorithm to use these biases that it learned on the hospitals that it trained on, on a hospital where the biases might be completely wrong.
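One standard way to surface this kind of shortcut learning is to evaluate the model with whole hospitals held out, rather than with a random split. Below is a small, entirely synthetic sketch of that check; the data, the feature construction and the accuracy gap it produces are invented purely for illustration.

```python
# Sketch: is a model recognising "which hospital", rather than the scan itself?
# Compare an ordinary random split with a split that holds out whole hospitals.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(0)
n = 1000
hospital = rng.integers(0, 5, size=n)                    # which hospital produced the scan
fracture_rate = np.array([0.05, 0.6, 0.1, 0.55, 0.3])    # differs sharply by hospital

# Synthetic "image features": noise, plus a machine signature that identifies the hospital.
X = rng.normal(size=(n, 20))
X[:, 0] += 3.0 * hospital                                # the confound a model can latch onto
y = (rng.random(n) < fracture_rate[hospital]).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
random_split = cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0))
hospitals_out = cross_val_score(model, X, y, cv=GroupKFold(5), groups=hospital)

print("random-split accuracy:      ", round(random_split.mean(), 3))
print("held-out-hospital accuracy: ", round(hospitals_out.mean(), 3))
# A big drop on held-out hospitals suggests the model learned the machines, not the bones.
```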

Can the planet afford machine learning as well as Bitcoin?

This morning’s Observer column:

There is, alas, no such thing as a free lunch. This simple and obvious truth is invariably forgotten whenever irrational exuberance teams up with digital technology in the latest quest to “change the world”. A case in point was the bitcoin frenzy, where one could apparently become insanely rich by “mining” for the elusive coins. All you needed was to get a computer to solve a complicated mathematical puzzle and – lo! – you could earn one bitcoin, which at the height of the frenzy was worth $19,783.06. All you had to do was buy a mining kit (or three) from Amazon, plug it in and become part of the crypto future.

The only problem was that mining became progressively more difficult the closer we got to the maximum number of bitcoins set by the scheme and so more and more computing power was required. Which meant that increasing amounts of electrical power were needed to drive the kit. Exactly how much is difficult to calculate, but one estimate published in July by the Judge Business School at the University of Cambridge suggested that the global bitcoin network was then consuming more than seven gigawatts of electricity. Over a year, that’s equal to around 64 terawatt-hours (TWh), which is 8 TWh more than Switzerland uses annually. So each of those magical virtual coins turns out to have a heavy environmental footprint.

At the moment, much of the tech world is caught up in a new bout of irrational exuberance. This time, it’s about machine learning, another one of those magical technologies that “change the world”…

Read on
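The gigawatts-to-terawatt-hours step in that excerpt is easy to check. Taking “more than seven gigawatts” to mean a continuous draw of roughly 7.3 GW (my assumption, chosen only to reproduce the quoted figure):

```python
# Back-of-the-envelope check of the figures quoted in the excerpt above.
hours_per_year = 365 * 24            # 8,760 hours
power_gw = 7.3                       # assumed continuous draw ("more than seven gigawatts")
energy_twh = power_gw * hours_per_year / 1000    # GW x hours = GWh; divide by 1,000 for TWh

print(f"{energy_twh:.1f} TWh per year")          # ~64 TWh, in line with the quoted estimate
```

The Switzerland comparison in the excerpt then implies an annual Swiss consumption of about 56 TWh.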

Kranzberg’s Law

As a critic of many of the ways that digital technology is currently being exploited by both corporations and governments, while also being a fervent believer in the positive affordances of the technology, I often find myself stuck in unproductive discussions in which I’m accused of being an incurable “pessimist”. I’m not: better descriptions of me are that I’m a recovering Utopian or a “worried optimist”.

Part of the problem is that the public discourse about this stuff tends to be Manichean: it lurches between evangelical enthusiasm and dystopian gloom. And eventually the discussion winds up with a consensus that “it all depends on how the technology is used” — which often leads to Melvin Kranzberg’s Six Laws of Technology — and particularly his First Law, which says that “Technology is neither good nor bad; nor is it neutral.” By which he meant that,

“technology’s interaction with the social ecology is such that technical developments frequently have environmental, social, and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.”

Many of the current discussions revolve around various manifestations of AI, which means machine learning plus Big Data. At the moment image recognition is the topic du jour. The enthusiastic refrain usually involves citing dramatic instances of the technology’s potential for social good. A paradigmatic example is the collaboration between Google’s DeepMind subsidiary and Moorfields Eye Hospital to use machine learning to greatly improve the speed of analysis of anonymized retinal scans and automatically flag ones which warrant specialist investigation. This is a good example of how to use the technology to improve the quality and speed of an important healthcare service. For tech evangelists it is an irrefutable argument for the beneficence of the technology.

On the other hand, critics will often point to facial recognition as a powerful example of the perniciousness of machine-learning technology. One researcher has even likened it to plutonium. Criticisms tend to focus on its well-known weaknesses (false positives, racial or gender bias, for example), its hasty and ill-considered use by police forces and proprietors of shopping malls, the lack of effective legal regulation, and its use by authoritarian or totalitarian regimes, particularly China.

Yet it is likely that even facial recognition has socially beneficial applications. One dramatic illustration is a project by an Indian child labour activist, Bhuwan Ribhu, who works for the Indian NGO Bachpan Bachao Andolan. He had launched a pilot program 15 months earlier to match a police database containing photos of all of India’s missing children with another comprising shots of all the minors living in the country’s child care institutions.

The results were remarkable. “We were able to match 10,561 missing children with those living in institutions,” he told CNN. “They are currently in the process of being reunited with their families.” Most of them were victims of trafficking, forced to work in the fields, in garment factories or in brothels, according to Ribhu.

This was made possible by facial recognition technology provided by New Delhi’s police. “There are over 300,000 missing children in India and over 100,000 living in institutions,” he explained. “We couldn’t possibly have matched them all manually.”
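The article doesn’t describe the system’s internals, but the core matching step in systems of this kind is typically a nearest-neighbour search over face embeddings. Here is a minimal, purely illustrative sketch of that step; the embedding model that turns photos into vectors is assumed rather than shown.

```python
# Illustrative sketch of the matching step only: given face embeddings (vectors
# produced by some face-recognition model, assumed here rather than shown), find
# for each missing-child photo the closest photo in the institutions database.
# Real systems add similarity thresholds and, crucially, human review.
import numpy as np

def best_matches(queries: np.ndarray, database: np.ndarray) -> np.ndarray:
    """For each query embedding, return the index of the most similar
    database embedding by cosine similarity."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = database / np.linalg.norm(database, axis=1, keepdims=True)
    return (q @ d.T).argmax(axis=1)

# Toy data: 3 query embeddings and 5 database embeddings, 128-dimensional.
rng = np.random.default_rng(1)
queries = rng.normal(size=(3, 128))
database = rng.normal(size=(5, 128))
print(best_matches(queries, database))
```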

This is clearly a good thing. But does it provide an overwhelming argument for India’s plan to construct one of the world’s largest facial-recognition systems with a unitary database accessible to police forces in 29 states and seven union territories?

I don’t think so. If one takes Kranzberg’s First Law seriously, then each proposed use of a powerful technology like this has to face serious scrutiny. The more important question to ask is the old Latin one: cui bono? Who benefits? And who benefits the most? And who loses? What possible unintended consequences could the deployment have? (Recognising that some will, by definition, be unforeseeable.) What are the business models of the corporations proposing to deploy it? And so on.

At the moment, however, all we mostly have is unasked questions, glib assurances and rash deployments.