AI for good is possible

This morning’s Observer column:

…As a consequence, a powerful technology with great potential for good is at the moment deployed mainly for privatised gain. In the process, it has been characterised by unregulated premature deployment, algorithmic bias, reinforcing inequality, undermining democratic processes and boosting covert surveillance to toxic levels. That it doesn’t have to be like this was vividly demonstrated last week with a report in the leading biological journal Cell of an extraordinary project, which harnessed machine learning in the public (as compared to the private) interest. The researchers used the technology to tackle the problem of bacterial resistance to conventional antibiotics – a problem that is rising dramatically worldwide, with predictions that, without a solution, resistant infections could kill 10 million people a year by 2050.

Read on

The real test of an AI machine? When it can admit to not knowing something

This morning’s Observer column on the EU’s plans for regulating AI and data:

Once you get beyond the mandatory euro-boosting rhetoric about how the EU’s “technological and industrial strengths”, “high-quality digital infrastructure” and “regulatory framework based on its fundamental values” will enable Europe to become “a global leader in innovation in the data economy and its applications”, the white paper seems quite sensible. But, as with all documents on how actually to deal with AI, it falls back on the conventional bromides about human agency and oversight, privacy and governance, diversity, non-discrimination and fairness, societal wellbeing, accountability and that old favourite “transparency”. The only discernible omissions are motherhood and apple pie.

But this is par for the course with AI at the moment: the discourse is invariably three parts generalities, two parts virtue-signalling leavened with a smattering of pious hopes. It’s got to the point where one longs for some plain speaking and common sense.

And, as luck would have it, along it comes in the shape of Sir David Spiegelhalter, an eminent Cambridge statistician and former president of the Royal Statistical Society. He has spent his life trying to teach people how to understand statistical reasoning, and last month published a really helpful article in the Harvard Data Science Review on the question “Should we trust algorithms?”

Read on

Sunday 2 February, 2020

The iPad: ten years on and still a work in progress

This morning’s Observer column:

…while the iPad I use today is significantly better and more functional than its 2010 predecessor, it’s still not a replacement for a laptop. Anything that involves multitasking – combining content from a variety of applications, for example – is clumsy and nonintuitive on the iPad, whereas it’s a breeze on a Mac. Given that user-interface design has traditionally been one of Apple’s great strengths, this clumsiness is strange and disappointing. Somewhere along the line, as veteran Apple-watcher John Gruber puts it, the designers made “profound conceptual mistakes in the iPad user interface, mistakes that need to be scrapped and replaced, not polished and refined”. Steve Jobs’s tablet may have come a long way, but it’s still a work in progress.

Do read the entire piece


A Republic if they could keep it. Looks like they couldn’t

As the farcical Senate Impeachment ‘trial’ concluded, what kept running through my mind was the story of what Benjamin Franklin said as he was leaving the Constitutional Convention of 1787 on its final day of deliberation. A woman asked him, “Well, Doctor, what have we got, a republic or a monarchy?” To which Franklin famously replied, “A republic . . . if you can keep it.”

By acquitting Trump, the Senate seems to have confirmed the failure of that attempt. Trump is now effectively a monarch, floating above the law. So, one wonders, what happens next? As a habitual offender, he will undoubtedly commit more crimes. As a sitting President, it seems that he cannot be indicted through the normal processes of law enforcement; Congress is the only constitutional authority that can punish him. But this Congress has spectacularly refused to do so. So unless the Republicans lose control of the Senate in November, Trump will be entirely free of legal restraints. And supposing he loses the presidential election (an unlikely prospect at present), would he actually stand down? And in that eventuality, who would physically remove him from the White House?

_________________________ 

Presidential power and the Net

Further to the above thoughts about the untrammelled misuse of Presidential power, Jessica Rosenworcel, an FCC Commissioner, gave a sobering keynote address to the ‘State of the Net’ conference in Washington on 28 January.

She began by describing what’s currently going on in Kashmir, where the Indian government has cut off internet access for the 7 million people who live in that disputed territory. In one vivid passage, she described how Kashmiris are coping with the blackout:

Every morning like clockwork hundreds of passengers cram into a train out of the valley for a 70-mile journey to the nearest town with a connection. They are packed so tightly that they can barely move. If all goes well, they will be back before nightfall. Kashmiris have dubbed the train the “Internet Express.” It carries people hoping to renew driver’s licenses, apply for passports, fill out admission forms, check e-mail, and register for school exams. This is how they keep up with modern life, thanks to the shutdown.

Then Rosenworcel turned to her audience:

Now if you are thinking this does not concern you because all of this is happening a world away, I understand. After all, the shutdown in the Kashmir Valley followed from the state invoking the Indian Telegraph Act of 1885, a law that dates to the British colonial era. Moreover, a few weeks ago Indian courts ruled that an indefinite internet shutdown is an abuse of power—although that decision alone does not restore all service. So you might think this is at some distance from what could happen in the United States. But you might want to think again.

Specifically, Americans might want to take a look at Section 706 of the Communications Act. The section allows the President to shut down or take control of “any facility or station for wire communication” if he proclaims “that there exists a state or threat of war involving the United States.” With respect to wireless communications, suspending service is permitted not only in a “war or threat of war” but also whenever there is a presidential proclamation of a “state of public peril” or simply a “disaster or other national emergency.” There is no requirement in the law for the President to provide any advance notice to Congress.

“This language”, says Rosenworcel,

is undeniably broad. The power it describes is virtually unchecked. So maybe some context will help. The changes to this section of the law about wire communications were made within a month after the attack on Pearl Harbor. It was passed at a time when Congress was laser focused on developing new ways to protect our safety and security.

Now of course Section 706 has not (yet) been applied to the Internet, and when the Act was amended after Pearl Harbor, “wire communication” meant telephone calls or telegrams. But remember that the bulk of US communications law dates back to 1934 and still provides the framework for US communications infrastructure. And she points out that, in a 2010 report, the Senate concluded that Section 706 “gives the President the authority to take over wire communications in the United States and, if the President so chooses, shut a network down.”

So it remains true that if a sitting President wants to shut down the internet or selectively cut off a service, all it takes is an opinion from his Attorney General that Section 706 gives him the authority to do so.

That’s alarming. Because if you believe there are unspoken norms that would prevent us from using Section 706 this way, let me submit to you that past practice may no longer be the best guide for future behavior. Norms are being broken all the time in Washington and relying on them to cabin legal interpretation is not the best way to go.

Which rather puts the Impeachment case in a different light. Shutting down the US Internet would be unthinkable, wouldn’t it? Before nodding your head in vigorous agreement, ask yourself how many ‘unthinkable’ things have happened since Trump took office.


Sunday 26 January, 2020

What the Clearview AI story means

This morning’s Observer column:

Ultimately, the lesson of Clearview is that when a digital technology is developed, it rapidly becomes commodified. Once upon a time, this stuff was the province of big corporations. Now it can be exploited by small fry. And on a shoestring budget. One of the co-founders paid for server costs and basic expenses. Mr Ton-That lived on credit-card debt. And everyone worked from home. “Democracy dies in darkness” goes the motto of the Washington Post. “Privacy dies in a hacker’s bedroom” might now be more appropriate.

Read on

UPDATE: A lawsuit — seeking class-action status — was filed this week in Illinois against Clearview AI, a New York-based startup that has scraped social media networks for people’s photos and created one of the biggest facial recognition databases in the world.


Privacy is a public good

Shoshana Zuboff in full voice:

“The belief that privacy is private has left us careening toward a future that we did not choose, because it failed to reckon with the profound distinction between a society that insists upon sovereign individual rights and one that lives by the social relations of the one-way mirror. The lesson is that privacy is public — it is a collective good that is logically and morally inseparable from the values of human autonomy and self-determination upon which privacy depends and without which a democratic society is unimaginable.”

Great OpEd piece.


The winding path


Why the media shouldn’t underestimate Joe Biden

Simple: Trump’s crowd don’t. They think he’s the real threat. (Which explains the behaviour that’s led to Trump’s Impeachment.) David Brooks has some sharp insights into why the chattering classes are off target about this.

It’s the 947th consecutive sign that we in the coastal chattering classes have not cured our insularity problem. It’s the 947th case in which we see that every second you spend on Twitter detracts from your knowledge of American politics, and that the only cure to this insularity disease is constant travel and interviewing, close attention to state and local data and raw abject humility about the fact that the attitudes and academic degrees that you think make you clever are actually the attitudes and academic degrees that separate you from the real texture of American life.

Also, the long and wide-ranging [NYT interview](https://www.nytimes.com/interactive/2020/01/17/opinion/joe-biden-nytimes-interview.html) with him is full of interesting stuff — like the fact that he thinks Section 230 of the Communications Decency Act (that’s the get-out-of-gaol card for the tech companies) should be revoked. I particularly enjoyed this observation by Brooks: “Jeremy Corbyn in Britain and Bernie Sanders here are a doctoral student’s idea of a working-class candidate, not an actual working person’s idea of one.”



Serial Killers: Moore’s Law and the parallelisation bubble

Cory Doctorow had a thoughtful reaction to Sunday’s Observer column, where I cited Nathan Myhrvold’s Four Laws of Software. “Reading it”, he writes,

made me realize that we were living through a parallel computation bubble. The period in which Moore’s Law had declined also overlapped with the period in which computing came to be dominated by a handful of applications that are famously parallel — applications that have seemed overhyped even by the standards of the tech industry: VR, cryptocurrency mining, and machine learning.

Now, all of these have other reasons to be frothy: machine learning is the ideal tool for empiricism-washing, through which unfair policies are presented as “evidence-based”; cryptocurrencies are just the thing if you’re a grifty oligarch looking to launder your money; and VR is a new frontier for the moribund, hyper-concentrated entertainment industry to conquer.

“Parallelizable problems become hammers in search of nails,” Cory continued in an email:

“If your problem can be decomposed into steps that can be computed independent of one another, we’ve got JUST the thing for you — so, please, tell me about all the problems you have that fit the bill?”

This is arguably part of why we’re living through a cryptocurrency and ML bubble: even though these aren’t solving our most pressing problems, they are solving our most TRACTABLE ones. We’re looking for our keys under the readily computable lamppost, IOW.

Which leads Cory (@doctorow) to this “half-formed thought”: the bubbles in VR, machine learning and cryptocurrency are partly explained by the decline in returns to Moore’s Law, which means that parallelizable problems are cheaper/easier to solve than linear ones.

And to wondering what the counterfactual would have been like if we had found a way of extending Moore’s Law indefinitely.
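To make that distinction concrete, here is a minimal Python sketch of my own (not Cory’s, and not from the column; the function names are invented for illustration). The same batch of independent calculations is run once sequentially and once spread across worker processes, which is exactly the kind of workload that keeps getting cheaper as core counts rise, even while single-core speeds stall:

```python
# A toy illustration of an "embarrassingly parallel" job: each step depends
# only on its own input, so the steps can be farmed out to as many cores as
# happen to be available.
from multiprocessing import Pool
import math

def heavy(n: int) -> float:
    # Stand-in for any self-contained chunk of work.
    return sum(math.sqrt(i) for i in range(n))

def run_serial(jobs):
    # One core, one job at a time.
    return [heavy(n) for n in jobs]

def run_parallel(jobs, workers: int = 4):
    # The same jobs, spread across a pool of worker processes.
    with Pool(workers) as pool:
        return pool.map(heavy, jobs)

if __name__ == "__main__":
    jobs = [200_000] * 8
    # Identical results; the parallel version just finishes sooner on a multicore machine.
    assert run_serial(jobs) == run_parallel(jobs)
```

A computation whose steps each depend on the previous step’s result gets no such free lunch from extra cores, which is the asymmetry behind Cory’s “hammers in search of nails” remark.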

As Moore’s Law runs out of steam, it’ll be back to the future

This morning’s Observer column:

In a lecture in 1997, Nathan Myhrvold, who was once Bill Gates’s chief technology officer, set out his Four Laws of Software. 1: software is like a gas – it expands to fill its container. 2: software grows until it is limited by Moore’s law. 3: software growth makes Moore’s law possible – people buy new hardware because the software requires it. And, finally, 4: software is only limited by human ambition and expectation.

As Moore’s law reaches the end of its dominion, Myhrvold’s laws suggest that we basically have only two options. Either we moderate our ambitions or we go back to writing leaner, more efficient code. In other words, back to the future.
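As a toy illustration of what “leaner, more efficient code” can mean in practice (my own sketch, not Myhrvold’s and not from the column), here are two ways of answering the same question in Python: one careless about how much work it does, one not:

```python
# Same task, two implementations: does a collection contain any duplicates?
def has_duplicates_slow(items):
    # Compares every pair: roughly n * n comparisons for n items.
    return any(a == b for i, a in enumerate(items) for b in items[i + 1:])

def has_duplicates_lean(items):
    # Remembers what it has already seen: roughly n operations in total.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

if __name__ == "__main__":
    data = list(range(50_000)) + [0]           # one duplicate, right at the end
    print(has_duplicates_lean(data))           # True, almost instantly
    # has_duplicates_slow(data) gives the same answer, but needs on the order
    # of a billion comparisons to do it on the same hardware.
```

Nothing about the hardware changes between the two versions; the second simply does far less work, which is the kind of gain we will have to find for ourselves once Moore’s law stops delivering it for free.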

Read on

Cummings: long on ideas, short on strategy

My Observer OpEd piece about the world’s most senior technocrat:

When Dominic Cummings arrived in Downing Street, some of his new colleagues were puzzled by one of his mantras: “Get Brexit done, then Arpa”. Now, perhaps, they have some idea of what that meant. On 2 January, Cummings published on his blog the wackiest job proposals to emerge from a government since the emperor Caligula made his horse a consul.

The ad took the form of a long post under the heading “We’re hiring data scientists, project managers, policy experts, assorted weirdos…”. It included a reading list of arcane academic papers that applicants were expected to read and digest, and declared that applications from “super-talented weirdos” would be especially welcome. They should assemble a one-page letter, attach a CV and send it to ideasfornumber10@gmail.com. (Yes, that’s @gmail.com.)

It was clear that nobody from HR was involved in composing this call for clever young things. Alerting applicants to the riskiness of employment by him, Cummings writes: “I’ll bin you within weeks if you don’t fit – don’t complain later because I made it clear now.”

The ad provoked predictable outrage and even the odd parody. The most interesting thing about it, though, is its revelations of what moves the man who is now the world’s most senior technocrat. The “Arpa” in his mantra, for example, is classic Cummings, because the Pentagon’s Advanced Research Projects Agency (now Darpa) is one of his inspirational models…

Read on

Raspberry Pi: a great British success story

This morning’s Observer column:

I bought my Pi from the Raspberry Pi store in Cambridge. Across the street (and one floor below) is the Apple store where I had earlier gone to buy a new keyboard for one of my Macs. The cost: £99. So for £15 more, I had a desktop computer perfectly adequate for most of the things I need to do for my work.

The Pi is one of the (few) great British technology success stories of the last decade: sales recently passed the 30m mark. But if you got your news from mainstream media you’d never know…

Read on

Sleepwalking into dystopia

This morning’s Observer column:

When the history of our time comes to be written, one of the things that will puzzle historians (assuming any have survived the climate cataclysm) is why we allowed ourselves to sleepwalk into dystopia. Ever since 9/11, it’s been clear that western democracies had embarked on a programme of comprehensive monitoring of their citizenry, usually with erratic and inadequate democratic oversight. But we only began to get a fuller picture of the extent of this surveillance when Edward Snowden broke cover in the summer of 2013.

For a time, the dramatic nature of the Snowden revelations focused public attention on the surveillance activities of the state. In consequence, we stopped thinking about what was going on in the private sector. The various scandals of 2016, and the role that network technology played in the political upheavals of that year, constituted a faint alarm call about what was happening, but in general our peaceful slumbers resumed: we went back to our smartphones and the tech giants continued their appropriation, exploitation and abuse of our personal data without hindrance. And this continued even though a host of academic studies and a powerful book by Shoshana Zuboff showed that, as the cybersecurity guru Bruce Schneier put it, “the business model of the internet is surveillance”.

The mystery is why so many of us are still apparently relaxed about what’s going on…

Read on

The 26 words that created the Internet we have today

This morning’s Observer column:

Stratton Oakmont sued Prodigy and the unidentified poster for defamation – and won. Prodigy argued that it couldn’t be held responsible for what anonymous users posted on its platform. The judge disagreed, ruling that because the company exercised editorial control over the messages on its bulletin boards in several ways, it counted as the publisher of the content created by its users and was thereby potentially liable for any and all defamatory material posted on its websites.

The case alarmed an Oregon congressman (now a US senator), Ronald Wyden, who accurately perceived it as a mortal threat to the growth of the internet. It would mean that every online hosting service would need to have lawyers crawling over its site, thereby slowing exploitation of the technology to a crawl. So with another congressman, Chris Cox, he inserted a short clause – Section 230 – into the Communications Decency Act, which was then incorporated in the sprawling 1996 Telecommunications Act. The section itself is short (about a thousand words) but the core of it is a single sentence: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

That sentence laid the basis for everything that has followed. It constitutes, as the title of a recent book puts it, The Twenty-Six Words that Created the Internet. What it does is create a “liability shield” for online platforms…

Read on