Wednesday 1 July, 2020

How things change: George Osborne and David Cameron sucking up to Xi Jinping in 2015

From Politico’s wonderful daily briefing, commenting on the conundrum posed for Brexiteers by China’s brutal crackdown in Hong Kong.

A long time ago, in a galaxy far, far away: “We have cemented Britain’s position as China’s best partner in the West,” a triumphant George Osborne beamed as he rolled out the red carpet for Chinese officials back in 2015. “We’ve got billions of pounds of Chinese investment creating thousands of jobs in Britain, and we’ve also now got a relationship where we can discuss the difficult issues.” Uh huh. And look — here’s David Cameron glugging a pint of warm ale with Xi Jinping on that same trip. You have to wonder how bad these clips will look another five years from now.

Actually, they looked pretty odious at the time too.


Coronavirus: What does Covid-19 do to the brain?

Paul Mylrea is a friend and valued colleague. In the early stages of the pandemic he was struck down with Covid and became desperately ill. But he survived and is now recovering from the two strokes he suffered as the virus rampaged through his body. The fact that Covid had laid him low shocked me, because he’s one of the fittest people I know: among other things, he was a senior diving instructor, swam every morning in the river Cam, and went everywhere on his Brompton bike. I remember thinking: “If the virus has got Paul, then nobody’s safe.”

This piece by Fergus Walsh, the BBC’s medical correspondent, about Paul’s struggle with the illness is a heartwarming story of medical skill and the body’s capacity for renewal. It is also confirmation of what a deadly and multifaceted pathogen Covid-19 is.


Imagine if the National Transportation Safety Board investigated America’s response to the coronavirus pandemic.

Absolutely fascinating Atlantic essay by James Fallows.

Here’s the gist:

Consider a thought experiment: What if the NTSB were brought in to look at the Trump administration’s handling of the pandemic? What would its investigation conclude? I’ll jump to the answer before laying out the background: This was a journey straight into a mountainside, with countless missed opportunities to turn away. A system was in place to save lives and contain disaster. The people in charge of the system could not be bothered to avoid the doomed course.

James Fallows is both a gifted writer and a keen pilot. This long essay is well worth reading in full.


The short-term decline in FB ad spending

Lots of big firms (Unilever and Coca-Cola, to name just two) have been announcing that they will not be buying ads on Facebook in response to the #BlackLivesMatter campaign. I’m afraid my instinctive reaction was to see this as empty virtue-signalling, and to predict privately that it would have little impact on Facebook’s bottom line in the longer run.

The New York Times has a story today which might appear to refute this. “Advertiser Exodus Snowballs as Facebook Struggles to Ease Concerns” is the headline.

Yet even as Facebook has labored to stanch the ad exodus, it is having little effect. Executives at ad agencies said that more of their clients were weighing whether to join the boycott, which now numbers more than 300 advertisers and is expected to grow. Pressure on top advertisers is coming from politicians, supermodels, actors and even Prince Harry and his wife, Meghan, they said. Internally, some Facebook employees said they were also using the boycott to push for change.

“Other companies are seeing this moment, and are stepping up proactively,” said Jonathan Greenblatt, chief executive of the Anti-Defamation League, citing recent efforts from Reddit, YouTube and Twitch taking down posts and content that promote hate speech across their sites. “If they can do it, and all of Facebook’s advertisers are asking them to do it, it doesn’t seem that hard to do.”

The push from advertisers has led Facebook’s business to a precarious point. While the social network has struggled with issues such as election interference and privacy in recent years, its juggernaut digital ads business has always powered forward. The Silicon Valley company has never faced a public backlash of this magnitude from its advertisers, whose spending accounts for more than 98 percent of its annual $70.7 billion in revenue.

I don’t buy that stuff about a “precarious point”. And data from Socialbakers doesn’t confirm it, as this chart suggests:

Note the sharp fall around the time of the protests — and then the rapid recovery.

Big corporations engaging in virtue-signalling will make little difference to Facebook’s bottom line. The company probably makes most of its ad revenue from small and medium-sized firms, for whom its targeted-advertising system is perfect. And they aren’t going to stop advertising for ethical reasons.

The Economist agrees:

The damage to Facebook is likely to be small. Its $70bn ad business is built on 8m advertisers, most of them tiny companies with marketing budgets in the hundreds or thousands of dollars and often reliant on Facebook as an essential digital storefront. The 100 largest advertisers on the site account for less than 20% of total revenue, compared with 71% for the 100 largest advertisers on American network television. And so far only three of Facebook’s top 50 ad-buyers have joined the boycott.
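The arithmetic behind that conclusion is worth spelling out. Here is a back-of-the-envelope calculation using the figures quoted above; the “every top-100 advertiser pauses for a full quarter” scenario is my own deliberately extreme assumption, not anyone’s forecast:

```python
# Rough upper bound on the Facebook revenue at risk from the boycott,
# using the figures quoted above (NYT and The Economist).

annual_revenue = 70.7e9   # Facebook's annual revenue (NYT figure)
ad_share = 0.98           # advertising's share of that revenue
top100_share = 0.20       # top 100 advertisers: under 20% of total revenue

ad_revenue = annual_revenue * ad_share
# Extreme scenario: every one of the top 100 advertisers pauses for a
# full quarter.
worst_case_quarter = annual_revenue * top100_share / 4

print(f"Annual ad revenue: ${ad_revenue / 1e9:.1f}bn")
print(f"Worst-case quarterly hit: ${worst_case_quarter / 1e9:.1f}bn")
# Roughly $3.5bn at most; with only three of the top 50 buyers actually
# boycotting, the realistic figure is a small fraction of even that.
```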


This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!


The White House’s ten principles for AI

Must be a spoof, surely? Something apparently serious emerging from the Trump administration: ten principles for government agencies to adhere to when proposing new AI regulations for the private sector. The move is the latest development in the American AI Initiative, launched via executive order by President Trump early last year to create a national strategy for AI. It is also part of an ongoing effort to maintain US leadership in the field.

Here are the ten principles, for what they’re worth:

Public trust in AI. The government must promote reliable, robust, and trustworthy AI applications.

Public participation. The public should have a chance to provide feedback in all stages of the rule-making process.

Scientific integrity and information quality. Policy decisions should be based on science.

Risk assessment and management. Agencies should decide which risks are and aren’t acceptable.

Benefits and costs. Agencies should weigh the societal impacts of all proposed regulations.

Flexibility. Any approach should be able to adapt to rapid changes and updates to AI applications.

Fairness and nondiscrimination. Agencies should make sure AI systems don’t discriminate illegally.

Disclosure and transparency. The public will trust AI only if it knows when and how it is being used.

Safety and security. Agencies should keep all data used by AI systems safe and secure.

Interagency coordination. Agencies should talk to one another to be consistent and predictable in AI-related policies.

Kranzberg’s Law

As a critic of many of the ways that digital technology is currently being exploited by both corporations and governments, while also being a fervent believer in the positive affordances of the technology, I often find myself stuck in unproductive discussions in which I’m accused of being an incurable “pessimist”. I’m not: a better description would be a recovering Utopian, or a “worried optimist”.

Part of the problem is that the public discourse about this stuff tends to be Manichean: it lurches between evangelical enthusiasm and dystopian gloom. And eventually the discussion winds up with a consensus that “it all depends on how the technology is used” — which often leads to Melvin Kranzberg’s Six Laws of Technology — and particularly his First Law, which says that “Technology is neither good nor bad; nor is it neutral.” By which he meant that,

“technology’s interaction with the social ecology is such that technical developments frequently have environmental, social, and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.”

Many of the current discussions revolve around various manifestations of AI, which means machine learning plus Big Data. At the moment image recognition is the topic du jour. The enthusiastic refrain usually involves citing dramatic instances of the technology’s potential for social good. A paradigmatic example is the collaboration between Google’s DeepMind subsidiary and Moorfields Eye Hospital to use machine learning to greatly improve the speed of analysis of anonymized retinal scans and automatically flag ones which warrant specialist investigation. This is a good example of how to use the technology to improve the quality and speed of an important healthcare service. For tech evangelists it is an irrefutable argument for the beneficence of the technology.

On the other hand, critics will often point to facial recognition as a powerful example of the perniciousness of machine-learning technology. One researcher has even likened it to plutonium. Criticisms tend to focus on its well-known weaknesses (false positives, racial or gender bias, for example), its hasty and ill-considered use by police forces and proprietors of shopping malls, the lack of effective legal regulation, and on its use by authoritarian or totalitarian regimes, particularly China.

Yet it is likely that even facial recognition has socially beneficial applications. One dramatic illustration is a project by an Indian child-labour activist, Bhuwan Ribhu, who works for the Indian NGO Bachpan Bachao Andolan. He launched a pilot program some fifteen months ago to match a police database containing photos of all of India’s missing children against another comprising shots of all the minors living in the country’s child-care institutions.

The results were remarkable. “We were able to match 10,561 missing children with those living in institutions,” he told CNN. “They are currently in the process of being reunited with their families.” Most of them were victims of trafficking, forced to work in the fields, in garment factories or in brothels, according to Ribhu.

This was made possible by facial recognition technology provided by New Delhi’s police. “There are over 300,000 missing children in India and over 100,000 living in institutions,” he explained. “We couldn’t possibly have matched them all manually.”
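Neither the CNN report nor the NGO says which system Delhi’s police used, but the underlying technique is standard: reduce each face to a numerical embedding and compare distances. Here is a minimal sketch of that idea using the open-source face_recognition library; the file names are hypothetical placeholders, and the 0.6 threshold is just that library’s conventional default:

```python
# Minimal sketch of embedding-based face matching, the technique that makes
# comparing two large photo databases feasible. Uses the open-source
# `face_recognition` library; the system actually deployed in India is not
# public, and the file names below are placeholders.
import face_recognition

def embedding(path):
    """Return the first face embedding found in an image, or None."""
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    return encodings[0] if encodings else None

missing = embedding("missing_child.jpg")        # from the police database
candidate = embedding("institution_photo.jpg")  # from the care-home database

if missing is not None and candidate is not None:
    # Distances below ~0.6 are conventionally treated as a match.
    distance = face_recognition.face_distance([missing], candidate)[0]
    print("possible match" if distance < 0.6 else "no match", distance)
```

At database scale the pairwise comparison becomes a nearest-neighbour search over embeddings, which is why matching 300,000 photos against 100,000 is tractable for a machine but not for humans working by hand.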

Reuniting trafficked children with their families is clearly a good thing. But does it provide an overwhelming argument for India’s plan to construct one of the world’s largest facial-recognition systems, with a unitary database accessible to police forces in 29 states and seven union territories?

I don’t think so. If one takes Kranzberg’s First Law seriously, then each proposed use of a powerful technology like this has to face serious scrutiny. The most important question to ask is the old Latin one: Cui bono? Who benefits? Who benefits the most? And who loses? What unintended consequences could the deployment have? (Recognising that some will, by definition, be unforeseeable.) What are the business models of the corporations proposing to deploy it? And so on.

At the moment, however, all we mostly have is unasked questions, glib assurances and rash deployments.

Excavating AI

Fabulous essay by Kate Crawford and Trevor Paglen, uncovering the politics and biases embedded in the huge image databases that have been used for training machine-learning software. Here’s how it begins:

You open up a database of pictures used to train artificial intelligence systems. At first, things seem straightforward. You’re met with thousands of images: apples and oranges, birds, dogs, horses, mountains, clouds, houses, and street signs. But as you probe further into the dataset, people begin to appear: cheerleaders, scuba divers, welders, Boy Scouts, fire walkers, and flower girls. Things get strange: A photograph of a woman smiling in a bikini is labeled a “slattern, slut, slovenly woman, trollop.” A young man drinking beer is categorized as an “alcoholic, alky, dipsomaniac, boozer, lush, soaker, souse.” A child wearing sunglasses is classified as a “failure, loser, non-starter, unsuccessful person.” You’re looking at the “person” category in a dataset called ImageNet, one of the most widely used training sets for machine learning.

Something is wrong with this picture.

Where did these images come from? Why were the people in the photos labeled this way? What sorts of politics are at work when pictures are paired with labels, and what are the implications when they are used to train technical systems?

In short, how did we get here?

The authors begin with a deceptively simple question: What work do images do in AI systems? What are computers meant to recognize in an image and what is misrecognised or even completely invisible? They examine the methods used for introducing images into computer systems and look at “how taxonomies order the foundational concepts that will become intelligible to a computer system”. Then they turn to the question of labeling: “how do humans tell computers which words will relate to a given image? And what is at stake in the way AI systems use these labels to classify humans, including by race, gender, emotions, ability, sexuality, and personality?” And finally, they turn to examine the purposes that computer vision is meant to serve in our society and interrogate the judgments, choices, and consequences of providing computers with these capacities.
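One concrete detail makes the taxonomy point vivid. ImageNet’s categories are taken directly from WordNet’s noun synsets, so a startling label string like “slattern, slut, slovenly woman, trollop” is simply a synset’s list of lemma names. A minimal look-up with NLTK (assuming the WordNet corpus is installed) shows where such labels live:

```python
# ImageNet's "person" categories come from WordNet's noun synsets, so the
# label strings quoted above are just a synset's lemma names.
import nltk
nltk.download("wordnet", quiet=True)  # fetch the corpus on first run
from nltk.corpus import wordnet as wn

for synset in wn.synsets("slattern", pos=wn.NOUN):
    print(synset.name(), "->", ", ".join(synset.lemma_names()))
    print("   definition:", synset.definition())
```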

This is a really insightful and sobering essay, based on extensive research.

Some time ago Crawford and Paglen created an experimental website — ImageNet Roulette — which let anyone upload a photograph and see how ImageNet would classify the person in it. The site is now offline, but the Guardian journalist Julia Carrie Wong recently wrote an interesting article about it, in the course of which she investigated how it would classify and describe her from her Guardian byline photo. Here’s what she found.

Interesting, n’est-ce pas? Remember, this is the technology underpinning facial recognition.

Do read the whole thing.

Creative wealth and moral bankruptcy

Tomorrow’s Observer column, which for some reason is online today:

In the parallel moral universe known as the tech industry, the MIT media lab was Valhalla. “The engineers, designers, scientists and physicians who constitute the two dozen research groups housed there,” burbled the Atlantic in a profile of what it called the Idea Factory, “work in what may be the world’s most interesting, most hyper-interdisciplinary thinktank.” It has apparently been responsible for a host of groundbreaking innovations including “the technology behind the Kindle and Guitar Hero” (I am not making this up) and its researchers “end up pollinating other projects with insights and ideas, within a hive of serendipitous collaboration”.

That was written in 2011. In the last two weeks, we have discovered that some of this groundbreaking work was funded by Jeffrey Epstein, the financial wizard who took his own life rather than face prosecution for sex trafficking and other crimes. It should be pointed out that most of those researchers were entirely unaware of who was funding their work and some of them have been very upset by learning the truth. Their distress is intensified by the discovery that their ignorance was not accidental…

Read on

MORE: danah boyd’s acceptance speech (link in the post below) is worth reading in this context, because she worked for a time at the Media Lab.

Why the tech industry has to change

From danah boyd’s acceptance speech on being given the 2019 Barlow/Pioneer award:

“Move fast and break things” is an abomination if your goal is to create a healthy society. Taking short-cuts may be financially profitable in the short-term, but the cost to society is too great to be justified. In a healthy society, we accommodate differently abled people through accessibility standards, not because it’s financially prudent but because it’s the right thing to do. In a healthy society, we make certain that the vulnerable amongst us are not harassed into silence because that is not the value behind free speech. In a healthy society, we strategically design to increase social cohesion because binaries are machine logic not human logic.

The Great Reckoning is in front of us. How we respond to the calls for justice will shape the future of technology and society. We must hold accountable all who perpetuate, amplify, and enable hate, harm, and cruelty. But accountability without transformation is simply spectacle. We owe it to ourselves and to all of those who have been hurt to focus on the root of the problem. We also owe it to them to actively seek to not build certain technologies because the human cost is too great.

Google’s big move into ethics-theatre backfires.

This morning’s Observer column:

Given that the tech giants, which have been ethics-free zones from their foundations, owe their spectacular growth partly to the fact that they have, to date, been entirely untroubled either by legal regulation or scruples about exploiting taxation loopholes, this Damascene conversion is surely something to be welcomed, is it not? Ethics, after all, is concerned with the moral principles that affect how individuals make decisions and how they lead their lives.

That charitable thought is unlikely to survive even a cursory inspection of what is actually going on here. In an admirable dissection of the fourth of Google’s “principles” (“Be accountable to people”), for example, Prof David Watts reveals that, like almost all of these principles, it has the epistemological status of pocket lint or those exhortations to be kind to others one finds on evangelical websites. Does it mean accountable to “people” in general? Or just to Google’s people? Or to someone else’s people (like an independent regulator)? Answer comes there none from the code.

Warming to his task, Prof Watts continues: “If Google’s AI algorithms mistakenly conclude I am a terrorist and then pass this information on to national security agencies who use the information to arrest me, hold me incommunicado and interrogate me, will Google be accountable for its negligence or for contributing to my false imprisonment? How will it be accountable? If I am unhappy with Google’s version of accountability, to whom do I appeal for justice?”

Quite so. But then Google goes and doubles down on absurdity with its prestigious “advisory council” that “will consider some of Google’s most complex challenges that arise under our AI Principles, such as facial recognition and fairness in machine learning, providing diverse perspectives to inform our work”…

Read on

After I’d written the column, Google announced that it was dissolving its ethics advisory council. So we had to add this:

Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity.

That still leaves the cynical absurdity of Google’s AI ‘principles’ to be addressed, though.

Most Facebook users are entirely unmoved by the Cambridge Analytica scandal

Sad (and predictable) but true — from Reuters:

NEW YORK/SAN FRANCISCO (Reuters) – Most of Facebook’s U.S. users have remained loyal to the social network despite revelations that a political consultancy collected information about millions of accounts without owners’ permission, a Reuters/Ipsos poll released on Sunday showed.

The Reuters/Ipsos poll adds to other indications that Facebook has so far suffered no ill effects from the episode, other than a public relations headache.

The national online poll, conducted April 26-30, found that about half of Facebook’s American users said they had not recently changed the amount that they used the site, and another quarter said they were using it more.

The remaining quarter said that they were using it less recently, had stopped using it or deleted their account.

That means that the people using Facebook less were roughly balanced by those using it more, with no clear net loss or gain in use.

In a way, all this does is confirm that the vast majority of our fellow citizens are deaf to ethical considerations. We’ve seen this for the best part of a century in the UK, where most of the population read (and pay for) ethically dubious and politically biased tabloid newspapers.

Sweeping the Net for… [take your pick]

From Ron Deibert:

The LGBTQ news website, “Gay Today,” is blocked in Bahrain; the website for Greenpeace International is blocked in the UAE; a matrimonial dating website is censored in Afghanistan; all of the World Health Organization’s website, including sub-pages about HIV/AIDS information, is blocked in Kuwait; an entire category of websites labeled “Sex Education,” are all censored in Sudan; in Yemen, an armed faction, the Houthis, orders the country’s main ISP to block regional and news websites.

What’s the common denominator linking these examples of Internet censorship? All of them were undertaken using technology provided by the Canadian company, Netsweeper, Inc.

In a new Citizen Lab report published today, entitled Planet Netsweeper, we map the global proliferation of Netsweeper’s Internet filtering technology to 30 countries. We then focus our analysis on 10 countries with significant human rights, insecurity, or public policy issues in which Netsweeper systems are deployed on large consumer ISPs: Afghanistan, Bahrain, India, Kuwait, Pakistan, Qatar, Somalia, Sudan, UAE, and Yemen. The research was done using a combination of network measurement and in-country testing methods. One method involved scanning every one of the billions of IP addresses on the Internet to search for signatures we have developed for Netsweeper installations (think of it like an x-ray of the Internet).

National-level Internet censorship is a growing norm worldwide. It is also a big business opportunity for companies like Netsweeper. Netsweeper’s Internet filtering service works by dynamically categorizing Internet content, and then providing customers with options to choose categories they wish to block (e.g., “Matrimonial” in Afghanistan and “Sex Education” in Sudan). Customers can also create their own custom lists or add websites to categories of their own choosing.

Netsweeper markets its services to a wide range of clients, from institutions like libraries to large ISPs that control national-level Internet connectivity. Our report highlights problems with the latter, and specifically the problems that arise when Internet filtering services are sold to ISPs in authoritarian regimes, or countries facing insecurity, conflict, human rights abuses, or corruption. In these cases, Netsweeper’s services can easily be abused to help facilitate draconian controls on the public sphere by stifling access to information and freedom of expression.

While there are a few categories that some might consider non-controversial—e.g., filtering of pornography and spam—there are others that definitely are not. For example, Netsweeper offers a filtering category called “Alternative Lifestyles,” in which it appears mostly legitimate LGBTQ content is targeted for convenient blocking. In our testing, we found this category was selected in the United Arab Emirates and was preventing Internet users from accessing the websites of the Gay & Lesbian Alliance Against Defamation (http://www.glaad.org) and the International Foundation for Gender Education (http://www.ifge.org), among many others. This kind of censorship, facilitated by Netsweeper technology, is part of a larger pattern of systemic discrimination, violence, and other human rights abuses against LGBTQ individuals in many parts of the world.

According to the United Nations Guiding Principles on Business and Human Rights, all companies have responsibilities to evaluate and take measures to mitigate the negative human rights impacts of their services on an ongoing basis. Despite many years of reporting and numerous questions from journalists and academics, Netsweeper still fails to take this obligation seriously.
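The internet-wide scan Deibert mentions is conceptually simple, even if the engineering at scale is not. Here is a toy sketch of the probe-and-match step; the fingerprint string is an invented placeholder, not Citizen Lab’s actual Netsweeper signature:

```python
# Toy version of the probe-and-match step behind an internet-wide signature
# scan: connect to an address, fetch whatever answers on port 80, and look
# for a product fingerprint in the response. The signature below is an
# invented placeholder, not Citizen Lab's actual fingerprint.
import socket

HYPOTHETICAL_SIGNATURE = b"netsweeper"  # placeholder fingerprint

def probe(ip: str, port: int = 80, timeout: float = 3.0) -> bool:
    """Return True if the host's HTTP response contains the signature."""
    request = b"GET / HTTP/1.0\r\nHost: " + ip.encode() + b"\r\n\r\n"
    try:
        with socket.create_connection((ip, port), timeout=timeout) as sock:
            sock.sendall(request)
            response = sock.recv(4096)
    except OSError:
        return False
    return HYPOTHETICAL_SIGNATURE in response.lower()

print(probe("192.0.2.1"))  # TEST-NET-1 address: times out and prints False
```

At internet scale the loop over addresses is handled by purpose-built tools such as ZMap, which can sweep the entire IPv4 space in hours; the matching logic stays this simple.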

A pivotal moment

The resounding ‘Yes’ vote in the Irish referendum on changing the Constitution to allow same-sex marriage is a pivotal moment in the history of my beloved homeland. And in the history of the world too, in a small way, because this is the first occasion on which legal equality has been conferred on non-heterosexuals by a popular vote.

My private expectation was that it would be a narrowly positive vote, and that it would be decided by the urban/rural divide, with the electorates of Dublin, Cork and Galway voting overwhelmingly ‘Yes’ and most of the rural constituencies voting ‘No’. In the event I was completely wrong: only one constituency (Roscommon-South Leitrim) went negative, and that by a small margin. There was still an urban/rural divide, but it was much narrower than I had expected.


Cartoon by Martyn Turner in today’s Irish Times.

What it means (and what the Archbishop of Dublin, Diarmuid Martin, conceded) is that Irish society has finally turned the corner towards secularity. What’s astonishing, in some ways, is that it took so long, especially given how long the revelations about the hypocrisy and criminality of the Catholic church over child abuse have been in the public domain. The idea that this decrepit, decaying institution could pretend to be a guide to morals (not to mention politics) was laughable for decades, but it seems that it is only now that its bluff has finally been called.

In one way, it was bound to happen, for demographic reasons — or what marketing consultants call “biological leakage”, i.e. the remorseless tendency of older people to pass away. But that doesn’t lessen the sense of wonder that it has finally happened. As the Irish Times put it in its First Leader,

“the time when bishops could instruct the Irish people on how to vote has long gone. What we may not have appreciated until now is that being a young, networked society has political consequences that can overturn the cynical conventional wisdom about voting behaviour, turnout and engagement.

This is the first Irish electoral event in which young people have taken the lead and determined the outcome and it has been a bracing, refreshing experience. It had been visible on the streets for weeks in the Yes badges that became ubiquitous during the campaign but it had its most potent and poignant expression in the multitude of young emigrants who came home to vote on Friday. Here, in a single gesture, was all the pathos of separation and longing; an expression of solidarity and belonging; and an enduring loyalty to the nation that had so signally failed them. The tweets from those returning to vote for marriage equality were at once inspiring and heartbreaking, testimony to our failure and their promise.”

The campaign was fascinating because it was, as Noel Whelan put it in the Times, “the most extensive civic society campaign ever seen in Irish politics”. In that sense, it reminded one of the campaign that propelled Obama to the White House in 2008. The people who masterminded it — Brian Sheehan and Gráinne Healey — have shown themselves to be consummate, canny strategists who crafted a campaign that was deliberately open and conversational rather than confrontational. (The chosen theme was: “I’m Voting Yes, Ask Me Why?”)

For me, it was especially cheering to see that a long, lonely and exceedingly courageous campaign by a fellow Joycean, Senator David Norris, had finally borne fruit. Writing in the Times today, he recalled the long and winding road “from criminal to equal citizen”:

I have been privileged in my life to follow a remarkable trajectory from being defined into criminality, challenging the criminal law, losing in the High Court and Supreme Courts, finally winning out by a margin of one vote in Europe, seeing the criminal law changed and then starting to build on this basis for human and civil rights for gay people.

Fifty years ago my first boyfriend said to me outside a Wimpy Bar on Burgh Quay: “I love you David but I can’t marry you.” I still remember that all these years later.

Go forward 10 years when, after a debate on decriminalisation, the late Mona Bean O’Cribben remarked vehemently to me: “This isn’t just about decriminalisation. You have a homosexual agenda. You won’t be satisfied until you have homosexual marriage.” I turned to her and said: “What a wonderful idea, thank you very much madam, have you got any other suggestions?”

But there is another, intangible but real, aspect to this vote. One of the strangely positive side-effects of the ‘Celtic Tiger’ years — when the Irish economy zoomed from sensible economic development to casino property-development insanity — was that my fellow citizens experienced for the first time what it was like to be seen as successful by the rest of the world. It was suddenly, as some of them observed at the time, “cool to be Irish”. All of which meant that the bust and the subsequent economic collapse had an even harsher psychic impact: it turned out that we had been kidding ourselves; that we had, as Frank McDonald (the great Irish Times journalist) used to say, “lost the run of ourselves”.

But one of the most unexpected byproducts of Friday’s vote is that we can be genuinely proud of ourselves, and for a reason infinitely better than fuelling a crazed property boom: for once, we did the right thing. Not a bad day’s work.