Tuesday 15 September, 2020

Quote of the Day

“Where there is much desire to learn, there of necessity will be much arguing, much writing, many opinions; for opinion in good men is but knowledge in the making.”

  • Milton, Areopagitica

Yeah, but that was before social media :-(


Musical alternative to the morning’s radio news

Here Comes The Sun – Gabriella Quevedo

Link


How GitLab is transforming the future of online work

GitLab is a company which makes an application that enables developers to collaborate while writing and launching software. But it has no physical headquarters. Instead, it has more than 1,300 employees spread across 67 countries and nearly every time zone, all of them working either from home or (in nonpandemic times) in co-working spaces. So in contrast with most companies — which are trying to figure out how to manage remote working — it’s been doing so successfully for years.

FastCompany has an interesting piece on what the rest of us might learn from GitLab’s experience.

Research shows that talking about non-work-related things with colleagues facilitates trust, helps break down silos among departments, and makes employees more productive. At GitLab, all of this has always had to happen remotely.

The company takes these relaxed interactions so seriously that it has a specified protocol in its employee handbook, which is publicly available online in its entirety. If printed, it would span more than 7,100 pages.

The section on “Informal Communication in an All-Remote Environment” meticulously details more than three dozen ways coworkers can virtually connect beyond the basic Zoom call, from Donut Bot chats (where members of the #donut_be_strangers Slack channel are randomly paired) to Juice Box talks (for family members of employees to get to know one another). There are also international pizza parties, virtual scavenger hunts, and a shared “Team DJ Zoom Room”.

But in addition to cultivating a vibrant culture of watercooler Zoom meetings over the past decade, GitLab has also tackled a real problem in remote-working organisations: how to effectively induct new recruits into such a distributed organisational culture. It’s done this by setting rules for email and Slack to ensure that far-flung employees, working on different schedules around the globe, are looped in to essential messages.

To make this possible, the company has designed a workplace that makes other companies’ approach to transparency look positively opaque. At GitLab, meetings, memos, notes, and more are available to everyone within the company—and, for the most part, to everyone outside of it, too. Part of this embrace of transparency comes from the open-source ethos upon which GitLab was founded. (GitLab offers a free “community” version of its product, as well as a proprietary enterprise one.) But it’s also crucial to keeping employees in lockstep, in terms of product development and corporate culture.

GitLab raised $268 million last September at a $2.75 billion valuation and is rumored to be preparing for a direct public offering. (Its biggest competitor is GitHub, which Microsoft acquired for $7.5 billion in 2018.) As the company’s profile rises, its idiosyncratic workplace culture is attracting attention.

This is interesting. Lots of organisations could learn lessons from this. Maybe GitLab should spin out a consultancy business.


Life in the Wake of COVID-19

Lovely, moving photo essay

In April, José Collantes contracted the new coronavirus and quarantined himself in a hotel set up by the government in Santiago, Chile, away from his wife and young daughter. The 36-year-old Peruvian migrant showed only mild symptoms, and returned home in May, only to discover his wife, Silvia Cano, had also fallen ill. Silvia’s condition worsened quickly, and she was taken to a nearby hospital with pneumonia. Although they spoke on the phone, José and their 5-year-old daughter Kehity never saw Silvia again—she passed away in June, at the age of 37, due to complications from COVID-19. José found that he’d suddenly become a single parent, and felt haunted by questions about why Silvia had died and he survived.


AI ethics groups are repeating one of society’s classic mistakes

It’s funny to see how the tech industry suddenly discovered ethics, a subject about which the industry’s companies were almost as ignorant as tobacco companies or soft-drinks manufacturers. Now, ‘ethics’ and ‘oversight’ boards are springing up everywhere, most of which are patently pre-emptive attempts to ward off legal regulation, and are largely engaged in ‘ethics theatre’ — much like the security-theatre that goes on in airports worldwide.

This Tech Review essay by Abhishek Gupta and Victoria Heath argues that even serious-minded ethics initiatives suffer from critical geographical blind-spots.

AI systems have repeatedly been shown to cause problems that disproportionately affect marginalized groups while benefiting a privileged few. The global AI ethics efforts under way today—of which there are dozens—aim to help everyone benefit from this technology, and to prevent it from causing harm. Generally speaking, they do this by creating guidelines and principles for developers, funders, and regulators to follow. They might, for example, recommend routine internal audits or require protections for users’ personally identifiable information.

We believe these groups are well-intentioned and are doing worthwhile work. The AI community should, indeed, agree on a set of international definitions and concepts for ethical AI. But without more geographic representation, they’ll produce a global vision for AI ethics that reflects the perspectives of people in only a few regions of the world, particularly North America and northwestern Europe.

“Those of us working in AI ethics will do more harm than good,” Gupta and Heath argue,

if we allow the field’s lack of geographic diversity to define our own efforts. If we’re not careful, we could wind up codifying AI’s historic biases into guidelines that warp the technology for generations to come. We must start to prioritize voices from low- and middle-income countries (especially those in the “Global South”) and those from historically marginalized communities.

Advances in technology have often benefited the West while exacerbating economic inequality, political oppression, and environmental destruction elsewhere. Including non-Western countries in AI ethics is the best way to avoid repeating this pattern.

So: fewer ethics advisory jobs for Western philosophers, and more for experts from the poorer parts of the world. This will be news to the guys in Silicon Valley.


This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!


Wednesday 1 July, 2020

How things change: George Osborne and David Cameron sucking up to Xi Jinping in 2015

From Politico’s wonderful daily briefing, commenting on the conundrum posed for Brexiteers by China’s brutal crackdown in Hong Kong.

A long time ago, in a galaxy far, far away: “We have cemented Britain’s position as China’s best partner in the West,” a triumphant George Osborne beamed as he rolled out the red carpet for Chinese officials back in 2015. “We’ve got billions of pounds of Chinese investment creating thousands of jobs in Britain, and we’ve also now got a relationship where we can discuss the difficult issues.” Uh huh. And look — here’s David Cameron glugging a pint of warm ale with Xi Jinping on that same trip. You have to wonder how bad these clips will look another five years from now.

Actually, they looked pretty odious at the time too.


Coronavirus: What does Covid-19 do to the brain?

Paul Mylrea is a friend and valued colleague. In the early stages of the pandemic he was struck down with Covid and became desperately ill. But he survived and is now recovering from the two strokes he suffered as the virus rampaged through his body. The fact that Covid had laid him low shocked me because he’s one of the fittest people I know. Among other things, he was a senior diving instructor, swam every morning in the river Cam, and went everywhere on his Brompton bike. I remember thinking that “If the virus has got Paul, then nobody’s safe”.

This piece by Fergus Walsh, the BBC’s medical correspondent, about Paul’s struggle with the illness is a heartwarming story of medical skill and the body’s capacity for renewal. It is also confirmation of what a deadly and multifaceted disease Covid-19 is.


Imagine if the National Transportation Safety Board investigated America’s response to the coronavirus pandemic.

Absolutely fascinating Atlantic essay by James Fallows.

Here’s the gist:

Consider a thought experiment: What if the NTSB were brought in to look at the Trump administration’s handling of the pandemic? What would its investigation conclude? I’ll jump to the answer before laying out the background: This was a journey straight into a mountainside, with countless missed opportunities to turn away. A system was in place to save lives and contain disaster. The people in charge of the system could not be bothered to avoid the doomed course.

James Fallows is both a gifted writer and a keen pilot. This long essay is well worth reading in full.


The short-term decline in FB ad spending

Lots of big firms (Unilever and Coca-Cola, to name just two) have been making statements about how they will not be buying ads on Facebook in response to the BlackLivesMatter campaign. I’m afraid my instinctive reaction was to see this as empty virtue-signalling, and to privately predict that it would have little impact on Facebook’s bottom line in the longer run.

The New York Times has a story today which might appear to refute this. “Advertiser Exodus Snowballs as Facebook Struggles to Ease Concerns” is the headline.

Yet even as Facebook has labored to stanch the ad exodus, it is having little effect. Executives at ad agencies said that more of their clients were weighing whether to join the boycott, which now numbers more than 300 advertisers and is expected to grow. Pressure on top advertisers is coming from politicians, supermodels, actors and even Prince Harry and his wife, Meghan, they said. Internally, some Facebook employees said they were also using the boycott to push for change.

“Other companies are seeing this moment, and are stepping up proactively,” said Jonathan Greenblatt, chief executive of the Anti-Defamation League, citing recent efforts from Reddit, YouTube and Twitch taking down posts and content that promote hate speech across their sites. “If they can do it, and all of Facebook’s advertisers are asking them to do it, it doesn’t seem that hard to do.”

The push from advertisers has led Facebook’s business to a precarious point. While the social network has struggled with issues such as election interference and privacy in recent years, its juggernaut digital ads business has always powered forward. The Silicon Valley company has never faced a public backlash of this magnitude from its advertisers, whose spending accounts for more than 98 percent of its annual $70.7 billion in revenue.

I don’t buy that stuff about a “precarious point”. And data from Socialbakers doesn’t confirm it, as this chart suggests:

Note the sharp fall around the time of the protests — and then the rapid recovery.

Big corporations engaging in virtue-signalling will make little difference to Facebook’s bottom line. The company probably makes most of its ad revenues from small and medium firms, for whom its targeted advertising system is perfect. And they aren’t going to stop advertising for ethical reasons.

The Economist agrees:

The damage to Facebook is likely to be small. Its $70bn ad business is built on 8m advertisers, most of them tiny companies with marketing budgets in the hundreds or thousands of dollars and often reliant on Facebook as an essential digital storefront. The 100 largest advertisers on the site account for less than 20% of total revenue, compared with 71% for the 100 largest advertisers on American network television. And so far only three of Facebook’s top 50 ad-buyers have joined the boycott.



The White House’s ten principles for AI

Must be a spoof, surely? Something apparently serious has emerged from the Trump administration: ten principles for government agencies to adhere to when proposing new AI regulations for the private sector. The move is the latest development of the American AI Initiative, launched via executive order by President Trump early last year to create a national strategy for AI. It is also part of an ongoing effort to maintain US leadership in the field.

Here are the ten principles, for what they’re worth:

  • Public trust in AI. The government must promote reliable, robust, and trustworthy AI applications.
  • Public participation. The public should have a chance to provide feedback in all stages of the rule-making process.
  • Scientific integrity and information quality. Policy decisions should be based on science.
  • Risk assessment and management. Agencies should decide which risks are and aren’t acceptable.
  • Benefits and costs. Agencies should weigh the societal impacts of all proposed regulations.
  • Flexibility. Any approach should be able to adapt to rapid changes and updates to AI applications.
  • Fairness and nondiscrimination. Agencies should make sure AI systems don’t discriminate illegally.
  • Disclosure and transparency. The public will trust AI only if it knows when and how it is being used.
  • Safety and security. Agencies should keep all data used by AI systems safe and secure.
  • Interagency coordination. Agencies should talk to one another to be consistent and predictable in AI-related policies.

Kranzberg’s Law

As a critic of many of the ways that digital technology is currently being exploited by both corporations and governments, while also being a fervent believer in the positive affordances of the technology, I often find myself stuck in unproductive discussions in which I’m accused of being an incurable “pessimist”. I’m not: better descriptions of me are that I’m a recovering Utopian or a “worried optimist”.

Part of the problem is that the public discourse about this stuff tends to be Manichean: it lurches between evangelical enthusiasm and dystopian gloom. And eventually the discussion winds up with a consensus that “it all depends on how the technology is used” — which often leads to Melvin Kranzberg’s Six Laws of Technology — and particularly his First Law, which says that “Technology is neither good nor bad; nor is it neutral.” By which he meant that,

“technology’s interaction with the social ecology is such that technical developments frequently have environmental, social, and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.”

Many of the current discussions revolve around various manifestations of AI, which means machine learning plus Big Data. At the moment image recognition is the topic du jour. The enthusiastic refrain usually involves citing dramatic instances of the technology’s potential for social good. A paradigmatic example is the collaboration between Google’s DeepMind subsidiary and Moorfields Eye Hospital to use machine learning to greatly improve the speed of analysis of anonymized retinal scans and automatically flag ones which warrant specialist investigation. This is a good example of how to use the technology to improve the quality and speed of an important healthcare service. For tech evangelists it is an irrefutable argument for the beneficence of the technology.

On the other hand, critics will often point to facial recognition as a powerful example of the perniciousness of machine-learning technology. One researcher has even likened it to plutonium. Criticisms tend to focus on its well-known weaknesses (false positives, racial or gender bias, for example), its hasty and ill-considered use by police forces and proprietors of shopping malls, the lack of effective legal regulation, and on its use by authoritarian or totalitarian regimes, particularly China.

Yet it is likely that even facial recognition has socially beneficial applications. One dramatic illustration is a project by an Indian child labour activist, Bhuwan Ribhu, who works for the Indian NGO Bachpan Bachao Andolan. He launched a pilot program, some fifteen months ago, to match a police database containing photos of all of India’s missing children with another comprising shots of all the minors living in the country’s child care institutions.

The results were remarkable. “We were able to match 10,561 missing children with those living in institutions,” he told CNN. “They are currently in the process of being reunited with their families.” Most of them were victims of trafficking, forced to work in the fields, in garment factories or in brothels, according to Ribhu.

This was made possible by facial recognition technology provided by New Delhi’s police. “There are over 300,000 missing children in India and over 100,000 living in institutions,” he explained. “We couldn’t possibly have matched them all manually.”
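
As an aside for the technically curious: matching at this scale is usually done by comparing face embeddings rather than raw images. The snippet below is a minimal sketch of the idea using the open-source face_recognition library; the folder names and threshold are invented for illustration, and the actual system used by the Delhi police is not public.

```python
# Minimal sketch: matching two photo collections by comparing face embeddings.
# Uses the open-source `face_recognition` library; folder names and the
# tolerance value are illustrative, not details of the Delhi police system.
import face_recognition
from pathlib import Path

def embed_folder(folder):
    """Return {filename: face embedding} for every image with a detectable face."""
    embeddings = {}
    for path in Path(folder).glob("*.jpg"):
        image = face_recognition.load_image_file(str(path))
        encodings = face_recognition.face_encodings(image)
        if encodings:                      # skip images where no face was found
            embeddings[path.name] = encodings[0]
    return embeddings

missing = embed_folder("missing_children_photos")       # hypothetical folder
in_care = embed_folder("childcare_institution_photos")  # hypothetical folder

# Flag institution photos whose embedding is close to a missing child's photo.
for name, target in missing.items():
    hits = face_recognition.compare_faces(list(in_care.values()), target, tolerance=0.6)
    matches = [fname for fname, hit in zip(in_care, hits) if hit]
    if matches:
        print(f"{name}: possible matches -> {matches}")
```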

This is clearly a good thing. But does it provide an overwhelming argument for India’s plan to construct one of the world’s largest facial-recognition systems with a unitary database accessible to police forces in 29 states and seven union territories?

I don’t think so. If one takes Kranzberg’s First Law seriously, then each proposed use of a powerful technology like this has to face serious scrutiny. The more important question to ask is the old Latin one: cui bono? Who benefits? And who benefits the most? And who loses? What possible unintended consequences could the deployment have? (Recognising that some will, by definition, be unforeseeable.) What are the business models of the corporations proposing to deploy it? And so on.

At the moment, however, all we mostly have is unasked questions, glib assurances and rash deployments.

Excavating AI

Fabulous essay by Kate Crawford and Trevor Paglen, uncovering the politics and biases embedded in the huge image databases that have been used for training machine-learning software. Here’s how it begins:

You open up a database of pictures used to train artificial intelligence systems. At first, things seem straightforward. You’re met with thousands of images: apples and oranges, birds, dogs, horses, mountains, clouds, houses, and street signs. But as you probe further into the dataset, people begin to appear: cheerleaders, scuba divers, welders, Boy Scouts, fire walkers, and flower girls. Things get strange: A photograph of a woman smiling in a bikini is labeled a “slattern, slut, slovenly woman, trollop.” A young man drinking beer is categorized as an “alcoholic, alky, dipsomaniac, boozer, lush, soaker, souse.” A child wearing sunglasses is classified as a “failure, loser, non-starter, unsuccessful person.” You’re looking at the “person” category in a dataset called ImageNet, one of the most widely used training sets for machine learning.

Something is wrong with this picture.

Where did these images come from? Why were the people in the photos labeled this way? What sorts of politics are at work when pictures are paired with labels, and what are the implications when they are used to train technical systems?

In short, how did we get here?

The authors begin with a deceptively simple question: What work do images do in AI systems? What are computers meant to recognize in an image and what is misrecognised or even completely invisible? They examine the methods used for introducing images into computer systems and look at “how taxonomies order the foundational concepts that will become intelligible to a computer system”. Then they turn to the question of labeling: “how do humans tell computers which words will relate to a given image? And what is at stake in the way AI systems use these labels to classify humans, including by race, gender, emotions, ability, sexuality, and personality?” And finally, they turn to examine the purposes that computer vision is meant to serve in our society and interrogate the judgments, choices, and consequences of providing computers with these capacities.
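
One concrete detail worth knowing: ImageNet’s categories are WordNet noun synsets, which is where labels like “slattern” and “dipsomaniac” come from. A quick, purely illustrative way to browse that underlying taxonomy, assuming NLTK and its WordNet corpus are available:

```python
# Illustrative only: ImageNet's category labels are WordNet noun synsets.
# This peeks at the hierarchy beneath "person" using NLTK's WordNet interface.
import nltk
nltk.download("wordnet", quiet=True)   # fetch the corpus if not already present
from nltk.corpus import wordnet as wn

person = wn.synset("person.n.01")
for child in person.hyponyms()[:5]:            # a few direct sub-categories
    print(child.name(), "->", [lemma.name() for lemma in child.lemmas()])
    for grandchild in child.hyponyms()[:3]:    # and a few of their children
        print("   ", grandchild.name(), "->", [lemma.name() for lemma in grandchild.lemmas()])
```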

This is a really insightful and sobering essay, based on extensive research.

Some time ago Crawford and Paglen created an experimental website — ImageNet Roulette — which enabled anyone to upload a photograph and see how ImageNet would classify and describe the person in it. The site is now offline, but the Guardian journalist Julia Carrie Wong wrote an interesting article about it recently, in the course of which she investigated how it would classify her from her Guardian byline photo. Here’s what she found.

Interesting, n’est-ce pas? Remember, this is the technology underpinning facial recognition.

Do read the whole thing.

Creative wealth and moral bankruptcy

Tomorrow’s Observer column, which for some reason is online today:

In the parallel moral universe known as the tech industry, the MIT Media Lab was Valhalla. “The engineers, designers, scientists and physicians who constitute the two dozen research groups housed there,” burbled the Atlantic in a profile of what it called the Idea Factory, “work in what may be the world’s most interesting, most hyper-interdisciplinary thinktank.” It has apparently been responsible for a host of groundbreaking innovations including “the technology behind the Kindle and Guitar Hero” (I am not making this up) and its researchers “end up pollinating other projects with insights and ideas, within a hive of serendipitous collaboration”.

That was written in 2011. In the last two weeks, we have discovered that some of this groundbreaking work was funded by Jeffrey Epstein, the financial wizard who took his own life rather than face prosecution for sex trafficking and other crimes. It should be pointed out that most of those researchers were entirely unaware of who was funding their work and some of them have been very upset by learning the truth. Their distress is intensified by the discovery that their ignorance was not accidental…

Read on

MORE: danah boyd’s acceptance speech (link in the post below) is worth reading in this context, because she worked for a time at the Media Lab.

Why the tech industry has to change

From danah boyd’s acceptance speech on being given the 2019 Barlow/Pioneer award:

“Move fast and break things” is an abomination if your goal is to create a healthy society. Taking short-cuts may be financially profitable in the short-term, but the cost to society is too great to be justified. In a healthy society, we accommodate differently abled people through accessibility standards, not because it’s financially prudent but because it’s the right thing to do. In a healthy society, we make certain that the vulnerable amongst us are not harassed into silence because that is not the value behind free speech. In a healthy society, we strategically design to increase social cohesion because binaries are machine logic not human logic.

The Great Reckoning is in front of us. How we respond to the calls for justice will shape the future of technology and society. We must hold accountable all who perpetuate, amplify, and enable hate, harm, and cruelty. But accountability without transformation is simply spectacle. We owe it to ourselves and to all of those who have been hurt to focus on the root of the problem. We also owe it to them to actively seek to not build certain technologies because the human cost is too great.

Google’s big move into ethics-theatre backfires.

This morning’s Observer column:

Given that the tech giants, which have been ethics-free zones from their foundations, owe their spectacular growth partly to the fact that they have, to date, been entirely untroubled either by legal regulation or scruples about exploiting taxation loopholes, this Damascene conversion is surely something to be welcomed, is it not? Ethics, after all, is concerned with the moral principles that affect how individuals make decisions and how they lead their lives.

That charitable thought is unlikely to survive even a cursory inspection of what is actually going on here. In an admirable dissection of the fourth of Google’s “principles” (“Be accountable to people”), for example, Prof David Watts reveals that, like almost all of these principles, it has the epistemological status of pocket lint or those exhortations to be kind to others one finds on evangelical websites. Does it mean accountable to “people” in general? Or just to Google’s people? Or to someone else’s people (like an independent regulator)? Answer comes there none from the code.

Warming to his task, Prof Watts continues: “If Google’s AI algorithms mistakenly conclude I am a terrorist and then pass this information on to national security agencies who use the information to arrest me, hold me incommunicado and interrogate me, will Google be accountable for its negligence or for contributing to my false imprisonment? How will it be accountable? If I am unhappy with Google’s version of accountability, to whom do I appeal for justice?”

Quite so. But then Google goes and doubles down on absurdity with its prestigious “advisory council” that “will consider some of Google’s most complex challenges that arise under our AI Principles, such as facial recognition and fairness in machine learning, providing diverse perspectives to inform our work”…

Read on

After I’d written the column, Google announced that it was dissolving its ethics advisory council. So we had to add this:

Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity.

That still leaves the cynical absurdity of Google’s AI ‘principles’ to be addressed, though.

Most Facebook users are entirely unmoved by the Cambridge Analytica scandal

Sad (and predictable) but true — from Reuters:

NEW YORK/SAN FRANCISCO (Reuters) – Most of Facebook’s U.S. users have remained loyal to the social network despite revelations that a political consultancy collected information about millions of accounts without owners’ permission, a Reuters/Ipsos poll released on Sunday showed.

The Reuters/Ipsos poll adds to other indications that Facebook has so far suffered no ill effects from the episode, other than a public relations headache.

The national online poll, conducted April 26-30, found that about half of Facebook’s American users said they had not recently changed the amount that they used the site, and another quarter said they were using it more.

The remaining quarter said that they were using it less recently, had stopped using it or deleted their account.

That means that the people using Facebook less were roughly balanced by those using it more, with no clear net loss or gain in use.

In a way, all this does is confirm that the vast majority of our fellow-citizens are deaf to ethical considerations. We’ve seen this for the best part of a century in the UK, where most of the population reads (and pays for) ethically dubious and politically biased tabloid newspapers.

Sweeping the Net for… [take your pick]

From Ron Deibert:

The LGBTQ news website, “Gay Today,” is blocked in Bahrain; the website for Greenpeace International is blocked in the UAE; a matrimonial dating website is censored in Afghanistan; all of the World Health Organization’s website, including sub-pages about HIV/AIDS information, is blocked in Kuwait; an entire category of websites labeled “Sex Education,” are all censored in Sudan; in Yemen, an armed faction, the Houthis, orders the country’s main ISP to block regional and news websites.

What’s the common denominator linking these examples of Internet censorship? All of them were undertaken using technology provided by the Canadian company, Netsweeper, Inc.

In a new Citizen Lab report published today, entitled Planet Netsweeper, we map the global proliferation of Netsweeper’s Internet filtering technology to 30 countries. We then focus our analysis on 10 countries with significant human rights, insecurity, or public policy issues in which Netsweeper systems are deployed on large consumer ISPs: Afghanistan, Bahrain, India, Kuwait, Pakistan, Qatar, Somalia, Sudan, UAE, and Yemen. The research was done using a combination of network measurement and in-country testing methods. One method involved scanning every one of the billions of IP addresses on the Internet to search for signatures we have developed for Netsweeper installations (think of it like an x-ray of the Internet).

National-level Internet censorship is a growing norm worldwide. It is also a big business opportunity for companies like Netsweeper. Netsweeper’s Internet filtering service works by dynamically categorizing Internet content, and then providing customers with options to choose categories they wish to block (e.g., “Matrimonial” in Afghanistan and “Sex Education” in Sudan). Customers can also create their own custom lists or add websites to categories of their own choosing.

Netsweeper markets its services to a wide range of clients, from institutions like libraries to large ISPs that control national-level Internet connectivity. Our report highlights problems with the latter, and specifically the problems that arise when Internet filtering services are sold to ISPs in authoritarian regimes, or countries facing insecurity, conflict, human rights abuses, or corruption. In these cases, Netsweeper’s services can easily be abused to help facilitate draconian controls on the public sphere by stifling access to information and freedom of expression.

While there are a few categories that some might consider non-controversial—e.g., filtering of pornography and spam—there are others that definitely are not. For example, Netsweeper offers a filtering category called “Alternative Lifestyles,” in which it appears mostly legitimate LGBTQ content is targeted for convenient blocking. In our testing, we found this category was selected in the United Arab Emirates and was preventing Internet users from accessing the websites of the Gay & Lesbian Alliance Against Defamation (http://www.glaad.org) and the International Foundation for Gender Education (http://www.ifge.org), among many others. This kind of censorship, facilitated by Netsweeper technology, is part of a larger pattern of systemic discrimination, violence, and other human rights abuses against LGBTQ individuals in many parts of the world.

According to the United Nations Guiding Principles on Business and Human Rights, all companies have responsibilities to evaluate and take measures to mitigate the negative human rights impacts of their services on an ongoing basis. Despite many years of reporting and numerous questions from journalists and academics, Netsweeper still fails to take this obligation seriously.
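
The “x-ray of the Internet” method the report describes — probing addresses and looking for a tell-tale fingerprint in the response — is conceptually simple, even if doing it across billions of addresses requires purpose-built tooling such as ZMap. Here is a toy sketch of the idea; the signature string and port are placeholders, not Citizen Lab’s actual Netsweeper fingerprints.

```python
# Toy sketch of signature scanning: probe addresses and look for a distinctive
# string in the HTTP response. The SIGNATURE and PORT values are placeholders,
# not Citizen Lab's real Netsweeper fingerprints.
import ipaddress
import requests

SIGNATURE = "webadmin/deny"   # hypothetical marker of a filtering appliance
PORT = 8080                   # hypothetical management port

def looks_like_installation(ip, timeout=2):
    """Return True if the address responds with the tell-tale signature."""
    try:
        resp = requests.get(f"http://{ip}:{PORT}/", timeout=timeout)
        return SIGNATURE in resp.text
    except requests.RequestException:
        return False

# A real scan covers billions of addresses with specialised tools (e.g. ZMap);
# here we just sweep a tiny documentation-range block for illustration.
for ip in ipaddress.ip_network("198.51.100.0/30").hosts():
    if looks_like_installation(ip):
        print(f"Possible installation at {ip}")
```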