Magritte-on-Thames
Quote of the Day
“You get all the French fries the President can’t get to.”
- Al Gore, on being Vice President, 1994.
Musical alternative to the morning’s radio news
Paul Simon and Willie Nelson | Homeward Bound
Wonderful, truly wonderful.
Long Read of the Day
Peak Brain: The Metaphors of Neuroscience
Lovely disquisition on metaphor by Henry M. Cowles in the LA Review of Books. Sample:
The year I abandoned my Nikon, it popped up in a surprising place: cognitive science. That year, Joshua D. Greene published Moral Tribes, a work of philosophy that draws on neuroscience to explore why and how we make moral judgments. According to Greene, we make them using two different modes — not unlike a digital camera. “The human brain,” he writes, “is like a dual-mode camera with both automatic settings and a manual mode.” Sometimes, the analogy goes, you want to optimize your exposure time and shutter speed for specific light conditions — say, when faced with a big life decision. Other times, probably most of the time, tinkering with the settings is just too much of a hassle. You don’t want to build a pro-and-con list every time you order at a restaurant, just like you don’t want to adjust the aperture manually for each selfie you take.
Strange goings-on at Google
From this morning’s FT:
How did Google get itself into this mess? A company that is widely seen as having deeper capabilities in artificial intelligence than its main rivals, and which is under a microscope over how it wields its considerable economic and technological power, just had an acrimonious parting of the ways with its co-head of AI ethics.
Timnit Gebru left claiming she was fired over the suppression of an AI research paper. Jeff Dean, Google’s head of AI, said the paper wasn’t fit for publication and Dr Gebru resigned.
Except that she didn’t resign, it seems. She was fired — or, as they say in Silicon Valley without a hint of irony, “terminated”.
So let’s backtrack a bit. Dr Gebru was the co-leader of Google’s ethical AI team, and is a prominent figure in AI ethics research. When she worked for Microsoft Research she was co-author of the groundbreaking paper that showed facial recognition to be less accurate at identifying women and people of colour — a flaw which implies that its use can end up discriminating against them. She also co-founded the ‘Black in AI’ affinity group, and is a champion of diversity in the tech industry. The team she helped build at Google is believed to be one of the most diverse in AI (not that that’s saying much) and has produced critical work that often challenges mainstream AI practices.
Technology Review reports that
A series of tweets, leaked emails, and media articles showed that Gebru’s exit was the culmination of a conflict over another paper she coauthored. Jeff Dean, the head of Google AI, told colleagues in an internal email (which he has since put online) that the paper “didn’t meet our bar for publication” and that Gebru had said she would resign unless Google met a number of conditions, which it was unwilling to meet. Gebru tweeted that she had asked to negotiate “a last date” for her employment after she got back from vacation. She was cut off from her corporate email account before her return.
More detail is provided by an open letter authored by her supporters within Google and elsewhere. “Instead of being embraced by Google as an exceptionally talented and prolific contributor”, it says,
Dr. Gebru has faced defensiveness, racism, gaslighting, research censorship, and now a retaliatory firing. In an email to Dr. Gebru’s team on the evening of December 2, 2020, Google executives claimed that she had chosen to resign. This is false. In their direct correspondence with Dr. Gebru, these executives informed her that her termination was immediate, and pointed to an email she sent to a Google Brain diversity and inclusion mailing list as pretext.
In that email, it seems that Gebru pushed back against Google’s censorship of her (and her colleagues’) research, which focused on examining the environmental and ethical implications of large language models (LLMs), which are used in many Google products. Gebru and her team worked for months on a paper that was under review at an academic conference. In late November, five weeks after the article had been internally reviewed and approved for publication through standard processes, senior Google executives made the decision to censor it, without warning or cause.
Gebru asked them to explain this decision and to take accountability for it, and also to take responsibility for their “lacklustre” stance on discrimination and harassment in the workplace. Her supporters see her ‘termination’ as “an act of retaliation against Dr. Gebru, and it heralds danger for people working for ethical and just AI — especially Black people and People of Color — across Google.”
As an outsider it’s difficult to know what to make of this. MIT’s Technology Review obtained a copy of the article at the root of the matter — it has the glorious title of “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” — from one of its co-authors, Emily Bender, a professor of computational linguistics at the University of Washington. However, she asked the magazine not to publish it in full because it was an early draft.
Despite that restriction, Tech Review provides a pretty informative overview of the paper. On the basis of this summary, it’s hard to figure out what would lead senior Google executives to pull the plug on its publication.
Its aim, says Bender, was to survey the current landscape of research in natural language processing (NLP).
First of all, it takes a critical look at the environmental and financial costs of this kind of machine-learning research. It finds that the carbon footprint of the research has been ‘exploding’ since 2017 as models are fed more and more data from which to learn. This is interesting and important (I’ve even written about it myself) but there’s nothing special about the paper’s conclusions, except perhaps the implication that the costs of doing this stuff can only be borne by huge corporations while climate change hits poorer communities disproportionately.
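For a rough sense of the arithmetic behind such estimates, here is a back-of-envelope sketch; every figure in it (GPU count, power draw, training time, grid carbon intensity) is an illustrative assumption of mine, not a number from the paper.

```python
# Back-of-envelope estimate of the carbon footprint of training one large
# language model. All the figures below are illustrative assumptions, not
# numbers taken from the paper under discussion.

gpu_count = 512             # assumed number of accelerators used in training
gpu_power_kw = 0.3          # assumed average draw per accelerator, in kW
training_hours = 24 * 14    # assumed two weeks of continuous training
pue = 1.5                   # assumed data-centre overhead (power usage effectiveness)
grid_kgco2_per_kwh = 0.4    # assumed carbon intensity of the local electricity grid

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
emissions_tonnes = energy_kwh * grid_kgco2_per_kwh / 1000

print(f"Energy consumed: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_tonnes:,.1f} tonnes of CO2")
```

Even with these modest assumptions the total runs to tens of tonnes of CO2 for a single training run, which is why the paper’s point about who can afford to bear such costs matters.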
Secondly, the massive linguistic data sets required inevitably contain many varieties of bias. (We knew that.) But they also capture only past language usage and are unable to capture ways in which language is changing as society changes. So, “An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms. It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.” Well, yes, but…
And then there are the opportunity costs of prioritising NLP research as against other things with potentially greater societal benefit. “Though most AI researchers acknowledge that large language models don’t actually understand language and are merely excellent at manipulating it, Big Tech can make money from models that manipulate language more accurately, so it keeps investing in them.”
This research effort brings with it an opportunity cost, Gebru and her colleagues maintain. Not as much effort goes into working on AI models that might achieve understanding, or that achieve good results with smaller, more carefully curated data sets (and thus also use less energy).
Finally, there’s the risk that, because these new NLP models are so good at mimicking real human language, it’s easy to use them to fool people. There have been a few high-profile cases of this, such as the college student who churned out AI-generated self-help and productivity advice on a blog — which then went viral.
“The dangers are obvious: AI models could be used to generate misinformation about an election or the covid-19 pandemic, for instance. They can also go wrong inadvertently when used for machine translation. The researchers bring up an example: In 2017, Facebook mistranslated a Palestinian man’s post, which said “good morning” in Arabic, as “attack them” in Hebrew, leading to his arrest.”
All of this is interesting but — as far as I can see — not exactly new. And yet it seems that, as Professor Bender puts it, “someone at Google decided this was harmful to their interests”.
And my question is: why? Is it just that the paper provides a lot of data suggesting that a core technology now used in many of Google’s products is, well, bad for the world? If that was indeed the motivation for the original dispute and decision, then it suggests that Google’s self-image as a technocratic force for societal good is now too important to be undermined by high-quality research which indicates otherwise. In which case, there’s not that much difference between big tech companies and tobacco, oil and mining giants. They’re just corporations, doing what corporations always do.
Another, hopefully interesting, link
- One in Six Cadillac Dealers Opt to Close Instead of Selling Electric Cars. When told to get with the times or get out of the way, 150 out of 800 dealers reportedly took a cash buyout and walked away. Link They’ve figured out that there’s much less money in selling EVs, which require very little follow-up care and maintenance. Once you’ve sold someone an EV, you won’t see them that often: no more expensive oil changes or spark-plug replacements.
This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!