Wednesday 24 June, 2020

New customers fill seats at Barcelona opera house concert

To mark the re-opening of Barcelona’s Gran Teatre del Liceu opera house, the UceLi Quartet played a livestreamed performance of Puccini’s I Crisantemi (Chrysanthemums).

Who (or what) was in the audience?

Answer


Can you judge a book by its (back) cover?

Well, even if you can, it makes cover-art designers (justifiably) cross.

Waterstones, the (excellent) bookselling chain, has offered its apologies to book designers after some newly reopened branches began displaying books back to front so browsers could read the blurb without picking them up.

It was understandable but slightly “heartbreaking”, said designer Anna Morrison, who mainly designs covers for literary fiction: she could see why it was happening, but it was still “a little sad”.

“There is a real art to a book cover. It can be a real labour of love and it is a bit disappointing to think our work is being turned round.”

She’s right. One of the joys of going into a bookshop is the blaze of colour and artwork on book covers that confronts you.

Link


Facebook faces trust crisis as ad boycott grows

It’s got the trust crisis, for sure. But so what?

This from Axios:

After a handful of outdoor companies like North Face, REI and Patagonia said they would stop advertising on Facebook and Instagram last week, several other advertisers have joined the movement, including Ben & Jerry’s, Eileen Fisher, Eddie Bauer, Magnolia Pictures, Upwork, HigherRing, Dashlane, TalkSpace and Arc’teryx.

Heavyweights in the ad industry have also begun pressing marketers to pull their dollars.

On Tuesday, Marc Pritchard, chief brand officer at Procter & Gamble, one of the largest advertisers in the country, threatened to pull spending if platforms didn’t take “appropriate systemic action” to address hate speech.

In an email to clients obtained by the Wall Street Journal last Friday, 360i, a digital-ad agency owned by global ad holding group Dentsu Group Inc., urged its clients to support the ad boycott being advocated by civil rights groups.

I’m sorry to say this, but it looks to me just like virtue-signalling. Just like all the sudden corporate support for “our brilliant NHS” when the Coronavirus panic started in the UK. Facebook’s targeted advertising system is just too useful to companies to be dropped.


Wrongfully Accused by an Algorithm

This seems to be the first case of its kind, but it’s the canary in the mine as far as those of us who regard facial recognition technology as toxic are concerned.

On a Thursday afternoon in January, Robert Julian-Borchak Williams was in his office at an automotive supply company when he got a call from the Detroit Police Department telling him to come to the station to be arrested. He thought at first that it was a prank.

An hour later, when he pulled into his driveway in a quiet subdivision in Farmington Hills, Mich., a police car pulled up behind, blocking him in. Two officers got out and handcuffed Mr. Williams on his front lawn, in front of his wife and two young daughters, who were distraught. The police wouldn’t say why he was being arrested, only showing him a piece of paper with his photo and the words “felony warrant” and “larceny.”

His wife, Melissa, asked where he was being taken. “Google it,” she recalls an officer replying.

The police drove Mr. Williams to a detention center. He had his mug shot, fingerprints and DNA taken, and was held overnight. Around noon on Friday, two detectives took him to an interrogation room and placed three pieces of paper on the table, face down.

“When’s the last time you went to a Shinola store?” one of the detectives asked, in Mr. Williams’s recollection. Shinola is an upscale boutique that sells watches, bicycles and leather goods in the trendy Midtown neighborhood of Detroit. Mr. Williams said he and his wife had checked it out when the store first opened in 2014.

The detective turned over the first piece of paper. It was a still image from a surveillance video, showing a heavyset man, dressed in black and wearing a red St. Louis Cardinals cap, standing in front of a watch display. Five timepieces, worth $3,800, were shoplifted.

“Is this you?” asked the detective.

The second piece of paper was a close-up. The photo was blurry, but it was clearly not Mr. Williams. He picked up the image and held it next to his face.

“No, this is not me,” Mr. Williams said. “You think all black men look alike?”

Mr. Williams knew that he had not committed the crime in question. What he could not have known, as he sat in the interrogation room, is that his case may be the first known account of an American being wrongfully arrested based on a flawed match from a facial recognition algorithm, according to experts on technology and the law.

Mr Williams had a cast-iron alibi, but the Detroit police couldn’t be bothered to check.

He has since figured out what he was doing the evening the shoplifting occurred. He was driving home from work, and had posted a video to his private Instagram because a song he loved came on — 1983’s “We Are One,” by Maze and Frankie Beverly. The lyrics go:

I can’t understand
Why we treat each other in this way
Taking up time
With the silly silly games we play

Imagine a world where this stuff is everywhere, where you’re always in a police line-up.


The history of inquiries into race riots

A sobering (and depressing) piece by the Harvard historian Jill Lepore in the New Yorker.

TL;DR? (In case you’re busy, here’s the gist.)

In a 1977 study, “Commission Politics: The Processing of Racial Crisis in America,” Michael Lipsky and David J. Olson reported that, between 1917 and 1943, at least twenty-one commissions were appointed to investigate race riots, and, however sincerely their members might have been interested in structural change, none of the commissions led to any. The point of a race-riot commission, Lipsky and Olson argue, is for the government that appoints it to appear to be doing something, while actually doing nothing.

It’s the old, old story. What’s the betting the same thing will happen with Boris Johnson’s “cross-government inquiry into all aspects of racial inequality in the UK”?

Lepore’s is a fine piece, well worth reading in full. Thanks to David Vincent for alerting me to it.


Segway, the most hyped invention since the Macintosh, ends production

Very good report on what once looked like a great idea, but one that never caught on. Segways were very useful for TV cameramen and camerawomen covering golf tournaments, though.

My main regret is that I never managed to try one.


Quarantine diary — Day 96

Link


This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!


Monday 17 February, 2020

Quote of the Day

In this election there are two sides. One side believes in the rule of law, the other doesn’t. Everything else, to be settled later, once the rule-of-law is re-established.

  • Dave Winer

_____________________________ 

My review of Andrew Marantz’s new book — Antisocial

In today’s Guardian. It’s a sobering read.

There has always been a dark undercurrent of white supremacism in some sectors of American culture. It was kept from public view for decades by the editorial gatekeepers of the old media ecosystem. But once the internet arrived, a sophisticated online culture of conspiracy theorists, racists and other malign discontents thrived in cyberspace, staying below the radar until a fully paid-up conspiracy theorist won the Republican nomination. Trump’s candidacy and campaign had the effect of “mainstreaming” that which had previously been largely hidden from view. At which point, the innocent public began to see and experience what Marantz has closely observed, namely the remarkable capabilities of extremist “edgelords” to weaponise YouTube, Twitter and Facebook for destructive purposes.

One of the most depressing things about 2016 was the apparent inability of American journalism to deal with this pollution of the public sphere. In part, this was because journalists were crippled by their professional standards. It’s not always possible to be even-handed and honest. “The plain fact,” writes Marantz at one point, “was that the alt-right was a racist movement full of creeps and liars. If a newspaper’s house style didn’t allow its reporters to say so, then the house style was preventing its reporters from telling the truth.” Trump’s mastery of Twitter led the news agenda every day, faithfully followed by mainstream media, like beagles following a live trail. And his use of the “fake news” metaphor was masterly: a reminder of why, as Marantz points out, Lügenpresse – “lying press” – was also a favourite epithet of Joseph Goebbels.


Frank Ramsey

Frank Ramsey was a legend in Cambridge as one of the brightest young men of his time. He died tragically young (he was 26) in 1930, from an infection acquired from swimming in the river Cam. Now there’s a new biography of him by Cheryl Misak. Here’s part of her blurb about him:

The economist John Maynard Keynes identified Ramsey as a major talent when he was a mathematics student at Cambridge in the early 1920s. During his undergraduate days, Ramsey demolished Keynes’ theory of probability and C.H. Douglas’s social credit theory; made a valiant attempt at repairing Bertrand Russell’s Principia Mathematica; and translated Ludwig Wittgenstein’s Tractatus Logico-Philosophicus, writing a critical notice of it that still stands as one of the most challenging commentaries on that difficult and influential book.

Keynes, in an impressive show of administrative skill and sleight of hand, made the 21-year-old Ramsey a fellow of King’s College at a time when only someone who had studied there could be a fellow. (Ramsey had done his degree at Trinity).

Ramsey validated Keynes’ judgment. In 1926 he was the first to figure out how to define probability subjectively, and he invented the expected utility theory that underpins much of contemporary economics.

I’d never heard of Ramsey until I came on Keynes’s essay on him in his wonderful collection, Essays in Biography, published in 1933. (One of my favourite books, btw.) Given that Keynes himself was ferociously bright, the fact that he had such a high opinion of Ramsey was what made me sit up. Here’s an extract that conveys that:

Seeing all of Frank Ramsey’s logical essays published together, we can perceive quite clearly the direction which his mind was taking. It is a remarkable example of how the young can take up the story at the point to which the previous generation had brought it a little out of breath, and then proceed forward without taking more than about a week thoroughly to digest everything which had been done up to date, and to understand with apparent ease stuff which to anyone even 10 years older seemed hopelessly difficult. One almost has to believe that Ramsey in his nursery near Magdalene1 was unconsciously absorbing from 1903 to 1914 everything which anyone may have been saying or writing from Trinity.

(Among the people in Trinity College at the time were Bertrand Russell, A.N. Whitehead and Ludwig Wittgenstein.)


The hacking of Jeff Bezos’s phone

Interesting (but — according to other forensic experts — incomplete) technical report into how the Amazon boss’s smartphone was hacked, presumably by someone working for the Saudi Crown Prince.

_____________________________________________ 

Where people have faith in their elections

The U.S. public’s confidence in elections is among the lowest of any wealthy democracy, according to a recently published Gallup poll. It found that a mere 40 percent of Americans have confidence in the honesty of their elections. As low as that figure is, distrust of elections is nothing new for the U.S. public.

The research found that a majority of Americans have had no confidence in the honesty of elections every year since 2012, with the share trusting the process at the ballot box sinking as low as 30 percent during the 2016 presidential campaign. Gallup stated that its 2019 data came at a time when eight U.S. intelligence agencies confirmed allegations of foreign interference in the 2016 presidential election and identified attempts to engage in similar activities during the midterms in 2018.

This chart shows how the U.S. compares to other developed OECD nations, with the highest confidence scores recorded across Northern Europe: Finland, Norway and Sweden are best-ranked.

Source

_________________________________________ 

David Spiegelhalter: Should We Trust Algorithms?

As the philosopher Onora O’Neill has said (O’Neill, 2013), organizations should not try to be trusted; rather they should aim to demonstrate trustworthiness, which requires honesty, competence, and reliability. This simple but powerful idea has been very influential: the revised Code of Practice for official statistics in the United Kingdom puts Trustworthiness as its first “pillar” (UK Statistics Authority, 2018).

It seems reasonable that, when confronted by an algorithm, we should expect trustworthy claims both:

  • about the system — what the developers say it can do, and how it has been evaluated, and

  • by the system — what it says about a specific case.
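
To make the distinction concrete, here is a toy sketch (my illustration, not from Spiegelhalter’s article, on invented data): the held-out evaluation reported first is a claim about the system, while the calibrated probability for an individual case is a claim by it.

```python
# Toy illustration of Spiegelhalter's distinction, on invented data:
# a claim ABOUT the system (how it performed on a held-out evaluation)
# versus a claim BY the system (a calibrated probability for one case).
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(4000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 1.0, 4000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Calibration helps the per-case probabilities mean what they say.
model = CalibratedClassifierCV(LogisticRegression()).fit(X_train, y_train)

# Claim ABOUT the system: an evaluation stated before anyone relies on it.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Claim BY the system: an honest probability for one specific case,
# rather than a bare yes/no verdict.
p = model.predict_proba(X_test[:1])[0, 1]
print(f"this case: P(positive) = {p:.2f}")
```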

Terrific article


  1. Ramsey’s father was Master of Magdalene. 

Biased machines may be easier to fix than biased humans

This morning’s Observer column:

One of the things that really annoys AI researchers is how supposedly “intelligent” machines are judged by much higher standards than are humans. Take self-driving cars, they say. So far they’ve driven millions of miles with very few accidents, a tiny number of them fatal. Yet whenever an autonomous vehicle kills someone there’s a huge hoo-ha, while every year in the US nearly 40,000 people die in crashes involving conventional vehicles.

Likewise, the AI evangelists complain, everybody and his dog (this columnist included) is up in arms about algorithmic bias: the way in which automated decision-making systems embody the racial, gender and other prejudices implicit in the data sets on which they were trained. And yet society is apparently content to endure the astonishing irrationality and capriciousness of much human decision-making.

If you are a prisoner applying for parole in some jurisdictions, for example, you had better hope that the (human) judge has just eaten when your case comes up…

Read on

Bias in machine learning

Nice example from the AI researcher Daphne Koller:

Another notion of bias, one that is highly relevant to my work, are cases in which an algorithm is latching onto something that is meaningless and could potentially give you very poor results. For example, imagine that you’re trying to predict fractures from X-ray images in data from multiple hospitals. If you’re not careful, the algorithm will learn to recognize which hospital generated the image. Some X-ray machines have different characteristics in the image they produce than other machines, and some hospitals have a much larger percentage of fractures than others. And so, you could actually learn to predict fractures pretty well on the data set that you were given simply by recognizing which hospital did the scan, without actually ever looking at the bone. The algorithm is doing something that appears to be good but is actually doing it for the wrong reasons. The causes are the same in the sense that these are all about how the algorithm latches onto things that it shouldn’t latch onto in making its prediction.

To recognize and address these situations, you have to make sure that you test the algorithm in a regime that is similar to how it will be used in the real world. So, if your machine-learning algorithm is one that is trained on the data from a given set of hospitals, and you will only use it in those same set of hospitals, then latching onto which hospital did the scan could well be a reasonable approach. It’s effectively letting the algorithm incorporate prior knowledge about the patient population in different hospitals. The problem really arises if you’re going to use that algorithm in the context of another hospital that wasn’t in your data set to begin with. Then, you’re asking the algorithm to use these biases that it learned on the hospitals that it trained on, on a hospital where the biases might be completely wrong.
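
Koller’s point is easy to reproduce. Below is a minimal sketch on entirely synthetic data (every number invented for illustration): each “image” is reduced to two features, a weak bone signal and a scanner artefact that identifies the hospital. A classifier trained on two hospitals with very different fracture rates scores well on those hospitals, then stumbles on a third it has never seen.

```python
# Shortcut learning on invented data: the model can read the scanner
# "fingerprint" (which hospital took the image) instead of the bone.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_hospital(n, fracture_rate, scanner_signature):
    y = (rng.random(n) < fracture_rate).astype(int)          # fracture labels
    bone = y + rng.normal(0.0, 2.0, n)                       # weak, noisy bone signal
    artefact = scanner_signature + rng.normal(0.0, 0.1, n)   # hospital fingerprint
    return np.column_stack([bone, artefact]), y

# Training data: fracture rate is confounded with the scanner signature.
Xa, ya = make_hospital(5000, fracture_rate=0.8, scanner_signature=1.0)  # hospital A
Xb, yb = make_hospital(5000, fracture_rate=0.2, scanner_signature=0.0)  # hospital B
clf = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Tested on the same two hospitals, the model looks good.
Xa2, ya2 = make_hospital(2000, 0.8, 1.0)
Xb2, yb2 = make_hospital(2000, 0.2, 0.0)
print("seen hospitals:", clf.score(np.vstack([Xa2, Xb2]), np.concatenate([ya2, yb2])))

# On a new hospital C (50% fractures, scanner like A's) the shortcut
# misfires and accuracy falls towards chance.
Xc, yc = make_hospital(2000, 0.5, 1.0)
print("new hospital:", clf.score(Xc, yc))
```

Her remedy, testing in a regime similar to how the system will actually be used, corresponds here to holding out an entire hospital rather than a random subset of rows.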