Facial recognition firms should take a look in the mirror

This morning’s Observer column:

Last week, the UK Information Commissioner’s Office (ICO) slapped a £7.5m fine on a smallish tech company called Clearview AI for “using images of people in the UK, and elsewhere, that were collected from the web and social media to create a global online database that could be used for facial recognition”. The ICO also issued an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet and to delete the data of UK residents from its systems.

Since Clearview AI is not exactly a household name, some background might be helpful. It’s a US outfit that has “scraped” (ie digitally collected) more than 20bn images of people’s faces from publicly available information on the internet and social media platforms all over the world to create an online database. The company uses this database to provide a service…

Read on

Thursday 23 July, 2020

Portrait of one of the latest arrivals in our garden. Shot — appropriately enough — using the ‘portrait’ setting on the iPhone 11, which kept telling me to “move further away”! As Umberto Eco wisely observed all those years ago, the Mac is a Catholic machine: only one true way to salvation.

Click on the image to see a larger version.


Quote of the Day

“I don’t want to be a hero. I want to teach.”

  • Claudia, a music teacher at a school in Massachusetts, who doesn’t want teachers to be put at risk then be lionized, as has been the case for healthcare workers.

Fawkes News: Image “Cloaking” for Personal Privacy

This is lovely. And it’s a student project too.

How do we protect ourselves against unauthorized third parties building facial recognition models to recognize us wherever we may go? Regulations can and will help restrict usage of machine learning by public companies, but will have negligible impact on private organizations, individuals, or even other nation states with similar goals.

The SAND Lab at University of Chicago has developed Fawkes, an algorithm and software tool (running locally on your computer) that gives individuals the ability to limit how their own images can be used to track them. At a high level, Fawkes takes your personal images, and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone tries to identify you using an unaltered image of you (e.g. a photo taken in public), they will fail.

Fawkes has been tested extensively and proven effective in a variety of environments, and shows 100% effectiveness against state-of-the-art facial recognition models (Microsoft Azure Face API, Amazon Rekognition, and Face++).
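To make the idea concrete, here is a minimal NumPy sketch of the general principle (a small, bounded pixel-level perturbation), not Fawkes’ actual algorithm: the function name and the perturbation budget are illustrative assumptions, and a real system would optimise the perturbation to shift the image’s identity features rather than accept an arbitrary one.

```python
import numpy as np

def cloak(image, perturbation, budget=0.03):
    """Apply a tiny, bounded pixel-level change to an image.

    In a real system the perturbation would be optimised to shift the
    image's identity features toward a different person; here it is
    just any array of the same shape.
    """
    delta = np.clip(perturbation, -budget, budget)   # keep the change imperceptible
    return np.clip(image + delta, 0.0, 1.0)          # stay within valid pixel range

# toy usage: a 2x2 greyscale "image" with values in [0, 1]
img = np.array([[0.5, 0.6], [0.7, 0.8]])
noise = np.array([[0.5, -0.5], [0.01, -0.01]])
cloaked = cloak(img, noise)
print(np.max(np.abs(cloaked - img)))  # no pixel moves by more than the budget
```

The point of the budget is exactly the property the quote describes: the cloaked image is visually indistinguishable from the original, but a model trained on many such images learns distorted identity features.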

They’ve put together a terrific 12-minute video which explains the system. Well worth watching.

Link

Given that facial-recognition is such a toxic technology, it’s great to see its weaknesses being turned against it.

Thanks to Cory Doctorow for alerting me to it.


Where will everyone go?

I hate to say this, but compared with the upcoming climate crisis, the Coronavirus disruption is small beer. Nearly a decade ago the UK Cabinet Office did a simulation study to try to figure out the effects of climate change on migration. The results, from my hazy memory, ran something like this. The model divided the global population into three groups: those who live in regions that will be relatively unaffected; those who are too poor to move, no matter how hot or inhospitable their locations become; and those who are able to move when things start to become intolerable where they are. What struck me most about the scenarios was that most migrants will head for cities, and that most of their target destinations are coastal cities which are themselves at risk from sea-level rise.

But that was a crude modelling exercise. Now ProPublica and The New York Times Magazine, with support from the Pulitzer Center, have for the first time modeled how climate refugees might move across international borders. Today they publish their findings.

It’s a long, riveting and in some ways alarming read. And a story that’s beautifully told, making great use of the Web and photography as well as supercomputer modelling.

The modellers simulate five scenarios for climate-driven population movement in Central America and Mexico.

1 An optimistic/reference scenario, in which climate impacts are rapidly reduced on a global scale and there is regional convergence toward higher levels of development across Central America and Mexico.

2 A pessimistic scenario, in which climate change impacts are on the high end of current plausible scenarios and significant challenges to socioeconomic development exist throughout the region, exacerbating the gap between Central America and the United States.

3 A more climate-friendly scenario, which pairs a less-extreme climate outcome with the same challenging socioeconomic future as the pessimistic scenario.

4 A more development-friendly scenario, which follows the pessimistic climate future but assumes a more inclusive development pathway in which regional economic growth occurs quickly.

5 A moderate scenario, in which socioeconomic development occurs rapidly throughout the region accompanied by a moderate level of climate change.

It’s an extraordinary piece of reporting and investigation. Read it if you can.


Escalation by Tweet

The department of War Studies at King’s College London has just produced an interesting report on the risks of conducting international diplomacy via Twitter, especially during crises.

The Executive Summary reads:

Social media has quickly become part of the geopolitical landscape, and international leaders and officials are increasingly taking to Twitter during crises. For US decision-makers, however, Twitter presents a bit of a paradox: on the one hand, tweets from government officials may help shape the American public narrative and provide greater insights into US decision-making to reduce misperception by foreign actors. On the other hand, tweets may increase misperception and sow confusion during crises, creating escalation incentives for an adversary.

To reconcile this paradox, we examine the use of Twitter by international leaders during crises in recent years, some of which involved nuclear-armed states. In so doing, we explore the changing nature of escalation, which now resembles a complex web more than a ladder, and examine specific escalation pathways involving social media.

Based on this analysis, we find that social media has the potential to be a disruptive technology and exacerbate tensions during crises. To reduce the risk of tweets contributing to escalation in a crisis, we recommend the US Department of Defense:

• lead an interagency effort to develop best practices on the use of social media during crises;

• encourage leaders and officials to refrain from tweeting during crises and instead rely on more traditional means of communication, such as press releases and official statements;

• explore how to build public resilience to disinformation campaigns and provocations via social media during crises, as the American public is asymmetrically vulnerable to these attacks; and

• improve understanding of how various international actors use social media.

Twitter, as a company, and alliances such as NATO, also have a role to play in limiting the negative impact of Twitter during crises. If these findings could be summarised in 280 characters or less, it would be: ‘To manage escalation during crises, stop tweeting.’

One of my favourite accounts on Twitter is @RealPressSecBot — a bot that takes every one of Trump’s tweets and immediately reformats them as an official White House press statement. Which, in effect, is what they are.


This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!


Thursday 25 June, 2020

New customers fill seats at Barcelona opera house concert

To mark the re-opening of Barcelona’s Gran Teatre del Liceu opera house, the UceLi Quartet played a livestreamed performance of Puccini’s I Crisantemi (Chrysanthemums).

Who (or what) was in the audience?

Answer


Can you judge a book by its (back) cover?

Well, even if you can, it makes cover-art designers (justifiably) cross.

Waterstones, the (excellent) bookselling chain, has offered its apologies to book designers after some newly reopened branches began displaying books back to front so browsers could read the blurb without picking them up.

It was understandable but slightly “heartbreaking”, said designer Anna Morrison, who mainly designs covers for literary fiction; she could see why it was happening, but it was still “a little sad”.

“There is a real art to a book cover. It can be a real labour of love and it is a bit disappointing to think our work is being turned round.”

She’s right. One of the joys of going into a bookshop is the blaze of colour and artwork on book covers that confronts you.

Link


Facebook faces trust crisis as ad boycott grows

It’s got the trust crisis, for sure. But so what?

This from Axios:

After a handful of outdoor companies like North Face, REI and Patagonia said they would stop advertising on Facebook and Instagram last week, several other advertisers have joined the movement, including Ben & Jerry’s, Eileen Fisher, Eddie Bauer, Magnolia Pictures, Upwork, HigherRing, Dashlane, TalkSpace and Arc’teryx.

Heavyweights in the ad industry have also begun pressing marketers to pull their dollars.

On Tuesday, Marc Pritchard, chief brand officer at Procter & Gamble, one of the largest advertisers in the country, threatened to pull spending if platforms didn’t take “appropriate systemic action” to address hate speech.

In an email to clients obtained by the Wall Street Journal last Friday, 360i, a digital-ad agency owned by global ad holding group Dentsu Group Inc., urged its clients to support the ad boycott being advocated by civil rights groups.

I’m sorry to say this, but it looks to me just like virtue-signalling. Just like all the sudden corporate support for “our brilliant NHS” when the Coronavirus panic started in the UK. Facebook’s targeted advertising system is just too useful to companies to be dropped.


Wrongfully Accused by an Algorithm

This seems to be the first case of its kind, but it’s the canary in the mine for those of us who regard facial recognition technology as toxic.

On a Thursday afternoon in January, Robert Julian-Borchak Williams was in his office at an automotive supply company when he got a call from the Detroit Police Department telling him to come to the station to be arrested. He thought at first that it was a prank.

An hour later, when he pulled into his driveway in a quiet subdivision in Farmington Hills, Mich., a police car pulled up behind, blocking him in. Two officers got out and handcuffed Mr. Williams on his front lawn, in front of his wife and two young daughters, who were distraught. The police wouldn’t say why he was being arrested, only showing him a piece of paper with his photo and the words “felony warrant” and “larceny.”

His wife, Melissa, asked where he was being taken. “Google it,” she recalls an officer replying.

The police drove Mr. Williams to a detention center. He had his mug shot, fingerprints and DNA taken, and was held overnight. Around noon on Friday, two detectives took him to an interrogation room and placed three pieces of paper on the table, face down.

“When’s the last time you went to a Shinola store?” one of the detectives asked, in Mr. Williams’s recollection. Shinola is an upscale boutique that sells watches, bicycles and leather goods in the trendy Midtown neighborhood of Detroit. Mr. Williams said he and his wife had checked it out when the store first opened in 2014.

The detective turned over the first piece of paper. It was a still image from a surveillance video, showing a heavyset man, dressed in black and wearing a red St. Louis Cardinals cap, standing in front of a watch display. Five timepieces, worth $3,800, were shoplifted.

“Is this you?” asked the detective.

The second piece of paper was a close-up. The photo was blurry, but it was clearly not Mr. Williams. He picked up the image and held it next to his face.

“No, this is not me,” Mr. Williams said. “You think all black men look alike?”

Mr. Williams knew that he had not committed the crime in question. What he could not have known, as he sat in the interrogation room, is that his case may be the first known account of an American being wrongfully arrested based on a flawed match from a facial recognition algorithm, according to experts on technology and the law.

Mr Williams had a cast-iron alibi, but the Detroit police couldn’t be bothered to check.

He has since figured out what he was doing the evening the shoplifting occurred. He was driving home from work, and had posted a video to his private Instagram because a song he loved came on — 1983’s “We Are One,” by Maze and Frankie Beverly. The lyrics go:

I can’t understand
Why we treat each other in this way
Taking up time
With the silly silly games we play

Imagine a world where this stuff is everywhere, where you’re always in a police line-up.


The history of inquiries into race riots

A sobering (and depressing) piece by the Harvard historian Jill Lepore in the New Yorker.

TL;DR? (In case you’re busy, here’s the gist.)

In a 1977 study, “Commission Politics: The Processing of Racial Crisis in America,” Michael Lipsky and David J. Olson reported that, between 1917 and 1943, at least twenty-one commissions were appointed to investigate race riots, and, however sincerely their members might have been interested in structural change, none of the commissions led to any. The point of a race-riot commission, Lipsky and Olson argue, is for the government that appoints it to appear to be doing something, while actually doing nothing.

It’s the old, old story. What’s the betting the same thing will happen with Boris Johnson’s “cross-government inquiry into all aspects of racial inequality in the UK”?

Lepore’s is a fine piece, well worth reading in full. Thanks to David Vincent for alerting me to it.


Segway, the most hyped invention since the Macintosh, ends production

Very good report on what once looked like a great idea, but one that never caught on. Segways were very useful for TV cameramen and camerawomen covering golf tournaments, though.

My main regret is that I never managed to try one.


Quarantine diary — Day 96

Link


This blog is also available as a daily email. If you think this might suit you better, why not subscribe? One email a day, delivered to your inbox at 7am UK time. It’s free, and there’s a one-click unsubscribe if you decide that your inbox is full enough already!


Tuesday January 21, 2020

Mark Knopfler musing about guitars

This is one of my favourite YouTube videos. Shows you what real mastery is like. Unshowy but unforgettable.


Clearview: the astonishing (but predictable) story

The New York Times had a great story the other day about a tiny firm called Clearview AI which had crafted a program to scrape images of people’s faces from across the Web — employment sites, news sites, educational sites, and social networks including Facebook, YouTube, Twitter and Instagram — and built a facial recognition algorithm derived from academic papers. When a user uploads a photo of a face into Clearview’s system, it converts the face into a vector and then shows all the scraped photos stored in that vector’s neighborhood, along with links to the sites from which those images came. Basically, you upload a photo and in many cases you get a name — often from a social-media posting.

Not surprisingly, police forces seem to like Clearview. One possible reason is that its service seems to be unique. Would-be imitators may have been deterred by the fact that the main social-media sites prohibit image-scraping, something that doesn’t seem to have bothered Clearview. Either that, or they had a lawyer who knew about the case in which LinkedIn tried, and failed, to block and sue scrapers of its public pages. LinkedIn lost, and the judge ruled that not only could it not sue, it wasn’t even allowed to block scraping by technical means. As Ben Evans observed, “Some people celebrated this as a triumph for free competition and the open web – welcome to the unintended consequences”. This case also confirms that facial-recognition technology is becoming a commodity.
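In outline, the matching step the story describes is just nearest-neighbour search over face vectors. Here is a toy sketch of that step only; the embedding model that turns a face into a vector is the hard part and is omitted, and all the vectors and URLs below are made up for illustration.

```python
import numpy as np

def nearest_faces(query, db_vectors, db_urls, k=3):
    """Return the k scraped photos whose face vectors are closest to the query."""
    q = query / np.linalg.norm(query)                                # unit-normalise query
    d = db_vectors / np.linalg.norm(db_vectors, axis=1, keepdims=True)
    sims = d @ q                                                     # cosine similarity
    top = np.argsort(-sims)[:k]                                      # best matches first
    return [(db_urls[i], float(sims[i])) for i in top]

# toy database of three "face vectors" and the pages they were scraped from
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
urls = ["example.com/a", "example.com/b", "example.com/c"]
print(nearest_faces(np.array([1.0, 0.05]), db, urls, k=2))
```

At Clearview’s scale (20bn images) this brute-force scan would be replaced by an approximate nearest-neighbour index, but the principle is the same: the “identification” is nothing more than returning the source pages of the closest stored vectors.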

Interestingly, Peter Thiel is an investor in, and a board member of, Clearview.


There’s a subreddit Reading Group for Alfred Marshall’s Principles of Economics (1920 edition)

It marks the centenary of that edition. Find the Reading Group here


UK government policy on electric vehicles is based on magical thinking

Take, for example, the UK pledge to move entirely to electric vehicles by 2050. I’ve been puzzled for a while about the electricity-generation capacity that would be needed to charge all those vehicles. And then I stumbled on a remarkable letter from a group of relevant scientific experts about the resource implications of such a commitment which was sent to the IPCC in June last year. And I realised that generation is only a smallish part of the story.

It’s well worth reading in full, but here are some of the highlights. To meet UK electric car targets for 2050 the UK would need to produce or acquire just under twice the current total annual world cobalt production, nearly the entire world production of neodymium, three quarters of the world’s lithium production and at least half of the world’s copper production. Oh, and a 20% increase in UK-generated electricity would be required to power the 252.5 billion miles currently driven by UK cars each year.
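That 20% figure is easy to sanity-check with back-of-envelope arithmetic. The mileage is from the letter; the EV efficiency and the current UK generation total are my own rough assumptions, not figures from the letter.

```python
miles_per_year = 252.5e9      # annual UK car mileage, from the letter
kwh_per_mile = 0.25           # assumed EV efficiency (~4 miles per kWh)
uk_generation_twh = 300.0     # assumed current annual UK electricity generation, TWh

extra_twh = miles_per_year * kwh_per_mile / 1e9   # convert kWh to TWh
share = extra_twh / uk_generation_twh
print(f"~{extra_twh:.0f} TWh extra, about {share:.0%} of current generation")
```

With those assumptions the answer comes out at roughly 63 TWh, or about a fifth of current generation, which is consistent with the letter’s claim. And generation is the easy part; the mineral constraints are harder.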

Like I said, magical thinking. Wishing doesn’t make something happen.


Could Mike Bloomberg beat Donald Trump?

Maybe. At least he’s rich enough. But be careful what you wish for. As Jack Shafer neatly points out, Bloomberg is a surveillance addict. A guy who amassed a $54 billion fortune by collecting petabyte upon petabyte of sortable data, would be very keen on enhancing a high-tech surveillance state that would collect personal data as aggressively and as expansively as he and his company do financial data.


Linkblog

Monday 20 January, 2020

Dennis Hopper was a great photographer. Who knew?

Not me, anyway. But last month Mark Rozzo had a fabulous piece in the New Yorker about a new collection of Hopper’s photographs edited by the photographer Michael Schmelling, to whom Marin Hopper (Dennis’s daughter) granted unlimited access to the archive. Hopper received a Nikon F as a gift on his twenty-fifth birthday, in May, 1961, from the actress Brooke Hayward, who would become his first wife. Her father, the agent and producer Leland Hayward, was “a camera nut”, and Brooke paid $351 for it. (Don’t you just love the fact-checked precision of the New Yorker — right down to that last buck!) “Dennis had the greatest eye of anyone I’ve ever known,” Hayward told Rozzo for a story he wrote last year about her marriage to Hopper. “He wore the camera around his neck all day long.” Some of the shots that illustrate the piece are really terrific. Result: one book sold to this blogger. It also reminded me that I have a Nikon F2 that badly needs servicing. Now where did I put it…?

Joe Biden really doesn’t like Silicon Valley

I’m beginning to warm to him. The NYT team did a really extensive on-the-record interview with him (transcript here). Here’s an excerpt from a passage where he’s been asked about his experience of dealing with Facebook about some stuff published on the platform containing false claims that he had blackmailed Ukrainian officials not to investigate his son.

Biden: I’ve never been a fan of Facebook, as you probably know. I’ve never been a big Zuckerberg fan. I think he’s a real problem. I think ——

Charlie Warzel (NYT guy): Can you elaborate?

JB: I can. He knows better. And you know, from my perspective, I’ve been in the view that not only should we be worrying about the concentration of power, we should be worried about the lack of privacy and them being exempt, which you’re not exempt. [The Times] can’t write something you know to be false and be exempt from being sued. But he can. The idea that it’s a tech company is that Section 230 should be revoked, immediately should be revoked, number one. For Zuckerberg and other platforms.

CW: That’s a pretty foundational law of the modern internet.

JB: That’s right. Exactly right. And it should be revoked. It should be revoked because it is not merely an internet company. It is propagating falsehoods they know to be false, and we should be setting standards not unlike the Europeans are doing relative to privacy. You guys still have editors. I’m sitting with them. Not a joke. There is no editorial impact at all on Facebook. None. None whatsoever. It’s irresponsible. It’s totally irresponsible.

CW: If there’s proven harm that Facebook has done, should someone like Mark Zuckerberg be submitted to criminal penalties, perhaps?

JB: He should be submitted to civil liability and his company to civil liability, just like you would be here at The New York Times. Whether he engaged in something and amounted to collusion that in fact caused harm that would in fact be equal to a criminal offense, that’s a different issue. That’s possible. That’s possible it could happen. Zuckerberg finally took down those ads that Russia was running. All those bots about me. They’re no longer being run.

That’s interesting. Revoking Section 230 is the nuclear option in terms of regulation. It would reduce Facebook & Co to gibbering shadows of their former selves. And of course provoke hysteria about the First Amendment, even though Facebook has nothing to do with the Amendment, which is about government — not corporate — regulation of speech.

The EU is considering banning use of facial recognition technology in public spaces

According to Reuters, a White Paper by the European Commission says that a temporary ban on the use of facial recognition technology in public spaces may have to be introduced to bolster existing regulations protecting Europeans’ privacy and data rights. During that ban, of between three and five years, “a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed.” Exceptions to the ban could be made for security projects as well as research and development.

Why do Apple & Google want you to use your phone less?

Nir Eyal (the guy who wrote the book on how to create addictive apps and subsequently seems to have had an attack of developer’s remorse) argues that it’s because they are trying to get ahead of users’ concern about addiction. He sees it as analogous to what happened with seat belts in cars.

In 1968, the Federal Government mandated that seat belts come equipped in all cars. However, nineteen years before any such regulation, American car makers started offering seat belts as a feature. The laws came well after car makers started offering seat belts because that’s what consumers wanted. Car makers who sold safer cars sold more.