Quote of the day

“When it’s impossible to distinguish facts from fraud, actual facts lose their power. Dissidents can end up putting their lives on the line to post a picture documenting wrongdoing only to be faced with an endless stream of deliberately misleading claims: that the picture was taken 10 years ago, that it’s from somewhere else, that it’s been doctored.

As we shift from an era when realistic fakes were expensive and hard to create to one where they’re cheap and easy, we will inevitably adjust our norms. In the past, it often made sense to believe something until it was debunked; in the future, for certain information or claims, it will start making sense to assume they are fake. Unless they are verified.”

Zeynep Tufekci

Why is Absher available on the UK App store?

NPR headline: “Apple, Google Criticized For Carrying App That Lets Saudi Men Track Their Wives”.

An app that allows Saudi men to track the whereabouts of their wives and daughters is available in the Apple and Google app stores in Saudi Arabia.

But the U.S. tech giants are getting blowback from human rights activists and lawmakers for carrying the app.

The app, called Absher, was created by the National Information Center, which according to a Saudi government website is a project of the Saudi Ministry of Interior.

The description of the app in both stores says that with Absher, “you can safely browse your profile or your family members, or [laborers] working for you, and perform a wide range of eServices online.”

In Saudi Arabia, women’s lives are highly restricted. For example, according to Human Rights Watch, women have always needed permission from a male guardian, usually a father or husband, to leave the country. In the past, paper forms were required prior to travel.

So why is this noxious app freely available on the Apple App store in the UK? (This morning I checked to see if it was — and it is.)

What Trump really wants

Nice NYT column by Paul Krugman, largely about why autocrats can deal with Trump but countries with an old-fashioned attachment to the rule of law cannot:

Trade conflict is essentially Trump’s personal vendetta — one that he is able to pursue because U.S. international trade law gives the president enormous discretion to impose tariffs on a variety of grounds. Predicting trade policy is therefore about figuring out what’s going on in one man’s mind.

Now, there are real reasons for the U.S. to be angry at China, and demand policy changes. Above all, China notoriously violates the spirit of international trade rules, de facto restricting foreign companies’ access to its market unless they hand over valuable technology. So you could make a case for U.S. pressure on China — coordinated with other advanced economies! — to stop that practice.

But there has been little evidence that Trump is interested in dealing with the real China problem. I was at a trade policy conference over the weekend where experts were asked what Trump really wants; the most popular answer was “tweetable deliveries.”

Lovely phrase that: tweetable deliveries.

Xi Jinping’s Little Red App

This morning’s Observer column:

We need to update Marx’s famous aphorism that “history repeats itself, the first time as tragedy, the second time as farce”. Version 2.0 reads: history repeats itself, the first time as tragedy, the second time as an app. Readers with long memories will remember Mao Zedong, the chairman (for life) of the Chinese Communist party who, in 1966, launched his Cultural Revolution to preserve Chinese communism by purging remnants of capitalist and traditional elements from Chinese society and reimposing his ideas (aka Maoism) as the dominant ideology within the party. One propaganda aid devised for this purpose was a little red book, printed in the hundreds of millions, entitled Quotations From Chairman Mao Tse-tung.

The “revolution” unleashed chaos in China: millions of citizens were persecuted, suffering outrageous abuses including public humiliation, arbitrary imprisonment, torture, hard labour, sustained harassment, seizure of property and worse…

Read on

After the perfect picture, what?

Photography (in the technical rather than aesthetic sense) was once all about the laws of physics — wavelengths of different kinds of light, quality of lenses, refractive indices, coatings, scattering, colour rendition, depth of field and so on. And initially, when mobile phones started to have cameras, those laws bore down heavily on them: they had plastic lenses and tiny sensors with poor resolution and light-gathering properties. So the pictures they produced might be useful as mementoes, but were of no practical use to anyone interested in the quality of images. And given the constraints of size and cost imposed by the economics of handset manufacture and marketing, there seemed to be nothing much that anyone could do about that.

But this view applied only to hardware. The thing we overlooked was that smartphones are rather powerful handheld computers, and that it was possible to write software that could augment — or compensate for — the physical limitations of the cameras.

I vividly remember the first time this occurred to me. It was a glorious late afternoon years ago in Provence and we were taking a friend on a drive round the spectacular Gorges du Verdon. About half-way round we stopped for a drink and stood contemplating the amazing views in the blazing sunlight. I reached for my (high-end) digital camera and fruitlessly struggled (by bracketing exposures) to take some photographs that could straddle the impossibly wide dynamic range of the lighting in the scene.

Then, almost as an afterthought, I took out my iPhone, realised that I had downloaded an HDR app, and so used that. The results were flawed in terms of colour balance, but it was clear that the software had been able to manage the dynamic range that had eluded my conventional camera. It was my introduction to what has become known as computational photography — a technology that has come on in leaps and bounds ever since that evening in Provence. Computational photography, as Benedict Evans puts it in a perceptive essay, “Cameras that Understand”, means that

“as well as trying to make a better lens and sensor, which are subject to the rules of physics and the size of the phone, we use software (now, mostly, machine learning or ‘AI’) to try to get a better picture out of the raw data coming from the hardware. Hence, Apple launched ‘portrait mode’ on a phone with a dual-lens system but uses software to assemble that data into a single refocused image, and it now offers a version of this on a single-lens phone (as did Google when it copied this feature). In the same way, Google’s new Pixel phone has a ‘night sight’ capability that is all about software, not radically different hardware. The technical quality of the picture you see gets better because of new software as much as because of new hardware.”

Most of how this is done is already — or soon will be — invisible to the user. HDR, which used to involve launching a separate app, is now baked into many smartphone cameras, which apply it automatically. Evans assumes that much the same will happen with ‘portrait mode’ and ‘night sight’: all that stuff will be baked into later releases of the cameras.

“This will probably”, writes Evans,

“also go several levels further in, as the camera gets better at working out what you’re actually taking a picture of. When you take a photo on a ski slope it will come out perfectly exposed and colour-balanced because the camera knows this is snow and adjusts correctly. Today, portrait mode is doing face detection as well as depth mapping to work out what to focus on; in the future, it will know which of the faces in the frame is your child and set the focus on them.”

So we’re heading for a point at which one will have to work really hard to take a (technically) imperfect photo. Which leads one to ask: what’s next?
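As an aside, the trick that rescued my Provence shot is, at heart, exposure fusion: weight each bracketed frame, pixel by pixel, by how well exposed it is, then blend. Here is a deliberately minimal sketch of the idea (my own toy illustration, not any phone maker’s actual pipeline; images are assumed to be arrays scaled to the range 0–1, and `fuse_exposures` is a name I have invented):

```python
import numpy as np

def fuse_exposures(brackets):
    """Blend a stack of bracketed exposures into one image, weighting
    each pixel by how close it sits to mid-grey (0.5) -- the crude core
    of Mertens-style exposure fusion."""
    stack = np.stack([b.astype(np.float64) for b in brackets])
    # Well-exposedness weight: a Gaussian centred on mid-grey.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalise per pixel
    return (weights * stack).sum(axis=0)

# Toy scene: one under- and one over-exposed frame of the same two pixels.
dark = np.array([[0.05, 0.45]])    # shadows crushed, mid-tones usable
bright = np.array([[0.50, 0.98]])  # shadows recovered, highlight blown
fused = fuse_exposures([dark, bright])
# Each fused pixel lands near whichever frame exposed it best.
```

A real implementation would also weight by contrast and saturation and blend across a multi-scale pyramid to avoid seams, but the principle — let software pick the best-exposed data per pixel — is the one that beat my conventional camera that afternoon.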

Evans thinks that a clue lies in the fact that people increasingly use their smartphone cameras as visual notebooks — taking pictures of recipes, conference schedules, train timetables, books and stuff we’d like to buy. Machine learning, he surmises, can do a lot with those kinds of images.

“If there’s a date in this picture, what might that mean? Does this look like a recipe? Is there a book in this photo and can we match it to an Amazon listing? Can we match the handbag to Net a Porter? And so you can imagine a suggestion from your phone: ‘do you want to add the date in this photo to your diary?’ in much the same way that today email programs extract flights or meetings or contact details from emails.”

Apparently Google Lens is already doing something like this on Android phones.
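Evans’s “date in this picture” example is the easy end of this: once OCR has turned the photo into text, spotting the date is a pattern-matching job. A rough sketch (assuming the OCR step has already been done by something else; `find_date` is my hypothetical helper, not a Google Lens API):

```python
import re
from datetime import date

MONTHS = {m.lower(): i for i, m in enumerate(
    ["January", "February", "March", "April", "May", "June", "July",
     "August", "September", "October", "November", "December"], start=1)}

def find_date(ocr_text):
    """Scan OCR'd text from a photo for a 'day month year' date and
    return it as a datetime.date, or None if nothing matches."""
    pattern = r"\b(\d{1,2})\s+(" + "|".join(MONTHS) + r")\s+(\d{4})\b"
    m = re.search(pattern, ocr_text, re.IGNORECASE)
    if not m:
        return None
    day = int(m.group(1))
    month = MONTHS[m.group(2).lower()]
    year = int(m.group(3))
    return date(year, month, day)

# e.g. text recovered from a photo of a conference schedule:
find_date("Keynote: 14 March 2019, Hall B")  # -> date(2019, 3, 14)
```

The hard part, of course, is everything before this — reliably reading arbitrary text out of a snapshot, and deciding that the photo is a schedule rather than, say, a handbag.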

Facebook’s targeting engine: still running smoothly on all cylinders

Well, well. Months, even years, after the various experiments showing how good Facebook’s targeting engine was at recommending unsavoury audiences, this latest report by the Los Angeles Times shows that it has lost none of its imaginative acuity.

Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” the neo-Nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.

Experts say that this practice runs counter to the company’s stated principles and can help fuel radicalization online.

“What you’re describing, where a clear hateful idea or narrative can be amplified to reach more people, is exactly what they said they don’t want to do and what they need to be held accountable for,” said Oren Segal, director of the Anti-Defamation League’s center on extremism.

Note also that the formulaic Facebook response hasn’t changed either:

After being contacted by The Times, Facebook said that it would remove many of the audience groupings from its ad platform.

“Most of these targeting options are against our policies and should have been caught and removed sooner,” said Facebook spokesman Joe Osborne. “While we have an ongoing review of our targeting options, we clearly need to do more, so we’re taking a broader look at our policies and detection methods.”

Ah, yes. That ‘broader look’ again.