After the perfect picture, what?

Photography (in the technical rather than aesthetic sense) was once all about the laws of physics: wavelengths of different kinds of light, the quality of lenses, refractive indices, coatings, scattering, colour rendition, depth of field and so on. And initially, when mobile phones started to have cameras, those laws bore down heavily on them: they had plastic lenses and tiny sensors with poor resolution and light-gathering properties. So the pictures they produced might be useful as mementoes, but were of no interest to anyone who cared about image quality. And given the constraints of size and cost imposed by the economics of handset manufacture and marketing, there seemed to be nothing much that anyone could do about that.

But this view applied only to hardware. The thing we overlooked was that smartphones were rather powerful handheld computers, and it was possible to write software that could augment — or compensate for — the physical limitations of the cameras.

I vividly remember the first time this occurred to me. It was a glorious late afternoon years ago in Provence, and we were taking a friend on a drive round the spectacular Gorges du Verdon. About half-way round we stopped for a drink and stood contemplating the amazing views in the blazing sunlight. I reached for my (high-end) digital camera and struggled fruitlessly, by bracketing exposures, to take photographs that could straddle the impossibly wide dynamic range of the lighting in the scene.

Then, almost as an afterthought, I took out my iPhone, realised that I had downloaded an HDR app, and used that. The results were flawed in terms of colour balance, but it was clear that the software had been able to manage the dynamic range that had eluded my conventional camera. It was my introduction to what has become known as computational photography, a technology that has come on in leaps and bounds ever since that evening in Provence. Computational photography, as Benedict Evans puts it in a perceptive essay, “Cameras that Understand”, means that

“as well as trying to make a better lens and sensor, which are subject to the rules of physics and the size of the phone, we use software (now, mostly, machine learning or ‘AI’) to try to get a better picture out of the raw data coming from the hardware. Hence, Apple launched ‘portrait mode’ on a phone with a dual-lens system but uses software to assemble that data into a single refocused image, and it now offers a version of this on a single-lens phone (as did Google when it copied this feature). In the same way, Google’s new Pixel phone has a ‘night sight’ capability that is all about software, not radically different hardware. The technical quality of the picture you see gets better because of new software as much as because of new hardware.”

Most of how this is done is already invisible to the user, or soon will be. HDR, which used to involve launching a separate app, is now baked into many smartphone cameras, which apply it automatically. Evans expects much the same to happen with ‘portrait mode’ and ‘night sight’: all that stuff will be baked into later releases of the cameras.
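To make the HDR idea concrete, here is a minimal sketch of the sort of thing such an app does with a stack of bracketed exposures. It assumes OpenCV (the opencv-python package) is installed; the file names are purely illustrative, and real phone pipelines are considerably more sophisticated (they align frames first and use learned weighting).

```python
# A minimal sketch of merging bracketed exposures, assuming OpenCV is
# available and that three illustrative exposure files exist on disk.
import cv2
import numpy as np

# Load the bracketed shots (file names are hypothetical).
exposures = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens exposure fusion blends the stack without needing exposure times,
# weighting each pixel by contrast, saturation and well-exposedness.
merge = cv2.createMergeMertens()
fused = merge.process(exposures)  # float image with values in [0, 1]

# Convert back to 8-bit for saving or display.
result = np.clip(fused * 255, 0, 255).astype("uint8")
cv2.imwrite("hdr_fused.jpg", result)
```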

“This will probably”, writes Evans,

“also go several levels further in, as the camera gets better at working out what you’re actually taking a picture of. When you take a photo on a ski slope it will come out perfectly exposed and colour-balanced because the camera knows this is snow and adjusts correctly. Today, portrait mode is doing face detection as well as depth mapping to work out what to focus on; in the future, it will know which of the faces in the frame is your child and set the focus on them.”

So we’re heading for a point at which one will have to work really hard to take a (technically) imperfect photo. Which leads one to ask: what’s next?
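Evans’s point about face detection plus depth mapping can be illustrated with a toy version of the idea. The sketch below assumes OpenCV is installed and uses one of its bundled Haar cascades as a crude stand-in for the learned subject segmentation and depth maps real phones use: it keeps detected faces sharp and blurs the rest of the frame.

```python
# A rough sketch of the 'portrait mode' idea: find a face, treat it as the
# subject, and blur everything else. The Haar cascade is a stand-in for the
# depth mapping and segmentation models a real phone camera would use.
import cv2

img = cv2.imread("photo.jpg")  # illustrative file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Blur the whole frame, then paste the sharp subject regions back on top.
blurred = cv2.GaussianBlur(img, (51, 51), 0)
for (x, y, w, h) in faces:
    # Pad each face box a little so the subject isn't clipped at the edges.
    pad = int(0.4 * max(w, h))
    x0, y0 = max(x - pad, 0), max(y - pad, 0)
    x1 = min(x + w + pad, img.shape[1])
    y1 = min(y + h + pad, img.shape[0])
    blurred[y0:y1, x0:x1] = img[y0:y1, x0:x1]

cv2.imwrite("portrait.jpg", blurred)
```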

Evans thinks that a clue lies in the fact that people increasingly use their smartphone cameras as visual notebooks — taking pictures of recipes, conference schedules, train timetables, books and stuff we’d like to buy. Machine learning, he surmises, can do a lot with those kinds of images.

“If there’s a date in this picture, what might that mean? Does this look like a recipe? Is there a book in this photo and can we match it to an Amazon listing? Can we match the handbag to Net a Porter? And so you can imagine a suggestion from your phone: ‘do you want to add the date in this photo to your diary?’ in much the same way that today email programs extract flights or meetings or contact details from emails.”
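A toy version of that suggestion is easy to imagine. The sketch below, which assumes the pytesseract and python-dateutil packages are installed (my choices for illustration, not anything Evans mentions), simply OCRs a snapshot and looks for something date-like to offer as a diary entry.

```python
# A toy sketch of the "is there a date in this photo?" idea. It OCRs the
# image and scans each line for something date-like. A naive approach:
# fuzzy parsing will produce false positives on stray numbers, and real
# systems use far more robust vision models.
import pytesseract
from PIL import Image
from dateutil import parser

text = pytesseract.image_to_string(Image.open("snapshot.jpg"))  # illustrative

found = None
for line in text.splitlines():
    try:
        found = parser.parse(line, fuzzy=True)
        break
    except (ValueError, OverflowError):
        continue  # no recognisable date on this line

if found:
    print(f"Do you want to add {found:%d %B %Y} to your diary?")
```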

Apparently Google Lens is already doing something like this on Android phones.

Facebook’s targeting engine: still running smoothly on all cylinders

Well, well. Months, indeed years, after the various experiments with Facebook’s targeting engine showed how good it was at recommending unsavoury audiences, this latest report by the Los Angeles Times shows that it has lost none of its imaginative acuity.

Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” the neo-Nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.

Experts say that this practice runs counter to the company’s stated principles and can help fuel radicalization online.

“What you’re describing, where a clear hateful idea or narrative can be amplified to reach more people, is exactly what they said they don’t want to do and what they need to be held accountable for,” said Oren Segal, director of the Anti-Defamation League’s center on extremism.

Note also that the formulaic Facebook response hasn’t changed either:

After being contacted by The Times, Facebook said that it would remove many of the audience groupings from its ad platform.

“Most of these targeting options are against our policies and should have been caught and removed sooner,” said Facebook spokesman Joe Osborne. “While we have an ongoing review of our targeting options, we clearly need to do more, so we’re taking a broader look at our policies and detection methods.”

Ah, yes. That ‘broader look’ again.

Word-processing vs. writing

While you sit at your computer now, the world seethes behind the letters as they appear on the screen. You can toggle to a football match, a parliamentary debate, a tsunami. A beep tells you that an e-mail has arrived. WhatsApp flashes on the screen. Interruption is constant but also desired. Or at least you’re conflicted about it. You realize that the people reading what you have written will also be interrupted. They are also sitting at screens, with smartphones in their pockets. They won’t be able to deal with long sentences, extended metaphors. They won’t be drawn into the enchantment of the text. So should you change the way you write accordingly? Have you already changed, unwittingly?

Or should you step back? Time to leave your computer and phone in one room, perhaps, and go and work silently on paper in another. To turn off the Wi-Fi for eight hours. Just as you once learned not to drink everything in the hotel minibar, not to eat too much at free buffets, now you have to cut down on communication. You have learned how compulsive you are, how fragile your identity, how important it is to cultivate a little distance. And your only hope is that others have learned the same lesson. Otherwise, your profession, at least as you thought of it, is finished.

Tim Parks

Facebook: the regulatory noose tightens

This is a big day. The DCMS Select Committee has published its scarifying report into Facebook’s sociopathic exploitation of its users’ data and its cavalier attitude towards both legislators and the law. As I write, the company is reportedly negotiating with the Federal Trade Commission (FTC), the US regulator, over the multi-billion-dollar fine the agency is likely to levy on it for breaking its 2011 Consent Decree.

Couldn’t happen to nastier people.

In the meantime, for those who don’t have the time to read the 110-page DCMS report, TechCrunch has an impressive and helpful summary, provided you don’t mind the rather oppressive GDPR spiel that accompanies it.

The inescapable infrastructure of the networked world

This morning’s Observer column:

“Quitting smoking is easy,” said Mark Twain. “I’ve done it hundreds of times.” Much the same goes for smartphones. As increasing numbers of people begin to realise that they have a smartphone habit they begin to wonder if they should do something about the addiction. A few (a very few, in my experience) make the attempt, switching their phones off after work, say, and not rebooting them until the following morning. But almost invariably the dash for freedom fails and the chastened fugitive returns to the connected world.

The technophobic tendency to attribute this failure to lack of moral fibre should be resisted. It’s not easy to cut yourself off from a system that links you to friends, family and employer, all of whom expect you to be contactable and sometimes get upset when you’re not. There are powerful network effects in play here against which the individual addict is helpless. And while “just say no” may be a viable strategy in relation to some services (for example, Facebook), it is now a futile one in relation to the networked world generally. We’re long past the point of no return in our connected lives.

Most people don’t realise this. They imagine that if they decide to stop using Gmail or Microsoft Outlook or never buy another book from Amazon then they have liberated themselves from the tentacles of these giants. If that is indeed what they believe, then Kashmir Hill has news for them…

Read on

What makes a ‘tech’ company?

The BlackRock Blog points out that something strange is going on in the investment world.

MSCI and S&P are updating their Global Industry Classification Standards (GICS), a framework developed in 1999, to reflect major changes to the global economy and capital markets, particularly in technology.

Take Google, a company long synonymous with “tech” and internet software. Google parent Alphabet derives the bulk of its revenue from advertising, but also makes money from apps and hardware, and operates side ventures including Waymo, a unit that makes self-driving cars. Decisions about what makes a “tech” giant are not as simple as they once were.

The sector classification overhaul, set in motion last year, will begin in September and affect three of the 11 sector classifications that divide the global stock market. A newly created Communications Services sector will replace a grouping that is currently called Telecommunications Services. The new group will be populated by legacy Telecom stocks, as well as certain stocks from the Information Technology and Consumer Discretionary categories.

What does this mean?

Facebook and Alphabet will move from Information Technology to Communications Services in GICS-tracking indexes. Meanwhile, Netflix will move from Consumer Discretionary to Communications Services. None of what the media has dubbed the FANG stocks (Facebook, Amazon.com, Netflix and Google parent Alphabet) will be classified as Information Technology after the GICS changes, perhaps a surprise to those who think of internet innovation as “tech.” The same applies to China’s BAT stocks (Baidu, Alibaba Group and Tencent). All of these were Information Technology stocks before the changes; none will be after.

Or, in a tabular view:

Stock      Old GICS sector           New GICS sector
Facebook   Information Technology    Communications Services
Alphabet   Information Technology    Communications Services
Netflix    Consumer Discretionary    Communications Services

This change is probably only significant for index funds, but still, it must rather dent the self-image of the ‘tech’ boys to be categorised as merely “communications services”!