It’s hard to believe, but Apple has 800 people working just on the iPhone camera. Every so often we get a glimpse of what they are doing. Essentially, they’re using computation to enhance what can be obtained from a fairly small sensor. One sees this in the way HDR (High Dynamic Range) seems to be built into every iPhone X photograph. And now we’re seeing it in the way the camera can produce convincing bokeh (the blur in the out-of-focus parts of an image) of a kind that could hitherto be obtained only from particular optical lenses at wide apertures.
Matthew Panzarino, who is a professional photographer, has a useful review of the new iPhone XS in which he comments on this:
Unwilling to settle for a templatized bokeh that felt good and leave it at that, the camera team went the extra mile and created an algorithmic model that contains virtual ‘characteristics’ of the iPhone XS’s lens. Just as a photographer might pick one lens or another for a particular effect, the camera team built out the bokeh model after testing a multitude of lenses from all of the classic camera systems.
Really striking, though, is an example Panzarino uses of how a post-hoc adjustable depth of focus can be really useful. He shows a photograph of himself with his young son perched on his shoulders.
And an adjustable depth of focus isn’t just good for blurring, it’s also good for un-blurring. This portrait mode selfie placed my son in the blurry zone because it focused on my face. Sure, I could turn the portrait mode off on an iPhone X and get everything sharp, but now I can choose to “add” him to the in-focus area while still leaving the background blurry. Super cool feature I think is going to get a lot of use.
Yep. Once, photography was all about optics. From now on it’ll increasingly be about computation.
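The idea behind that adjustable depth of focus can be sketched in a few lines. This is a toy illustration only, not Apple’s actual pipeline: given a per-pixel depth map, keep the pixels near a chosen focal plane sharp and composite everything else from a blurred copy. The function and parameter names here (`refocus`, `focal_depth`, `tolerance`) are my own, and a crude box blur stands in for a real lens model.

```python
import numpy as np

def box_blur(img, radius=2):
    # Average each pixel over a (2r+1) x (2r+1) neighbourhood,
    # built from shifted copies of the image (edges wrap around).
    acc = np.zeros_like(img, dtype=float)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            count += 1
    return acc / count

def refocus(img, depth, focal_depth, tolerance=0.2, radius=2):
    # Pixels whose depth is close to the chosen focal plane stay sharp;
    # the rest are replaced with the blurred copy (the synthetic "bokeh").
    blurred = box_blur(img, radius)
    in_focus = np.abs(depth - focal_depth) <= tolerance
    return np.where(in_focus, img, blurred)
```

Because the focal plane is just a parameter, “adding” someone to the in-focus zone after the fact — as Panzarino does with his son — amounts to widening the tolerance or shifting `focal_depth`, which is exactly what an optical lens cannot do once the shot is taken.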
This morning’s Observer column:
This is a month of anniversaries, of which two in particular stand out. One is that it’s 10 years since the seismic shock of the banking crisis – one of the consequences of which is the ongoing unravelling of the (neo)liberal democracy so beloved of western ruling elites. The other is that it’s 20 years since Google arrived on the scene.
Future historians will see our era divided into two ages: BG and AG – before and after Google. For web users in the 1990s search engines were a big deal, because as the network exploded, finding anything on it became increasingly difficult. Like many of my peers, I used AltaVista, a search tool developed by the then significant Digital Equipment Corporation (DEC), which was the best thing available at the time.
And then one day, word spread like wildfire online about a new search engine with a name bowdlerised from googol, the mathematical term for a huge number (10 to the power of 100). It was clean, fast and delivered results derived from conducting a kind of peer review of all the sites on the web. Once you tried it, you never went back to AltaVista.
Twenty years on, it’s still the same story…
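That “peer review of all the sites on the web” is, of course, PageRank: a page is important if important pages link to it. Here is a toy sketch of the iterative idea, on a hypothetical four-page web of my own invention — not Google’s implementation, which handles scale, spam and much else besides.

```python
# Toy PageRank: each page's score is the damped sum of the scores
# of the pages linking to it, iterated until it settles.
def pagerank(links, d=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += d * share
            else:
                # Dangling page: spread its rank evenly everywhere.
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Hypothetical four-page web: C is the most linked-to page.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(web)
```

The point of the “peer review” metaphor is visible even at this scale: page C, cited by three of its peers, ends up with the highest score, and the scores always sum to one.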
Lovely, perceptive essay by Benedict Evans. Here’s how it opens…
When Nokia people looked at the first iPhone, they saw a not-great phone with some cool features that they were going to build too, being produced at a small fraction of the volumes they were selling. They shrugged. “No 3G, and just look at the camera!”
When many car company people look at a Tesla, they see a not-great car with some cool features that they’re going to build too, being produced at a small fraction of the volumes they’re selling. “Look at the fit and finish, and the panel gaps, and the tent!”
The Nokia people were terribly, terribly wrong. Are the car people wrong? We hear that a Tesla is ‘the new iPhone’ – what would that mean?
This is partly a question about Tesla, but it’s more interesting as a way to think about what happens when ‘software eats the world’ in general, and when tech moves into new industries. How do we think about whether something is disruptive? If it is, who exactly gets disrupted? And does that disruption mean one company wins in the new world? Which one?
Well worth reading in full.
This morning’s Observer column:
Since its inception, it’s been the butt of jokes, a focus for academic ire and a victim of epistemological snobbery. I remember one moment when the vice-chancellor of a top university made a dismissive remark about Wikipedia, only to have a world-leading chemist in the audience icily retort that the pages on his particular arcane speciality were the most up-to-date summary currently available anywhere – because he wrote them. And this has been my experience; in specialist areas, Wikipedia pages are often curated by experts and are usually the best places to gain an informed and up-to-date overview.
Because Wikipedia is so vast and varied (in both range and quality), the controversies it engenders have traditionally been about its content and rarely about its modus operandi and its governance. Which is a pity, because in some ways these are the most significant aspects of the project. The political events of the last two years should have alerted us to the fact that Wikipedia had to invent a way of tackling the problem that now confronts us at a global level: how to get at some approximation to the truth…
“Services like Uber and online freelance markets like TaskRabbit were created to take advantage of an already independent work force; they are not creating it. Their technology is solving the business and consumer problems of an already insecure work world. Uber is a symptom, not a cause.”
Louis Hyman, an economic historian, writing in the New York Times
He has a new book coming out soon – Temp: How American Work, American Business, and the American Dream Became Temporary.
This morning’s Observer column:
Here’s the $64,000 question for our time: how did digital technologies go from being instruments for spreading democracy to tools for undermining it? Or, to put it a different way, how did social media go from empowering free speech to becoming a cornerstone of authoritarian power?
I ask this as a distressed, recovering techno-utopian. Like many engineers of my generation, I believed that the internet would be the most empowering and liberating technology since the invention of printing by moveable type. And once the web arrived, and anyone who could type could become a global publisher, it seemed to me that we were on the verge of something extraordinary. The old editorial gatekeepers of the pre-internet media world would lose their stranglehold on public discourse; human creativity would be unleashed; a million flowers would bloom in a newly enriched and democratised public sphere. In such a decentralised world, authoritarianism would find it hard to get a grip. A political leader such as Donald Trump would be unthinkable.
Naive? Sure. But I was in good company…
I’ve been pondering the problem of how to make a reasonably successful organisation that’s been going for half a century realise that it may need to make some major shifts to address the challenges it will face in the next half-century. Headline: it ain’t easy. So then I started thinking about organisations that have managed the switch. Apple and the iPhone is one, I guess. But then I remembered that I’d once done an interview with Gordon Moore, the co-founder of Intel — a company which made that kind of radical switch when moving from making semiconductor memory to making processor chips. And that wasn’t easy either.
I then happened upon a famous essay – “Seven Chapters of Strategic Wisdom” by Walter Kiechel III — which discusses the Intel experience. Here’s the relevant bit:
Just how difficult it is to pull this off, or to make any major change in strategic direction, is wonderfully captured in “Why Not Do It Ourselves?” the fifth chapter in Andrew S. Grove’s Only the Paranoid Survive: How to Exploit the Crisis Points That Challenge Every Company, published in 1996. Grove tells how in 1985 he and Gordon Moore realized that Intel, the company they led, needed to get out of the business on which it was founded, making semiconductor memory chips, to concentrate instead on microprocessors. The reaction they encountered as they navigated their company through this “strategic inflection point” won’t surprise anyone who has tried to effect change in an organization. “How can you even think of doing this?” came the chorus from the heads of the company’s memory-chip operations. “Look at all the nifty stuff we’ve got in the pipeline” (even if we are losing our collective shirt to low-cost Japanese competitors).
Grove and Moore persisted, even though the effort entailed shutting down plants, laying off thousands of employees, and giving up what many thought of as the company’s birthright. Intel’s subsequent success in microprocessors, beginning with its celebrated “386” model, would soon make it the world’s largest semiconductor company. Read over the tale of what it took to get there if, in a delusional moment, you’re ever tempted to think that putting strategy into practice is easy, even a seemingly emergent strategy.
From The Inquirer:
The end result was that the researchers had effectively found ways to hack and exploit WhatsApp.
“By decrypting the WhatsApp communication, we were able to see all the parameters that are actually sent between the mobile version of WhatsApp and the Web version. This allowed us to then be able to manipulate them and start looking for security issues,” the researchers explained.
As such, Check Point was able to then carry out three attacks against WhatsApp users, including changing the identity of a sender in a group chat even if they aren’t a member of said chat, changing a correspondent’s reply to effectively fake their response, and sending private messages to a person in a chat group but ensuring that when they respond the whole group sees the reply.
Basically, the attacks could enable malicious actors to sneak into group chats, manipulate conversations, cause communications havoc and spread misinformation.
Hmmm… They had to do an awful lot of tedious stuff before they were able to pull off those tricks. On the other hand, this is what GCHQ and NSA do all the time, I guess.
Ben Evans is one of the most perceptive observers of the tech industry.
When I bought my first Toyota Prius hybrid many years ago I marvelled at the engineering ingenuity that went into making hybrid tech so seamless. And then realised that (a) Toyota would license the drivetrain to other manufacturers and (b) the technology would eventually be commoditised. So now almost every car manufacturer offers hybrid models even though few of them actually developed the drivetrain themselves. It’s Brian Arthur’s model of technological innovation at work.
The iPhone — multitouch — analogy is useful. Most smartphones are not iPhones, but most of the profits from smartphones are currently captured by Apple. The big question for Tesla is whether — when electric cars become mundane — it can hold onto Apple-scale margins. In that context, you could say that Nissan — with its Leaf — might be the Samsung of the electric car business.
From The Register this morning:
The latest version of TensorFlow can now be run on the Raspberry Pi.
“Thanks to a collaboration with the Raspberry Pi Foundation, we’re now happy to say that the latest 1.9 release of TensorFlow can be installed from pre-built binaries using Python’s pip package system,” according to a blog post written by Pete Warden, an engineer working on the TensorFlow team at Google.
It’s pretty easy to install if you’ve got a Raspberry Pi running Raspbian 9.0 and either Python 2.7 or anything newer than Python 3.4. After that it’s only a few simple lines of code, and you’re done.
Here’s a quick overview of how to install it; it also includes some troubleshooting advice in case you run into problems.
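For reference, the “few simple lines” boil down to something like the following on Raspbian 9 with Python 3. Treat this as a sketch — exact package names and steps may vary with your Raspbian and Python setup.

```shell
# Make sure pip is available, then pull the pre-built TensorFlow wheel.
sudo apt update
sudo apt install python3-pip
pip3 install tensorflow

# Quick smoke test: print the installed TensorFlow version.
python3 -c "import tensorflow as tf; print(tf.__version__)"
```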