I’ve been pondering the problem of how to make a reasonably-successful organisation that’s been going for half a century realise that it may need to make some major shifts to address the challenges it will face in the next half-century. Headline: it ain’t easy. So then I started thinking about organisations that have managed the switch. Apple and the iPhone is one, I guess. But then I remembered that I’d once done an interview with Gordon Moore, the co-founder of Intel — a company which made that kind of radical switch when moving from making semiconductor memory to making processor chips. And that wasn’t easy either.
I then happened upon a famous essay, “Seven Chapters of Strategic Wisdom” by Walter Kiechel III, which discusses the Intel experience. Here’s the relevant bit:
Just how difficult it is to pull this off, or to make any major change in strategic direction, is wonderfully captured in “Why Not Do It Ourselves?” the fifth chapter in Andrew S. Grove’s Only the Paranoid Survive: How to Exploit the Crisis Points That Challenge Every Company, published in 1996. Grove tells how in 1985 he and Gordon Moore realized that Intel, the company they led, needed to get out of the business on which it was founded, making semiconductor memory chips, to concentrate instead on microprocessors. The reaction they encountered as they navigated their company through this “strategic inflection point” won’t surprise anyone who has tried to effect change in an organization. “How can you even think of doing this?” came the chorus from the heads of the company’s memory-chip operations. “Look at all the nifty stuff we’ve got in the pipeline” (even if we are losing our collective shirt to low-cost Japanese competitors).
Grove and Moore persisted, even though the effort entailed shutting down plants, laying off thousands of employees, and giving up what many thought of as the company’s birthright. Intel’s subsequent success in microprocessors, beginning with its celebrated “386” model, would soon make it the world’s largest semiconductor company. Read over the tale of what it took to get there if, in a delusional moment, you’re ever tempted to think that putting strategy into practice is easy, even a seemingly emergent strategy.
From The Inquirer:
The end result was that the researchers had effectively found ways to hack and exploit WhatsApp.
“By decrypting the WhatsApp communication, we were able to see all the parameters that are actually sent between the mobile version of WhatsApp and the Web version. This allowed us to then be able to manipulate them and start looking for security issues,” the researchers explained.
As such, Check Point was able to then carry out three attacks against WhatsApp users, including changing the identity of a sender in a group chat even if they aren’t a member of said chat, changing a correspondent’s reply to effectively fake their response, and sending private messages to a person in a chat group but ensuring that when they respond the whole group sees the reply.
Basically, the attacks could enable malicious actors to sneak into group chats, manipulate conversations, cause communications havoc and spread misinformation.
Hmmm… They had to do an awful lot of tedious stuff before they were able to pull off those tricks. On the other hand, this is what GCHQ and NSA do all the time, I guess.
Ben Evans is one of the most perceptive observers of the tech industry.
When I bought my first Toyota Prius hybrid many years ago I marvelled at the engineering ingenuity that went into making hybrid tech so seamless. And then realised that (a) Toyota would license the drivetrain to other manufacturers and (b) the technology would eventually be commoditised. So now almost every car manufacturer offers hybrid models even though few of them actually developed the drivetrain themselves. It’s Brian Arthur’s model of technological innovation at work.
The iPhone — multitouch — analogy is useful. Most smartphones are not iPhones, but most of the profits from smartphones are currently captured by Apple. The big question for Tesla is whether — when electric cars become mundane — it can hold onto Apple-scale margins. In that context, you could say that Nissan — with its Leaf — might be the Samsung of the electric car business.
From The Register this morning:
The latest version of TensorFlow can now be run on the Raspberry Pi.
“Thanks to a collaboration with the Raspberry Pi Foundation, we’re now happy to say that the latest 1.9 release of TensorFlow can be installed from pre-built binaries using Python’s pip package system,” according to a blog post written by Pete Warden, an engineer working on the TensorFlow team at Google.
It’s pretty easy to install if you’ve got a Raspberry Pi running Raspbian 9.0 and either Python 2.7 or anything newer than Python 3.4. After that it’s only a few simple lines of code, and you’re done.
Here’s a quick overview of how to install it; it also includes some troubleshooting advice in case you run into problems.
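For the curious, the “few simple lines” boil down to something like this — a sketch only, assuming the package names mentioned in the TensorFlow blog post (the `libatlas-base-dev` dependency is an assumption about what the pre-built wheel needs on Raspbian, and exact steps may differ on your setup):

```shell
# Sketch: installing TensorFlow 1.9 on a Raspberry Pi running Raspbian 9
# (assumed package names; check the official blog post if anything fails)
sudo apt update
sudo apt install -y python3-pip libatlas-base-dev  # pip plus a BLAS library the wheel links against
pip3 install tensorflow                            # fetches the pre-built Pi binary
python3 -c "import tensorflow as tf; print(tf.__version__)"  # sanity-check the install
```

The last line simply confirms that the import works and reports the installed version.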
This morning’s Observer column:
”Any sufficiently advanced technology,” wrote the sci-fi eminence grise Arthur C Clarke, “is indistinguishable from magic.” This quotation, endlessly recycled by tech boosters, is possibly the most pernicious utterance Clarke ever made because it encourages hypnotised wonderment and disables our critical faculties. For if something is “magic” then by definition it is inexplicable. There’s no point in asking questions about it; just accept it for what it is, lie back and suspend disbelief.
Currently, the technology that most attracts magical thinking is artificial intelligence (AI). Enthusiasts portray it as the most important thing since the invention of the wheel. Pessimists view it as an existential threat to humanity: the first “superintelligent” machine we build will be the beginning of the end for humankind; the only question thereafter will be whether smart machines will keep us as pets.
In both cases there seems to be an inverse correlation between the intensity of people’s convictions about AI and their actual knowledge of the technology…
Interesting thought from Tim Harford in his FT column:
Consider an idea dreamt up in 1978, released in October 1979, and so revolutionary that the journalist Steven Levy could write just five years later: “There are corporate executives, wholesalers, retailers, and small business owners who talk about their business lives in two time periods: before and after the electronic spreadsheet.”
Spreadsheet software redefined what it meant to be an accountant. Spreadsheets were once a literal thing: two-page spreads in a paper ledger. Fill them in, and make sure all the rows and columns add up. The output of several spreadsheets would then be the input for some larger, master spreadsheet. Making an alteration might require hours of work with a pencil, eraser, and desk calculator.
Once a computer programmer named Dan Bricklin came up with the idea of putting the piece of paper inside a computer, it is easy to see why digital spreadsheets caught on almost overnight.
But did the spreadsheet steal jobs? Yes and no. It certainly put a sudden end to a particular kind of task — the task of calculating, filling in, checking and correcting numbers on paper spreadsheets. National Public Radio’s Planet Money programme concluded that in the 35 years after Mr Bricklin’s VisiCalc was launched, the US lost 400,000 jobs for book-keepers and accounting clerks.
Meanwhile, 600,000 jobs appeared for other kinds of accountant. Accountancy had become cheaper and more powerful, so people demanded more of it.
Nice illustration of the complexity of the ‘automation = job-killer’ argument.
From an interesting (if sometimes chaotic) interview by Kara Swisher with Adam Fisher, author of Valley of Genius: The Uncensored History of Silicon Valley:
“Silicon Valley still actually makes things, but less and less. We had an economy that was based on making things first, making chips and then computers, and then making bits of software, and then at some point we started getting everything for free; in quotes, “free.” And it stopped being an economy that made things. It became an economy where people made money by extracting things, by mining data.
So it flipped from a making economy to an extraction economy, and we have all the dysfunction that you would see in a mining site in the third world. Mining economies, extraction economies, are kind of corrupt economies because one person or one company ends up controlling everything.”
Fascinating, rambling interview with stories that sometimes bring one up short. Worth reading (or listening to) in full.
Useful essay in the Guardian by Oscar Schwartz on the clickbait-driven inanity of public discourse about AI. Sample:
Zachary Lipton, an assistant professor at the machine learning department at Carnegie Mellon University, watched with frustration as this story transformed from “interesting-ish research” to “sensationalized crap”.
According to Lipton, in recent years broader interest in topics like “machine learning” and “deep learning” has led to a deluge of this type of opportunistic journalism, which misrepresents research for the purpose of generating retweets and clicks – he calls it the “AI misinformation epidemic”. A growing number of researchers working in the field share Lipton’s frustration, and worry that the inaccurate and speculative stories about AI, like the Facebook story, will create unrealistic expectations for the field, which could ultimately threaten future progress and the responsible application of new technologies.
Good stuff. Lipton’s blog is terrific btw.
One quick tip for improving coverage. Most stuff labelled as “AI” is actually just machine learning. So why not say that?
This morning’s Observer column:
In their book, The Future of Violence, Benjamin Wittes and Gabriella Blum point out that one of the things that made the Roman empire so powerful was its amazing network of paved roads. This network made it easy to move armies relatively quickly. But it also made it possible to move goods around, too, and so Roman logistics were more efficient and dependable than anything that had gone before. Had Jeff Bezos been around in AD125, he would have been the consummate road hog. But in the end, this feature also turned out to be a bug, for when the tide of history began to turn against the empire, those terrific roads were used by the Goths to attack and destroy it.
In a remarkable new paper, Jack Goldsmith and Stuart Russell point out that there’s a lesson here for us. “The internet and related digital systems that the United States did so much to create,” they write, “have effectuated and symbolised US military, economic and cultural power for decades.” But this raises an uncomfortable question: in the long view of history, will these systems, like the Roman empire’s roads, come to be seen as a platform that accelerated US decline?
I think the answer to their question is yes…
This morning’s Observer column:
In 1979, Douglas Hofstadter, an American cognitive scientist, formulated a useful general rule that applies to all complex tasks. Hofstadter’s law says that “It always takes longer than you expect, even when you take into account Hofstadter’s law”. It may not have the epistemological status of Newton’s first law, but it is “good enough for government work”, as the celebrated computer scientist Roger Needham used to say.
Faced with this assertion, readers of Wired magazine, visitors to Gizmodo or followers of Rory Cellan-Jones, the BBC’s sainted technology correspondent, will retort that while Hofstadter’s law may apply to mundane activities such as building a third runway at Heathrow, it most definitely does not apply to digital technology, where miracles are routinely delivered at the speed of light…