Ducking out of the rat-race

Seen in a NYTimes piece by productivity guru David Allen.

I have found that most professionals take action based on whatever is the latest and loudest in their universe, as opposed to making a conscious, intelligent choice springing from the model I’ve described. This day-to-day, minute-to-minute arena of “reaction versus pro-action” is where the scales tip to “productive” or “unproductive.”

ONE possible path to that feeling of control is to return to a make-it-or-move-it existence. Find work that requires little if any thinking, but merely reacting and responding to what presents itself. That’s a real option: I once met a senior vice president in a global pharmaceutical company who, after taking an early retirement package, became a duck at Disney World. In such a job, it was probably much easier to have a good day at work, and then leave it behind.

Big Ideas in Computer Science

Went to an excellent lecture this morning by Chris Bishop of Microsoft Research. It was part of the wonderful Cambridge Science Festival (of which I’m a Patron) and his topic — “Great Ideas in Computer Science” — is very germane to something I’m writing at the moment.

It’s always intriguing to see what other people regard as key ideas. (I’ve had my own go at this recently in relation to the Internet). So I was agog to see Chris’s list.

They are:

1. Photolithography — the technology that powers Moore’s Law, enabling us to take for granted massive increases in computing power.

2. Algorithms.

3. Cryptography.

4. Machine learning.

5. Biology as computation.

As you’d expect from such an accomplished lecturer (he did the 2008 Royal Institution Christmas Lectures) it was a presentation beautifully tailored to its audience (keen children and their even-keener parents). He illustrated the idea of algorithms by getting five kids to act out a sorting algorithm on stage. For machine learning he had a very nice exposition of how ‘recommender’ engines (eg on Amazon) work. And he had some amazing animated videos of simulations of DNA replication in action.
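
(For the curious: one common way such recommender engines can work is item-to-item similarity — “people who bought X also bought Y”. The sketch below is a minimal illustration of that idea in Python, with made-up purchase data; it is not necessarily the approach Bishop described.)

```python
from math import sqrt

# Toy purchase data (entirely made up): customer -> set of items bought.
purchases = {
    "alice": {"book_a", "book_b", "book_c"},
    "bob":   {"book_b", "book_c", "book_d"},
    "carol": {"book_a", "book_c"},
}

def similarity(item_x, item_y):
    """How often two items are bought by the same people (cosine similarity)."""
    buyers_x = {c for c, items in purchases.items() if item_x in items}
    buyers_y = {c for c, items in purchases.items() if item_y in items}
    if not buyers_x or not buyers_y:
        return 0.0
    return len(buyers_x & buyers_y) / sqrt(len(buyers_x) * len(buyers_y))

def recommend(customer):
    """Rank items the customer hasn't bought by similarity to items they have."""
    owned = purchases[customer]
    candidates = {i for items in purchases.values() for i in items} - owned
    scores = {c: sum(similarity(c, o) for o in owned) for c in candidates}
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("carol"))  # ['book_b', 'book_d'] -- book_b co-occurs most with carol's purchases
```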

But for me the best bits of the lecture were:

  • The way he introduced public-key cryptography. I was wondering how he was going to explain all the stuff about multiplying prime numbers etc., but instead he bypassed it brilliantly by doing it with colours. (It’s easy to mix yellow and blue to get green, but ferociously difficult to infer the precise shades of yellow and blue used from the green.) There’s a toy sketch of this idea at the end of the post.
  • His account of how the XBox Kinect makes accurate spatial inferences from infra-red cameras and parallax (and the role of machine learning in that).
  • His startling assertion that DNA is actually a Turing-complete machine — and the inference that what we currently call “genetic engineering” is really just genetic tinkering and that a future based on computational biology is opening up.
  • I came away brooding about whether the term Computer Science might not be a bit of a misnomer. We use it when trying to persuade the government (and the public) that computing is an academic subject rather than a mere skill (like learning to use Microsoft Excel) because the word ‘science’ connotes difficulty, abstraction and law-like generalisations which are dependable, empirically-supported and enduring. But as I walked back to my car I remembered a conversation I had with the late, great Roger Needham in which he argued that what we call “computer science” actually involves an awful lot of “technology” as well as “science”.

    And, in a way, Chris Bishop’s lecture implicitly acknowledged that. Photolithography, for example, is a technology (though one based on the physics of light). Same goes for machine learning. Cryptography is mostly applied mathematics. So we’re left with the question: what is Computer Science? The Oxford CS department says that it’s “about learning and understanding the mathematical, scientific and engineering principles underlying every kind of computing system, from mobile phones and the internet, via systems that interpret natural language, to the supercomputers that forecast tomorrow’s weather or simulate the effects of disease on the human heart.” To be a successful Computer Science student, it continues, “you will need a curiosity about how things work, and the ability to use mathematics to solve problems creatively”. Cambridge describes CS as “a fast-moving field that brings together many disciplines, including mathematics, programming, engineering, the natural sciences, psychology and linguistics”. Carnegie-Mellon says that “Computer science can organize information, build smaller, faster, more secure systems, create channels of communication and delve deep into complex data sets” and goes on to link it to something called “Computational Thinking” — defined as “how computer scientists determine what can be computed and how to compute it. By looking at the world through the lenses of algorithms and abstraction”.

    CMU makes a big deal of this Computational Thinking idea. (The phrase comes from a much-cited editorial in Communications of the ACM in 2006 by Jeannette Wing, a professor at CMU).

    Computational thinking is a way of solving problems, designing systems, and understanding human behavior that draws on concepts fundamental to computer science. To flourish in today’s world, computational thinking has to be a fundamental part of the way people think and understand the world.

    Computational thinking means creating and making use of different levels of abstraction, to understand and solve problems more effectively.

    Computational thinking means thinking algorithmically and with the ability to apply mathematical concepts such as induction to develop more efficient, fair, and secure solutions.

    Computational thinking means understanding the consequences of scale, not only for reasons of efficiency but also for economic and social reasons.

    Hmmm… I’m not sure how I’d explain that to Michael Gove.

    Postscript: Thanks to Miranda Gomperts, who was also at the lecture and provided me with the link to the DNA animations.
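
Footnote on the colour-mixing trick mentioned above: the analogy is the standard way of illustrating Diffie–Hellman key exchange, where mixing colours stands for modular exponentiation. Here is a toy sketch in Python — the numbers are tiny and purely illustrative (real systems use parameters hundreds of digits long), and this is my gloss on the idea rather than a transcript of how Bishop presented it.

```python
# Toy Diffie-Hellman exchange: "mixing colours" = modular exponentiation.
# Tiny, insecure numbers chosen purely for illustration.
p = 23   # publicly agreed prime (the "common base colour")
g = 5    # publicly agreed generator

alice_secret = 6    # Alice's private "shade" -- never transmitted
bob_secret = 15     # Bob's private "shade" -- never transmitted

# Each party mixes their secret with the public base and publishes the result.
alice_public = pow(g, alice_secret, p)   # 8
bob_public = pow(g, bob_secret, p)       # 19

# Each then mixes the other's public value with their own secret...
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)

# ...and both arrive at the same shared secret. An eavesdropper who saw only
# p, g and the two public values would have to "un-mix" them (solve a discrete
# logarithm), which is the ferociously difficult part.
assert alice_shared == bob_shared
print(alice_shared)   # 2
```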

In the world of Big Data the man with only Excel is blind

This morning’s Observer column.

    One of the most famous quotes in the history of the computing industry is the assertion that “640KB ought to be enough for anybody”, allegedly made by Bill Gates at a computer trade show in 1981 just after the launch of the IBM PC. The context was that the original PC, built around the Intel 8088 processor, could only accommodate 640 kilobytes of Random Access Memory (RAM) and people were questioning whether that limit wasn’t a mite restrictive.

    Gates has always denied making the statement and I believe him; he’s much too smart to make a mistake like that. He would have known that just as you can never be too rich or too thin, you can also never have too much RAM. The computer on which I’m writing this has four gigabytes (GB) of it, which is roughly 6,000 times the working memory of the original PC, but even then it sometimes struggles with the software it has to run.

    But even Gates could not have foreseen the amount of data computers would be called upon to handle within three decades…
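
For what it’s worth, the arithmetic behind that “roughly 6,000 times” is a simple back-of-the-envelope check (using binary prefixes; the true ratio is nearer 6,500):

```python
# 4 GB of RAM versus the original PC's 640 KB, both expressed in kilobytes.
original_pc_kb = 640
modern_kb = 4 * 1024 * 1024        # 4 GB = 4,194,304 KB
print(modern_kb / original_pc_kb)  # 6553.6 -- i.e. roughly six and a half thousand times
```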

Memories to keep

Nice blog post by Terry Teachout about throwing stuff away.

     So it was with no small amount of surprise that I found myself confronted the other day with three grocery sacks full of miscellaneous papers retrieved from an old desk I’d left behind in my previous apartment. I’d completely forgotten the contents of that desk, and though I didn’t expect them to include anything important, I thought I ought to give them a quick sifting just to be sure.

    I threw out most of what I found. I saw no reason, for instance, to hang onto a two-inch-thick stack of photocopied pieces I’d written for the New York Daily News during my tenure as its classical music and dance critic, though I did shake my head at the thought of the hundreds of thousands of words I’ve published in the twenty-seven years since my very first concert review appeared in the Kansas City Star. Middle age has its cold consolations, one of which is the knowledge that you’re not nearly as important as you thought you were, or hoped someday to become. I used to save copies of everything I wrote, and for a few years I even kept an up-to-date bibliography of my magazine pieces! Now I marvel at the vanity that once led me to think my every printed utterance worthy of preservation.

    Only one of those pieces held my attention for more than the time it took me to pitch it in the nearest wastebasket: a copy of the first piece I wrote for Commentary, a review of James Baldwin’s The Price of the Ticket published in December of 1985, six months after I moved to New York. I remember how hard I worked on it, and how proud I was to have “cracked” Commentary. Today it sounds hopelessly stiff and earnest, which is why I left it out of the Teachout Reader. What on earth could have possessed Norman Podhoretz to find a place for that immature effort in his book-review section? He told me the first draft was too “knowing,” the best piece of advice any editor has ever given me, and I revised it nervously, hoping to pass muster, never imagining that I would write hundreds more pieces for Commentary, eventually becoming its music critic. Would it have pleased me to know these things back in 1985? Or might it have dulled the tang of my first sale?

    I didn’t expect to find a Metropolitan Opera program among my forgotten papers, though no sooner did I look at it than I knew why I’d saved it. I went to the Metropolitan Opera House on the evening of January 5, 1996, fully expecting to review the company premiere of Leos Janacek’s The Makropulos Case for the Daily News. Instead, I ended up writing a front-page story about how one of the singers in the production died on stage, a minute and a half into the first act. The opening scene of The Makropulos Case is set in a law office where Vitek, a clerk, is looking up the files for a suit that has been dragging on for close to a century. To symbolize the tortuous snarl of Gregor v. Prus, designer Anthony Ward turned the entire back wall of the set into a forty-foot-high filing cabinet containing hundreds of drawers. Enter Vitek, played by a character tenor named Richard Versalle. As the curtain rose, he made his entrance, climbed up a tall ladder and pulled a file out of one of the drawers. “Too bad you can only live so long,” he sang in Czech. Then he let go of the ladder and fell mutely to the stage, landing on his back with a terrible crash.

    Three thousand people gasped. David Robertson, the conductor, waved the orchestra to a halt and shouted, “Are you all right, Richard?” Versalle didn’t speak or move, and the curtain was quickly lowered. I sat frozen in my aisle seat, stunned by what I had seen. Then I pulled myself together and ran to the press room to find out what had happened. A company spokesman told the rapidly growing band of critics and hangers-on what little he knew: Versalle had been rushed by ambulance to the nearest hospital. We started firing questions at him. How old was Versalle? When did he make his Met debut? Did he have a wife and children? I scribbled the answers (63, 1978, yes) on my program and pushed through the crowd to the nearest pay phone, where I dropped a quarter in the slot, dialed the number of the Daily News city desk, and spoke three words that had never before crossed my lips other than in jest: “Get me rewrite.” Eight years later, I leafed through the program of that unfinished performance, looking at my barely decipherable notes. As souvenirs go, it was a good one, and I decided to keep it.

     

What kind of capitalism?

Interesting column in the New York Times by Tom Friedman.

     David Rothkopf, the chief executive and editor-at-large of Foreign Policy magazine, has a smart new book out, entitled “Power, Inc.,” about the epic rivalry between big business and government that captures, in many ways, what the 2012 election should be about — and it’s not “contraception,” although the word does begin with a “C.” It’s the future of “capitalism” and whether it will be shaped in America or somewhere else.

    Rothkopf argues that while for much of the 20th century the great struggle on the world stage was between capitalism and communism, which capitalism won, the great struggle in the 21st century will be about which version of capitalism will win, which one will prove the most effective at generating growth and become the most emulated.

    “Will it be Beijing’s capitalism with Chinese characteristics?” asks Rothkopf. “Will it be the democratic development capitalism of India and Brazil? Will it be entrepreneurial small-state capitalism of Singapore and Israel? Will it be European safety-net capitalism? Or will it be American capitalism?” It is an intriguing question, which raises another: What is American capitalism today, and what will enable it to thrive in the 21st century?

    Rothkopf’s view, which I share, is that the thing others have most admired and tried to emulate about American capitalism is precisely what we’ve been ignoring: America’s success for over 200 years was largely due to its healthy, balanced public-private partnership — where government provided the institutions, rules, safety nets, education, research and infrastructure to empower the private sector to innovate, invest and take the risks that promote growth and jobs.

    When the private sector overwhelms the public, you get the 2008 subprime crisis. When the public overwhelms the private, you get choking regulations. You need a balance, which is why we have to get past this cartoonish “argument that the choice is either all government or all the market,” argues Rothkopf. The lesson of history, he adds, is that capitalism thrives best when you have this balance, and “when you lose the balance, you get in trouble.”

Must have a look at Rothkopf’s book.