Zuck’s 2012 personal income tax bill: $1.5 billion

Spare a thought for the poor wee laddie. This from The Register.

If all goes according to plan, Facebook founder, chairman, and CEO Mark Zuckerberg’s share of the profit in his company’s upcoming initial public offering will result in him facing a tax bill of around $1.5bn for 2012.

What’s more, the Financial Times reports, that astronomical bill could increase if the IPO is more successful than many analysts believe.

Currently, the FT says, the stock is selling for around $40 per share in private secondary markets – a level that would net Lucky Zucky around $4.8bn at the IPO. Should the shares rise to a level that would put Facebook’s valuation at around $100bn – far from impossible in this wacky, bubbly world – the 27-year-old soon-to-be-squillionaire could net $6bn.

The company’s Form S-1 SEC filing earlier this week, which revealed the IPO plans, notes that Zuckerberg will use “substantially all of the net proceeds” that he will receive upon the sale of an as-yet undisclosed number of shares in the IPO to “satisfy taxes that he will incur upon his exercise of an outstanding stock option to purchase 120,000,000 shares of our Class B common stock.”

It’s nice to know that Zuckerberg’s wealth-management team is thinking ahead.

Those 120 million fully vested options that Zuckerberg plans to exercise, by the way, will cost him 6¢ per share. Remember that pittance when you read news of Facebook’s stock price on IPO day, which has yet to be scheduled.

As a result of the convoluted morass that is the US tax code, the profits that Zuckerberg will realize from exercising his options will be taxed as regular income, and not as capital gains…
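For the curious, here’s the back-of-envelope arithmetic behind those headline numbers — a rough sketch using the article’s own figures. The $40 share price is the FT’s secondary-market estimate rather than anything in the S-1, and the “implied rate” at the end is simply what the $1.5bn estimate works out to against that spread; it’s illustrative, not a real tax calculation.

```python
# Back-of-envelope sketch of the figures in the Register/FT piece.
# The $40 share price is the FT's secondary-market figure (an assumption,
# not an S-1 number); the share count and 6-cent strike are from the S-1.

SHARES = 120_000_000     # Class B options Zuckerberg plans to exercise
STRIKE = 0.06            # exercise price per share
market_price = 40.00     # assumed value per share at exercise

# On exercise, the spread between market value and strike price is taxed
# as ordinary income rather than capital gains.
spread = SHARES * (market_price - STRIKE)
print(f"Taxable spread: ${spread / 1e9:.2f}bn")   # ≈ $4.79bn

# What the FT's $1.5bn estimate implies as an effective rate on that spread.
implied_rate = 1.5e9 / spread
print(f"Implied effective rate: {implied_rate:.0%}")  # ≈ 31%
```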

Real hero of the Facebook story isn’t Zuckerberg

My take on the Facebook story — from yesterday’s Observer.

The number to watch is not the putative $100bn valuation but the 845 million users that Facebook now claims to have. The observation that if Facebook were a country then it would be the third most populous on the planet has become a cliche, but underpinning it is an intriguing question: how did an idea cooked up in a Harvard dorm become so powerful?

Thanks to a compelling movie, The Social Network, we think we know the story. A ferociously gifted Harvard sophomore named Zuckerberg has difficulties with women and vents his frustration by creating an offensive web application that invites users to compare pairs of female students and indicate which is “hottest”. He puts this up on the Harvard network where it gets him into trouble with the authorities. Then he lifts an idea from a pair of nice-but-dim Wasp contemporaries who need a programmer and, in a frenzied burst of inspired hacking, implements the idea in computer code, thereby creating an online version of the printed “facebooks” common to elite US universities. This he then launches on an unsuspecting world. The Wasps sue him but lose (though get a settlement). Zuckerberg goes on to become Master of the Universe. Cue music, fade to black.

It’s all true, sort of, but the dramatic imperatives of the narrative obscure the really significant bit of the story. So let’s rewind…

TCP/IP and the dim future of universities

This week’s Observer column.

Once upon a time, a very long time ago, in 1995 to be precise, a scholar named Eli Noam published an article in the prestigious journal Science under the title “Electronics and the Dim Future of the University”. In it, Professor Noam argued that the basic model of a university – which had been stable for hundreds of years – would be threatened by networked communications technologies.

Under the classical model, universities were institutions that created, stored and disseminated knowledge. If students or scholars wished to access that knowledge, they had to come to the university. But, Noam argued, the internet would threaten that model by raising the question memorably posed by Howard Rheingold in the 1980s: “Where is the Library of Congress when it’s on my desktop?” If all the world’s stored knowledge can be accessed from any networked device, and if the teaching materials and lectures of the best scholars are likewise available online, why should students pay fees and incur debts to live in cramped accommodation for three years? What would be the USP of the traditional university once its monopolies on storage and dissemination have eroded?

If that was a good question in 1995, it’s an even better one today…

News you can’t live without

From today’s NYTimes.

For barristers in 18th-century London, it was shoulder-grazing wigs. For the Mad Men of 1950s New York, it was briefcases and fedoras. For the glass-ceiling-shattering women of the 1980s, it was shoulder pads.

And for today’s tech entrepreneurs in high-flying Silicon Valley, it is flamboyantly colored, audaciously patterned socks.

In a land where the uniform — jeans, hoodies and flip-flops — is purposefully nonchalant, and where no one would be caught dead in a tie, wearing flashy socks is more than an expression of your personality. It signals that you are part of the in crowd. It’s like a secret handshake for those who have arrived, and for those who want to.

“I have been in meetings where people look down and notice my socks, and there is this universal sign, almost like a gang sign, where they nod and pull up their pant leg a little to show off their socks,” said Hunter Walk, 38, a director of product management at YouTube, whose favorite pair is yellow, aqua and orange striped.

Note: the New York Times is a serious newspaper.

Democratising web streaming

Very interesting development. At the moment, webcasting is great but requires significant resources (servers and bandwidth) to do on any scale. This could put it within the reach of just about everybody. It’s not ready yet, but should be out by the summer.

Are we really “evolution’s biggest mistake”?

To Corpus Christi for a CSaP lecture by Jaan Tallinn, Chief Engineer of Skype. Since he’s the Estonian programmer behind Kazaa (formerly the scourge of the music industry) and was later a lead architect of Skype, I expect him to be talking about VoIP or some such geeky topic. He’s a big name in these circles and he plays to a packed house.

But it turns out that he doesn’t want to talk about geeky stuff and instead launches into a fascinating but wayward excursion into Kurzweil territory. He gets there via an unusual route, though: by arguing that, essentially, the human brain was evolution’s biggest mistake, because it has enabled us to divert the natural course of things with our infernal ingenuity — with potentially disastrous consequences. This is routine stuff for some audiences — for example those who share James Lovelock’s views about global warming. But it’s not CO2 emissions that bother Tallinn: it’s the ‘singularity’ that also obsesses Kurzweil. In other words, he extrapolates the increasing ‘intelligence’ and processing power of computers to the point where we will have created artificial intelligences that are smarter than us and which will have no further use for humans, save perhaps as pets. At which point I hear echoes of Bill Joy’s famous essay, “Why the Future Doesn’t Need Us”, and begin to wonder if this software wizard hasn’t, well, ventured into philosophical territory without even a rudimentary map.

But Tallinn is an entertaining speaker (and the only presenter I’ve ever seen who can actually use Prezi to good effect) so most of us temporarily suspend disbelief and stay connected. His central idea is of an “intelligence stairway” — a series of steps starting with self-replication leading to evolution leading to humans leading to tech progress leading to “artificial general intelligence” (AGI) and thence to an “intelligence explosion” which leads to the Kurzweil Singularity. Tallinn thinks (via reasoning that I can’t follow) that what follows next is “environmental catastrophe”. Is this because machines will be unconcerned about global warming, because they are capable of surviving it whereas organic life is not? Who knows? **See footnote**

The audience is intrigued but unconvinced. One attendee is sceptical to the point almost of derision: he doesn’t buy into Tallinn’s account of computational progress (which lays great stress on computers’ ability to play world-class chess), and he thinks that Tallinn’s citation of Apple’s Siri as an illustration of how far computers have come in understanding people is way overblown. Another sceptic (I think an economist) takes the line that it’s difficult to see computers being able to understand context, and so the only precaution we need to take in AI research is to make sure that they never do!

I am likewise entertained but unconvinced. But I am struck by one thought, namely that there are areas of scientific research where we do worry about a ‘stairway’ of the kind sketched by Tallinn — biotechnology and genetic engineering in particular, and also perhaps nanotechnology. Maybe we should do some thinking about what the 300 or so researchers working on AGI are actually doing? And is the reason why we don’t take the threat of AGI seriously the fact that, deep down, we simply can’t conceive of machines that are smarter than us? We have no problem envisaging scenarios in which, say, nanotechnology or genetically-modified organisms might run out of control and give rise to horrible unintended consequences. But computing machines…???

But it was an entertaining and thought-provoking lecture. On my way out through the throng of Cambridge academics and geeks engaging in the social activity quaintly known as “networking” I am suddenly struck by a vague memory from my past. I too once gave a lecture to a packed house. The audience appeared to love it and applauded loudly at the end. As I was leaving the theatre I noticed that one of my academic colleagues had been lounging at the back. “Very good lecture”, he said. “Just the right number of half-truths”.

**Footnote**

My colleague Anil Madhavapeddy was also there and writes:

“He [Tallinn] is falling into his own trap: any sufficiently advanced AI would maintain itself until it can find a more algorithmically efficient source of resources than earth (i.e., gas giants! space!) and would not work on human timescales (what’s the rush?).

On the other hand, one can imagine very easily a computing virus such as Stuxnet II wiping out life on earth, due to it causing some cyberphysical system to go ballistic and trigger something off by mistake. Not advanced AI, just plain old insecure computer systems, and this does need fixing urgently, and the AGI topic is an unfortunate distraction.”