Forty years on

Today is the 40th anniversary of the first Request For Comment (RFC) — the form devised by the ARPANET’s designers for discussing technical issues. Steve Crocker — who as a graduate student invented the idea — has written a lovely piece about it in the New York Times:

A great deal of deliberation and planning had gone into the network’s underlying technology, but no one had given a lot of thought to what we would actually do with it. So, in August 1968, a handful of graduate students and staff members from the four sites began meeting intermittently, in person, to try to figure it out. (I was lucky enough to be one of the U.C.L.A. students included in these wide-ranging discussions.) It wasn’t until the next spring that we realized we should start writing down our thoughts. We thought maybe we’d put together a few temporary, informal memos on network protocols, the rules by which computers exchange information. I offered to organize our early notes.

What was supposed to be a simple chore turned out to be a nerve-racking project. Our intent was only to encourage others to chime in, but I worried we might sound as though we were making official decisions or asserting authority. In my mind, I was inciting the wrath of some prestigious professor at some phantom East Coast establishment. I was actually losing sleep over the whole thing, and when I finally tackled my first memo, which dealt with basic communication between two computers, it was in the wee hours of the morning. I had to work in a bathroom so as not to disturb the friends I was staying with, who were all asleep.

Still fearful of sounding presumptuous, I labeled the note a “Request for Comments.” R.F.C. 1, written 40 years ago today, left many questions unanswered, and soon became obsolete. But the R.F.C.’s themselves took root and flourished. They became the formal method of publishing Internet protocol standards, and today there are more than 5,000, all readily available online.

But we started writing these notes before we had e-mail, or even before the network was really working, so we wrote our visions for the future on paper and sent them around via the postal service. We’d mail each research group one printout and they’d have to photocopy more themselves.

The early R.F.C.’s ranged from grand visions to mundane details, although the latter quickly became the most common. Less important than the content of those first documents was that they were available free of charge and anyone could write one. Instead of authority-based decision-making, we relied on a process we called “rough consensus and running code.” Everyone was welcome to propose ideas, and if enough people liked it and used it, the design became a standard…

The RFC archive is here.

Relationship Symmetry in Social Networks

Interesting analysis by Joshua Porter of the difference between Twitter and Facebook.

In general, there are two ways to model human relationships in software. An “asymmetric” model is how Twitter currently works. You can “follow” someone else without them following you back. It’s a one-way relationship that may or may not be mutual.

Relationship Symmetry in the Facebook model

Facebook, on the other hand, has always used a “symmetric” model, where each time you add someone as a friend they have to add you as a friend as well. This is a two-way relationship, and it is required to have any relationship at all. So as a Facebook user there is always a 1-1 relationship among your friends. Everyone who you have claimed as a friend has also claimed you as a friend.

The post goes on to cite Andrew Chen’s point that Twitter allows four types of relationships, while Facebook only allows for two. The two relationships of Facebook are “Friend” and “Not Friend”. The four relationships of Twitter are:

1. People who follow you, but you don’t follow back
2. People who don’t follow you, but you follow them
3. You both follow each other (Friends!)
4. Neither of you follow each other

As Andrew points out, an asymmetric model allows for more types of relationships. I think the benefits go further than that. I think that the asymmetric model better mimics how real attention works…and how it has always worked.
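The two models are easy to sketch in code. Here’s a minimal illustration (the function names and labels are mine, not any real Twitter or Facebook API): in the asymmetric model a relationship is described by two independent booleans, giving four states, while in the symmetric model a single mutual confirmation gives only two.

```python
def twitter_relationship(you_follow: bool, follows_you: bool) -> str:
    """Asymmetric model: two independent edges, four possible states."""
    if you_follow and follows_you:
        return "mutual"      # you both follow each other
    if follows_you:
        return "follower"    # they follow you, you don't follow back
    if you_follow:
        return "following"   # you follow them, they don't follow back
    return "none"            # neither follows the other


def facebook_relationship(request_accepted: bool) -> str:
    """Symmetric model: the edge exists only if both sides confirm it."""
    return "friend" if request_accepted else "not friend"
```

So `twitter_relationship(True, False)` and `twitter_relationship(False, True)` are distinct states, whereas the symmetric model collapses everything short of mutual confirmation into “not friend”.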

Libraries: suddenly popular again — until they’re cut back in spending freezes

Somebody (I think it was Robert Darnton) made the point at the JISC ‘Libraries of the Future’ event in Oxford last week that the economic downturn is leading to a noticeable increase in the number of people using public libraries in the US. Here’s a blog post to the same effect.

Lines around the building, bodies asleep on their bags, staff looking frazzled and dazed. No, this is not the local Greyhound station, it’s the most recent iteration of your neighborhood temple of wisdom, the public library.

With resources that could prove key in getting back to work, public libraries are seeing a significant uptick in patronage at the same time they are facing funding cuts. In the last year, the New York City system has experienced a 12 percent increase in patronage and a 17 percent increase in circulation that has spiked to 30 percent in areas like the Bronx.

Herb Scher, Director of Public Relations for New York Public Libraries, said the influx in patrons can be attributed to many downturn-related factors. Some new patrons are seeking resume help. Others are borrowing DVDs because renting them has become prohibitively expensive. “We are looking to preserve as much service as we can,” Scher said in a telephone conversation.

Meanwhile, the city’s system is facing a $23.3 million cutback in June. If the measure passes, Scher says a 20 percent reduction in hours would follow.

The Digger wants to give up Googlejuice.

Funny to see the Dirty Digger and arch-libertarian Henry Porter climbing onto the same mattress, but life’s like that sometimes. History’s littered with strange alliances. Here’s Forbes.com’s take on it:

Rupert Murdoch threw down the gauntlet to Google Thursday, accusing the search giant of poaching content it doesn’t own and urging media outlets to fight back. “Should we be allowing Google to steal all our copyrights?” asked the News Corp. chief at a cable industry confab in Washington, D.C., Thursday. The answer, said Murdoch, should be, “Thanks, but no thanks.”

Google sees it differently. They send more than 300 million clicks a month to newspaper Web sites, says a Google spokesperson. The search giant is in “full compliance” with copyright laws. “We show just enough information to make the user want to read a full story–the headlines, a line or two of text and links to the story’s Web site. That’s it. For most links, if a reader wants to peruse an entire article, they have to click through to the newspaper’s Web site.”

Later in the piece Anthony Moor, deputy managing editor of the Dallas Morning News Online and a director of the Online News Association is quoted as saying:

“I wish newspapers could act together to negotiate better terms with companies like Google. Better yet, what would happen if we all turned our sites off to search engines for a week? By creating scarcity, we might finally get fair value for the work we do.”

Now that would be a really interesting experiment. If I were the Guardian and the BBC I’d be egging these guys on. It’d provide an interesting case study in how to lose 50% market share in a week or two.

UPDATE: Anthony Moor read the post and emailed me to say that Forbes’s story presented an unduly simplistic version of his opinion:

Just to clarify, I’m not one of those who think Google is the death of newspapers. Quite the contrary, I emphasized to reporter Dirk Smillie that search engines are the default home page for people using the Internet, and as such, direct a lot of traffic to us. That traffic is important. I don’t believe Google is “stealing” our content. And I was being a bit tongue-in-cheek about “turning off” to Google. We don’t matter much to Google. I was musing about what might happen if all news sites turned off for a week. What would people think? Would they survive? (Maybe.) I wasn’t suggesting we block Google from spidering our content. That wouldn’t test the “what if digital news went dark” hypothesis. In any case, none of that will fix our own broken business model.

Google organizes the Web. Something needs to do that. My concern is that they’re effectively a monopoly player in that space. Oh sure, there’s Yahoo, but who “Yahoos” information on the Web? I understand and recognize the revolutionary nature of the link economy, but I’m concerned that it’s Google which defines relevance via their algorithms. (Yes, I know that they’re leveraging what people have chosen to make relevant, but they’re still applying their own secret sauce, which is why we all game it with SEO efforts) and that puts the rest of us in a very subservient position.

I wonder if there isn’t another way in which the Web can be organized and relevance gained that reduces the influence of Google and returns some of the value that Google is reaping for the rest of us? I predict that someday there will be and all this talk of Google’s dominance will be history.

STILL LATER: At the moment, there’s a very low signal-to-noise ratio in this debate: everyone has opinions but nobody knows much, and it’d be nice to find some way of extracting some nuggets of hard, reliable knowledge on which we could all agree. An experiment in which major news sources turned off their online presence for a week or two might be useful in that context. And it might enable us to move on from the current yah-boo phase. It would enable us to assess, for example, the extent to which the blogosphere is really parasitic on the traditional news media. My view (for what it’s worth) is that the relationship is certainly symbiotic, but that the blogosphere is more free-standing than print journalists tend to assume. The experiment would shed some light on that.

802.11e?

Where ‘e’ stands for embarrassment. Further to my post about the ingenious Eye-Fi card, Bill Thompson (whom God Preserve) emailed me with this lovely story:

I was at a conference in Florida last year chatting to someone from [company] who had an eye-fi card in his digital camera and loved it. But he pointed out a potential problem… a friend of his had asked to borrow his camera, and he had forgotten to mention the wifi link, only to be somewhat surprised later that day to find pornographic images of the friend’s partner appearing on his laptop as the card had found an open wireless network and was doing its job…

Saving Thunderbird

Thoughtful article by Glyn Moody.

Email is dying. Time and again I come across comments to the effect that people have given up on their email inbox, and simply junked their messages. Increasingly, people are turning to Twitter, Facebook and LinkedIn as their messaging medium. It’s not hard to see why. These are opt-in services: you get to choose who can contact you, unlike email.

This has led to the scourge of spam, which now represents 94% of all email, according to Google’s Postini subsidiary. A classic Tragedy of the Commons has resulted, whereby a few selfish individuals exploit and ultimately destroy a resource used by all. Sadly, it looks like the battle against spam is lost; even though services like Gmail offer extremely efficient filtering in my experience, it’s a poor substitute for a messaging service that can assume that you want to see everything that is sent to you, because only people of interest are allowed to contact you.

The more Facebook and Twitter spread, the more people will be turning to these opt-in networks for their communications; email, as a result, will dwindle in importance, turning into a kind of digital wasteland inhabited mostly by those too poor, uninformed or lazy to move on, and by spamming parasites who prey on them. I don’t imagine that Thunderbird wishes to become the software of choice for either…

This makes sense. As our communications ecosystem evolves, so too should the software. From now on we will need comms clients which do everything — including email. I guess that’s where Tweetdeck et al are headed. Maybe that’s how Thunderbird should evolve?

Eye-Fi

Hmmm… If I’d come on this on April 1 I’d have thought it was a good spoof. But it seems to be real.

The Eye-Fi Card stores photos & videos like a traditional memory card, and fits in most cameras. When you turn your camera on within range of a configured Wi-Fi network, it wirelessly transfers your photos & videos. Better yet: you can automatically have them sent to your computer (PC or Mac), or to your favorite photo sharing website – or both!

As far as I can see, the Eye-Fi to Flickr link only works in the US. (It’s a bit like the Amazon Kindle in that respect.) But it still looks like a really neat idea.

Thanks to Rory Cellan-Jones for the original link.

The consolations of ignorance

It’s always agreeable to find idiots talking nonsense. But it’s depressing to find good people doing it. Henry Porter has done great work in defence of liberty in Britain, but he’s written a truly idiotic rant this morning about Google. I was particularly struck by this passage.

One of the chief casualties of the web revolution is the newspaper business, which now finds itself laden with debt (not Google’s fault) and having to give its content free to the search engine in order to survive. Newspapers can of course remove their content but then their own advertising revenues and profiles decline. In effect they are being held captive and tormented by their executioner, who has the gall to insist that the relationship is mutually beneficial. Were newspapers to combine to take on Google they would be almost certainly in breach of competition law.

Then he invokes (who else?) our old friend Thomas Jefferson:

In 1787 Thomas Jefferson wrote: “Were it left to me to decide whether we should have a government without newspapers or newspapers without a government, I should not hesitate to prefer the latter.” A moment’s thought must tell us that he is still right: newspapers are the only means of holding local hospitals, schools, councils and the police to account, and on a national level they are absolutely essential for the good functioning of democracy.

Well, up to a point, Lord Porter. I’d be all in favour of newspapers that perform that noble function. The only problem is that 95% of them haven’t performed it for decades, if ever. Mostly they operated by printing as much crap as could fit between the advertisements. When Craigslist took away the ads they were left with only the crap — for which, oddly enough, customers are reluctant to pay.

The annoying thing about Porter’s piece is that there are really good grounds (e.g. these) for being worried about Google. But they have almost nothing to do with its impact on print newspapers, which would have withered of their own accord because of the way the Internet dissolved their value chains. Google is a monopoly that will present the Obama administration with its first serious anti-trust headache. If they thought that General Motors was too big to fail, just imagine what they will face when the time comes to take on Google.