Forty years on

Today is the 40th anniversary of the first Request for Comments (RFC) — the form devised by the ARPANET’s designers for discussing technical issues. Steve Crocker — who as a graduate student invented the idea — has written a lovely piece about it in the New York Times:

A great deal of deliberation and planning had gone into the network’s underlying technology, but no one had given a lot of thought to what we would actually do with it. So, in August 1968, a handful of graduate students and staff members from the four sites began meeting intermittently, in person, to try to figure it out. (I was lucky enough to be one of the U.C.L.A. students included in these wide-ranging discussions.) It wasn’t until the next spring that we realized we should start writing down our thoughts. We thought maybe we’d put together a few temporary, informal memos on network protocols, the rules by which computers exchange information. I offered to organize our early notes.

What was supposed to be a simple chore turned out to be a nerve-racking project. Our intent was only to encourage others to chime in, but I worried we might sound as though we were making official decisions or asserting authority. In my mind, I was inciting the wrath of some prestigious professor at some phantom East Coast establishment. I was actually losing sleep over the whole thing, and when I finally tackled my first memo, which dealt with basic communication between two computers, it was in the wee hours of the morning. I had to work in a bathroom so as not to disturb the friends I was staying with, who were all asleep.

Still fearful of sounding presumptuous, I labeled the note a “Request for Comments.” R.F.C. 1, written 40 years ago today, left many questions unanswered, and soon became obsolete. But the R.F.C.’s themselves took root and flourished. They became the formal method of publishing Internet protocol standards, and today there are more than 5,000, all readily available online.

But we started writing these notes before we had e-mail, or even before the network was really working, so we wrote our visions for the future on paper and sent them around via the postal service. We’d mail each research group one printout and they’d have to photocopy more themselves.

The early R.F.C.’s ranged from grand visions to mundane details, although the latter quickly became the most common. Less important than the content of those first documents was that they were available free of charge and anyone could write one. Instead of authority-based decision-making, we relied on a process we called “rough consensus and running code.” Everyone was welcome to propose ideas, and if enough people liked it and used it, the design became a standard…

The RFC archive is here.

Eye-Fi

Hmmm… If I’d come on this on April 1 I’d have thought it was a good spoof. But it seems to be real.

The Eye-Fi Card stores photos & videos like a traditional memory card, and fits in most cameras. When you turn your camera on within range of a configured Wi-Fi network, it wirelessly transfers your photos & videos. Better yet: you can automatically have them sent to your computer (PC or Mac), or to your favorite photo sharing website – or both!

As far as I can see, the Eye-Fi to Flickr link only works in the US. (It’s a bit like the Amazon Kindle in that respect.) But it still looks like a really neat idea.

Thanks to Rory Cellan-Jones for the original link.

The Wikipedia ‘debate’: time to move on

This morning’s Observer column.

Unwillingness to entertain the notion that Wikipedia might fly is a symptom of what the legal scholar James Boyle calls ‘cultural agoraphobia’ – our prevailing fear of openness. Like all phobias it’s irrational, so is immune to evidence. I’m tired of listening to brain-dead dinner-party complaints about how ‘inaccurate’ Wikipedia is. I’m bored to death by endless accounts of slurs or libels suffered by a few famous individuals at the hands of Wikipedia vandals. And if anyone ever claims again that all the entries in Wikipedia are written by clueless amateurs, I will hit them over the head with a list of experts who curate material in their specialisms. And remind them of Professor Peter Murray-Rust’s comment to a conference in Oxford: “The bit of Wikipedia that I wrote is correct.”

Of course Wikipedia has flaws, of course it has errors: show me something that doesn’t. Of course it suffers from vandalism and nutters who contribute stuff to it. But instead of complaining about errors, academics ought to be in there fixing them. Wikipedia is one of the greatest inventions we have. Isn’t it time we accepted it? Microsoft has.

The downside of URL shorteners

Very thoughtful post by Joshua Schachter.

The worst problem is that shortening services add another layer of indirection to an already creaky system. A regular hyperlink implicates a browser, its DNS resolver, the publisher’s DNS server, and the publisher’s website. With a shortening service, you’re adding something that acts like a third DNS resolver, except one that is assembled out of unvetted PHP and MySQL, without the benevolent oversight of luminaries like Dan Kaminsky and St. Postel.

There are three other parties in the ecosystem of a link: the publisher (the site the link points to), the transit (places where that shortened link is used, such as Twitter or Typepad), and the clicker (the person who ultimately follows the shortened links). Each is harmed to some extent by URL shortening.

The transit’s main problem with these systems is that a link that used to be transparent is now opaque and requires a lookup operation. From my past experience with Delicious, I know that a huge proportion of shortened links are just a disguise for spam, so examining the expanded URL is a necessary step. The transit has to hit every shortened link to get at the underlying link and hope that it doesn’t get throttled. It also has to log and store every redirect it ever sees.
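To make the transit’s burden concrete, here is a minimal sketch of that lookup operation: following the redirect chain behind a shortened link so the real destination can be vetted for spam and logged. It is written in Python against the third-party requests library; the function name and the example link are my own illustrations, not anything from Schachter’s post.

```python
# Sketch: the lookup a transit service (e.g. Twitter or Typepad) must do for
# every shortened link it sees. Assumes the third-party `requests` library.
from urllib.parse import urljoin

import requests

def resolve_short_url(short_url, max_hops=10, timeout=5):
    """Follow the redirect chain behind a shortened URL and return
    (final_url, chain), so the destination can be vetted and the
    mapping logged."""
    chain = [short_url]
    url = short_url
    for _ in range(max_hops):
        # HEAD rather than GET: the transit only needs the Location header.
        resp = requests.head(url, allow_redirects=False, timeout=timeout)
        location = resp.headers.get("Location")
        if resp.status_code not in (301, 302, 303, 307, 308) or not location:
            return url, chain  # no further redirect: this is the real target
        url = urljoin(url, location)  # Location may be a relative URL
        chain.append(url)
    raise RuntimeError("too many redirects: possible loop or abuse")

# Usage (hypothetical short link):
# expanded, hops = resolve_short_url("http://bit.ly/example")
```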

The publisher’s problems are milder. It’s possible that the redirection step steals search juice — I don’t know how search engines handle these kinds of redirects. It certainly makes it harder to track down links to the published site if the publisher ever needs to reach their authors. And the publisher may lose information about the source of its traffic.

But the biggest burden falls on the clicker, the person who follows the links. The extra layer of indirection slows down browsing with additional DNS lookups and server hits. A new and potentially unreliable middleman now sits between the link and its destination. And the long-term archivability of the hyperlink now depends on the health of a third party…
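The slowdown the clicker pays is easy to observe: every shortened link adds at least one extra DNS lookup and HTTP round trip before the real page even starts loading. Here is a rough measurement sketch, again in Python with requests; both URLs are hypothetical placeholders, and a real comparison would average over many runs to smooth out caching and network variance.

```python
# Rough sketch of the extra hop a shortener costs the clicker.
# Assumes `requests`; both URLs are hypothetical placeholders.
import time

import requests

def time_fetch(url):
    """Time a full fetch, following any redirects along the way."""
    start = time.monotonic()
    requests.get(url, timeout=10)  # follows redirects by default
    return time.monotonic() - start

direct = time_fetch("http://example.com/article")
shortened = time_fetch("http://short.example/abc")  # same article, one hop more
print(f"overhead added by the shortener: {shortened - direct:.3f}s")
```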

I hadn’t thought of this, and indeed have been cheerfully using bit.ly without thinking about the consequences. And then I came on this perceptive post by Om Malik on the business model underpinning bit.ly:

Yesterday, New York-based startup incubator Betaworks raised $2 million in funding for its URL-shortener project, Bit.ly, and spun it out as an independent company. The funding raised some eyebrows, with some speculating if Bit.ly, one of the dozens of link-shortening services, was worth a rumored $8 million. I fall in the camp of those who think Bit.ly is worth the money.

Here’s why: The most important aspect of Bit.ly is not that it can shorten URLs. Instead its real prowess lies in its ability to track the click-performance of those URLs, and conversations around those links. It doesn’t matter where those URLs are embedded — Facebook, Twitter, blogs, email, instant messages or SMS messages — a click is a click and Bit.ly counts it, in real time. Last week alone, nearly 25 million of these Bit.ly URLs were clicked.

By clicking on these URLs, people are essentially voting on the stories behind these links. Now if Bit.ly collated all these links and ranked them by popularity, you would have a visualization of the top stories across the web. In other words, it would be a highly distributed form of Digg.com, the social news service that depends on people submitting and voting for stories from across the web. Don’t be surprised if Bit.ly formally launches such an offering real soon. This will help them monetize their service via advertising…
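Malik’s “distributed Digg” idea reduces to a frequency count: treat each click on a shortened link as a vote for the underlying story, and rank stories by votes. A minimal sketch, assuming click events arrive as expanded target URLs (the event format is my invention, not Bit.ly’s actual schema):

```python
# Sketch of ranking stories by click-votes, as Malik describes.
# The click-event format (one expanded URL per click) is hypothetical.
from collections import Counter

def top_stories(click_events, n=10):
    """Return the n most-clicked target URLs with their click counts."""
    return Counter(click_events).most_common(n)

# Usage with made-up events:
clicks = [
    "http://example.com/story-a",
    "http://example.com/story-b",
    "http://example.com/story-a",
]
for url, count in top_stories(clicks, n=2):
    print(count, url)
```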

Microsoft Encarta succumbs to Wikipedia

Well, it was a long time coming, but here it is.

Do you remember what came in between printed encyclopedias and Wikipedia? For many, the answer is Microsoft Encarta, which was distributed starting in the 90s via CD-ROM and more recently on the Web via MSN. Today, Microsoft announced that it’s discontinuing Encarta later this year, offering symbolic confirmation that Wikipedia is the world’s definitive reference guide.

Microsoft acknowledges as much in an FAQ they’ve set up explaining the move and what existing Encarta customers can expect. The company writes, “Encarta has been a popular product around the world for many years. However, the category of traditional encyclopedias and reference material has changed. People today seek and consume information in considerably different ways than in years past.”

That’s quite the understatement. As PaidContent points out, the crowd-edited Wikipedia boasts 2.7 million entries in English versus just 42,000 for Encarta. Need further confirmation of why Wikipedia is simply a better model? News of Encarta’s discontinuation has already reached the product’s entry on Wikipedia.

Increasing online giving by intelligent design

I’m not an unqualified admirer of Jakob Nielsen’s work, but this Alertbox post makes a lot of sense. What he was trying to find out is what affects online donors’ decisions.

We asked participants what information they want to see on non-profit websites before they decide whether to donate. Their answers fell into 4 broad categories, 2 of which were the most heavily requested:

  • The organization’s mission, goals, objectives, and work.
  • How it uses donations and contributions.

That is: What are you trying to achieve, and how will you spend my money?

Sadly, only 43% of the sites we studied answered the first question on their homepage. Further, only a ridiculously low 4% answered the second question on the homepage. Although organizations typically provided these answers somewhere within the site, users often had problems finding this crucial information.

As we’ve long known, what people say they want is one thing. How they actually behave when they’re on websites is another. Of the two, we put more credence in the latter. We therefore analyzed users’ decision-making processes as they decided which organizations to support.

In choosing between 2 charities, people referred to 5 categories of information. However, an organization’s mission, goals, objectives, and work was by far the most important. Indeed, it was 3.6 times as important as the runner-up issue, which was the organization’s presence in the user’s own community.

(Information about how organizations used donations did impact decision-making, but it was far down the list relative to its second-place ranking among things that people claimed they’d be looking for.)

People want to know what a non-profit stands for, because they want to contribute to causes that share their ideals and values. Most people probably agree that, for example, it’s good to help impoverished residents of developing countries or patients suffering from nasty diseases. Many organizations claim to do these very things. The question in a potential donor’s mind is how the organization proposes to help. Often, sites we studied failed to answer this question clearly — and lost out on donations as a result…

A new Internet Typology

The Pew Internet and American Life project has come up with a new typology of technology users (and avoiders). Highlights:

  • Digital Collaborators: 8% of adults use information gadgets to collaborate with others and share their creativity with the world.
  • Ambivalent Networkers: 7% of adults heavily use mobile devices to connect with others and entertain themselves, but they don’t always like it when the cell phone rings.
  • Media Movers: 7% of adults use online access to seek out information nuggets, and these nuggets make their way through these users’ social networks via desktop and mobile access.
  • Roving Nodes: 9% of adults use their mobile devices to connect with others and share information with them.
  • Mobile Newbies: 8% of adults lack robust access to the internet, but they like their cell phones.
  • Desktop Veterans: 13% of adults are dedicated to wireline access to digital information, and like how it opens up the pipeline to information for them.
  • Drifting Surfers: 14% of adults are light users — despite having a lot of ICTs — and say they could do without modern gadgets and services.
  • Information Encumbered: 10% of adults feel overwhelmed by information and inadequate to troubleshoot modern ICTs.
  • The Tech Indifferent: 10% of adults are unenthusiastic about the internet and cell phone.
  • Off the Network: 14% of adults are neither cell phone users nor internet users.
Pew provide a quiz designed to help you assess where you fit in this classification system.

(Footnote: I’m a ‘digital collaborator’, apparently.)

What’s the point of technology, really?

Mark Anderson in thoughtful mood.

What is the fascination with technology today? Who needs another megahertz of this, or to shrink that a bit more, or to cut another ten percent off the production cost? Why would anyone care?

Oh, but this screen does this, and that drive is a little faster, and this flash chip is cheaper this year, and IBM is said to be monopolizing mainframes while Rackspace commoditizes servers. Really?

Without application to human needs, the thrill quickly wears off. Yes, when it can do something really meaningful, like provide food to a village, or health care, or clean water, then technology really is magic. But, after all the stories of this kind, how often does this really happen? Like the short-queens of the hedge fund crowd, aren’t we really, ultimately, just mostly messing with each other, on someone else’s nickel? Is it a game? And, if so, is it a game with a hidden cost as large as the hedge queens’?

What can we do to make technology, or anything, meaningful? Maybe we need to re-allocate our teams, and put more emphasis on revolution, on real science, and less on evolution, or incremental change. Will technology be the answer to the world’s energy problems? Or will we discover that Clean Coal is really nothing but a PR ploy? How many of us are working on real problems, and how many on improving the next MP3 player? Can we tell the difference?

A sobering question for those of us who gambol delightedly in fields of gadgetry. I also wonder what the gender dimension of this is: although there are some very distinguished women in this space (I think, for example, of Karlin Lillington and Laura James and the late, great Karen Spärck Jones) it seems a predominantly male playground. And I’m reminded of a lovely story Dave Barry told years ago when the Humvee was first released in civilian form and he was given one for a day. He relates how he proudly took his wife for a drive.

“So what can it do?” she asked.
“Lots of cool stuff,” replied Dave.
“Like what?”
“Well,” said Dave, “I can inflate or deflate the tyres while we’re driving along.”
“Why?” asked his wife.

He had no answer. I suspect that lots of us are really in that position. The stuff is endlessly fascinating, sure. But does it really matter? Isn’t much of it just leading-edge uselessness?

Zittrain unpacked

Every so often, a group of my Open University colleagues gathers to discuss a book that one of us regards as important or interesting. Last week it was my turn to talk about Jonathan Zittrain’s The Future of the Internet — and how to stop it. The mp3 of the talk is here. The sound quality is variable, I’m afraid, and I only had one microphone, so it’s not Radio 4 quality. It runs for about an hour and includes a delicious excerpt from James Boyle’s recent RSA lecture.

If you’re listening to it, you might find the slide below helpful.

Alternatively, you might find it a cure for insomnia.

And if you’re podcast-averse, Doug Clow did an excellent live blog of the talk, for which many thanks to him.