The e-book phenomenon

The Times asked me to write a piece about the e-book phenomenon, so I did. Sample:

Two factors will limit the size of the e-book market. One is that reading substantial amounts of text on a screen is a masochistic, headache-inducing experience that makes one appreciate the merits of paper: high resolution and low power consumption; great portability and infinite flexibility. And it will still function after you’ve poured a cup of coffee over it.

The other reason e-books won’t become dominant is that they usually embody tiresome “digital rights management” (copy-protection) systems. Publishers love DRM because it gives them control. Consumers hate it because it takes away time-honoured freedoms. If you buy a printed book, for example, you can resell it, lend it to a friend or donate it to the school jumble sale. But the licensing and DRM provisions on many e-books remove these freedoms. The e-book does not “belong” to you: all you have is a licence to use it in ways that have been approved by the publisher…

At the end of the piece I am described as “a commentator on the internet”, which is a bit grand. All references to the Observer have mysteriously disappeared!

SNARFing your email

Er, according to MIT’s Technology Review, Microsoft Research has released a program which prioritises the contents of your inbox depending on how close you are to the sender. The (free) download is called SNARF, for Social Network and Relationship Finder. It runs alongside Microsoft Outlook (2002 and newer versions), poring through e-mail histories and following chains of communications to ferret out the unread messages it deems most important.

SNARF measures a sender’s importance based on two key factors: the number and the frequency of messages sent and received. The program then sorts unread e-mails into three groups: messages in which the user is listed in the To or CC fields, group e-mails, and all messages received in the last week. SNARF lists messages by sender, rather than by subject line, and puts a user’s most important correspondents on top.

“We’re just counting e-mails,” one member of the development team said. “Some people might call it a brain-dead algorithm, but the messages you send someone are a pretty good proxy for how well you know people. It can be very detailed.”
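Microsoft hasn’t spelled out exactly how SNARF weights things, but the ‘just counting e-mails’ idea is easy to illustrate. Here is a toy Python sketch of that kind of sender-frequency ranking – my own illustration under that assumption, not SNARF’s actual code, and all the names in it are made up:

```python
from collections import Counter

def rank_unread(history, unread):
    """Rank unread messages by how often you correspond with each sender.

    history: list of (sender, recipient) pairs from past mail traffic.
    unread:  list of (sender, subject) pairs awaiting attention.
    A toy illustration of 'just counting e-mails', not SNARF itself.
    """
    # Count how many past messages involve each correspondent.
    traffic = Counter()
    for sender, recipient in history:
        traffic[sender] += 1
        traffic[recipient] += 1

    # Put messages from the most frequent correspondents first.
    return sorted(unread, key=lambda msg: traffic[msg[0]], reverse=True)

if __name__ == "__main__":
    past = [("alice", "me"), ("me", "alice"), ("bob", "me")]
    inbox = [("bob", "Lunch?"), ("alice", "Draft attached")]
    print(rank_unread(past, inbox))  # alice's message comes out on top
```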

Would you like fries with that download?

The New Scientist reported, and the NYT followed up on, a Disney patent application which could lead to McDonald’s Happy Meal toys being replaced with portable media players that hold Disney movies, music, games or photos. Users could add files to the devices by earning points with food purchases. The NYT says:

The plan could work something like this: A customer enters a restaurant and buys a meal, receiving the portable media player and an electronic code that authorizes a partial download of a movie, video or other media file, which can be downloaded while in the restaurant, according to a United States Patent and Trademark Office application filed by Disney. Then, with each subsequent return, the customer earns more downloadable data, eventually getting an entire movie or game.

The report also claims that McDonald’s has been kitting out its premises with wireless Internet connections since 2003, and has so far installed Wi-Fi in more than 6,200 restaurants worldwide. It charges customers for Wi-Fi usage and trades promotional coupons and prepaid cards for network access time.

I really must get out more. On second thoughts, perhaps not.

Wikipedia and QA

I’ve been following the arguments about the quality of Wikipedia entries and came across this thoughtful post by Ethan Zuckerman. Excerpt:

When I use Wikipedia to research technical topics, I generally have a positive experience, frequently finding information I would be unlikely to find in any other context, generally resolving my technical questions – “How does the GSM cellphone standard work?” – with a single search. When I use Wikipedia to obtain information that I could find in a conventional encyclopedia, I often have a terrible experience, encountering articles that are unsatisfying at best and useless at worst. Generally, these experiences result from a search where I already know a little about a topic and am looking for additional, specific information, usually when I’m researching a city or a nation to provide context for a blog entry. My current operating hypothesis? Wikipedia is a fantastic reference work for stuff that doesn’t exist in other reference works, and a lousy knock-off of existing works when they do exist.

Old media and the Net

The most interesting question is not whether Friends Reunited will save ITV, but whether ITV will destroy Friends Reunited. That depends on the extent to which Allen and his management team leave their acquisition alone.

Television people are constitutionally incapable of dealing with the web because they have been socially and professionally conditioned in the world of ‘push’ media, with its attendant control freakery and inbuilt assumptions about the passivity and stupidity of audiences. Little of their experience, and few of their skills, are useful in a ‘pull’ medium like the web, where the consumer is active, fickle and informed, and history to date suggests that if they are put in charge of internet operations they screw up.

My guess is that Allen & Co will not be able to resist the temptation to meddle with their new toy…

The $100 laptop

This morning’s Observer column:

There is something about Professor Nicholas Negroponte which reminds me of the Old Testament. Genesis 27:11, to be precise: ‘And Jacob said to Rebekah his mother, Behold, Esau my brother is an hairy man, and I am a smooth man’.

Negroponte is indeed an exceedingly smooth man. He circles the globe (business class or better, naturally) consulting heads of government and captains of industry. He is always impeccably dressed, a fluent and persuasive presenter, and he invariably leaves his listeners with the impression that not only does he have an ace up his sleeve but that the Almighty put it there.

Until recently, his main claim to fame was that he founded the MIT Media Lab, a legendary institution in which smart kids are paid to explore wacky ideas. His latest Big Idea is a cheap laptop that would be given to poor children in developing countries, thereby ending the digital divide…

Update: If you think I’m unduly sceptical, see here.

And there’s a pretty scathing critique by Lee Felsenstein here.

Some thoughtful comments here.

The Flickr phenomenon

This morning’s Observer column:

Virtually every Tom, Dick and Harry has a digital camera. And if he doesn’t, there’s probably one in his mobile phone. Which raises an interesting question: what are people doing with all these cameras? The answer: snapping everything that moves, and much that doesn’t.

But then what? At this point, options begin to narrow. You can take the storage card into Jessops, push it into a slot and pay to have your photos printed. You can upload them to your computer and view them on screen in tasteful little slideshows, perhaps to the accompaniment of a track from your music library.

You can buy an inkjet printer, pay through the nose for paper and ink cartridges, and print them out. Or you can upload them to a printing service like Ofoto or Fotango, have them deduct money from your credit card and send back nice prints on proper photographic paper.

Alternatively you can put them on Flickr (www.flickr.com). If you don’t know about Flickr, it’s time you did…


The Web: bigger than we know. Bigger than we can know?

From Search Engine Watch

A new survey has made an attempt to measure how much information exists outside of the search engines’ reach. The company behind the survey is also offering up a solution for those who want to tap into this “hidden” material.

The study, conducted by search company BrightPlanet, estimates that the inaccessible part of the web is about 500 times larger than what search engines already provide access to. To put that another way, Google currently claims to have indexed or know about 1 billion web pages, making it the largest crawler-based search engine, based on reported numbers. Using Google as a benchmark, that means BrightPlanet would estimate there are about 500 billion pages of information available on the web, and only 1/500 of that information can be reached via traditional search engines.

Hmmm… That was written in 2000. By the time it stopped bragging about the number of pages it had indexed, Google was claiming over 8 billion. Let me see, that’s 8 billion times 500 – er, 4,000 billion pages. Pardon me while I go and lie down in a darkened room. I wonder if Tim Berners-Lee realised what kind of monster he was unleashing when he dreamed up the Web.
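Just to make the arithmetic explicit – this is my own illustrative snippet, applying the survey’s 500:1 ratio to Google’s later figure, nothing more:

```python
# Illustrative arithmetic only: BrightPlanet's 500:1 ratio applied to Google's later figure.
ratio = 500                      # "hidden" web estimated at 500x the indexed web
indexed_pages = 8_000_000_000    # Google's last publicly claimed index size
print(f"{ratio * indexed_pages:,} pages")  # 4,000,000,000,000 -- i.e. 4,000 billion
```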
