The Google Three

This morning’s Observer column about the Italian convictions handed out to three Googlers.

In the case of the Google Three, however, it’s likely that they will be vindicated. Even if the Italian appeal fails, there is always the possibility of recourse to the European Court in Strasbourg, which may well take the view that European Union law, as currently drafted, gives hosting providers a safe harbour from liability so long as they remove illegal content once they are notified of its existence. The downside, of course, is that Google will have to be much more responsive to complaints, which will make it much easier to have videos taken down, because the prudent course will always be to “take down first and ask questions later”.

The glory days of YouTube may be coming to an end. And Silvio Berlusconi remains at large.

The Google problem

Christopher Caldwell is a terrific columnist. His FT essay this week — a meditation on the implications of the complaints lodged with the European Commission by e-Justice, Foundem and Ciao! — is a model of thoughtful analysis, and a good reminder of why the weekend FT is such a good buy.

There is either a big problem with Google or there is none at all. If you believe that Google is engaged in open competition, many of the complaints against it look like sour grapes. The Initiative for a Competitive Online Marketplace, a Microsoft-funded study group, issued a white paper last summer on “Openness and the Internet” that, in one sense, is little more than a grab-bag of gripes. The paper notes that Google “operates in a manner that shields it from scrutiny by the other actors” (as if other businesses do not), that it is hard to gather data on Google advertising campaigns and to make them interoperable with other search engines (which is an inconvenience for Google’s advertisers but not a duty for Google), and that Google’s pricing policies are opaque (ditto). On the other hand, any of these failings under monopoly conditions would be a serious problem.

Google insists that its searches are neutral both in appearance and fact. Its “natural searches” – the ones that match up searchers with the sites they will most likely want to visit – are done through an “algorithm” that measures hundreds of variables, with no human intervention once the algorithm has been designed. But Google also carries “sponsored links” – advertisements – which appear alongside the natural-search results. Advertisers bid to be listed by Google any time a given word or phrase is searched for. And here the business gets more subjective. Google has an interest in making web-surfing pleasant and convenient. It gives “quality scores” – rankings based on attractiveness, ease of use and percentage of original content – to its bidders. So a website with a low “quality score” must bid more to be included. The nub of Foundem’s complaint is that its quality scores inexplicably fell, driving its cost per hit from 5p to £5. Was this because it is also a Google competitor?

Worth reading in full. I’ve written before that Google is “the next Microsoft” in the sense that it’s going to be the anti-trust problem of the next decade. But the issues that it raises will be much more subtle and complex than anything we had with Bill Gates & Co.
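
To see why a falling quality score has such leverage on price, here is a toy sketch of a quality-weighted second-price auction. To be clear, this is not Google’s actual formula; the scoring scale and every number in it are invented, purely to show how a hundredfold drop in quality score could turn a 5p click into a £5 one.

```python
# Toy quality-weighted second-price auction. This illustrates the mechanism
# Caldwell describes, NOT Google's real AdWords pricing formula.
# Every number below is made up.

def actual_cpc(next_ad_rank: float, your_quality: float) -> float:
    """Pay just enough to outrank the advertiser immediately below you."""
    return next_ad_rank / your_quality

# Ad rank of the competitor below you: their bid times their quality score.
competitor_rank = 0.50 * 1.0   # a 50p bid with a quality score of 1.0

for quality in (10.0, 1.0, 0.1):
    print(f"quality score {quality:>4}: cost per click ≈ "
          f"£{actual_cpc(competitor_rank, quality):.2f}")

# quality 10.0 -> £0.05, quality 1.0 -> £0.50, quality 0.1 -> £5.00.
# A hundredfold fall in quality score is enough to produce the 5p-to-£5 jump
# Foundem complained about, without anyone touching the bids themselves.
```

The point of the sketch is simply that in any auction where the price paid is divided by a quality score, the score is as powerful a lever as the bid itself, which is why the question of who sets the score, and how, matters so much.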

Hummer RIP

Cheer up! Here’s a piece of good news for a change.

General Motors is to wind down production of its gas-guzzling Hummer brand following the collapse of a deal to sell the business to a Chinese manufacturer.

The Detroit-based company, which announced plans to offload Hummer last year as part of its efforts to focus on core brands such as Chevrolet, Buick and Cadillac, said that the proposed buyers, Sichuan Tengzhong Heavy Industrial Machines, had been unable to complete the acquisition.

As a result, GM said it would begin the “orderly wind-down” of the Hummer operations.

John Smith, GM vice president of corporate planning and alliances, said the group had considered a number of possibilities for Hummer and was disappointed that the deal with Tengzhong could not be completed.

Valuing Open Source software

From Slashdot:

“The Linux kernel would cost more than one billion EUR (about 1.4 billion USD) to develop in the European Union. That is the estimate made by researchers at the University of Oviedo (Spain), who calculate that the value added to the kernel annually was about 100 million EUR between 2005 and 2007 and 225 million EUR in 2008. The estimated 2008 figure is equivalent to roughly 4% of Microsoft’s and 12% of Google’s company-wide R&D spending. The ‘Intermediate COCOMO81’ cost model is used, following the parametric estimation approach of David Wheeler, and an average annual base salary of 31,040 EUR for a developer was derived from EUROSTAT data. Similar studies have previously been carried out by several authors for the Red Hat, Debian, and Fedora distributions. The cost estimate is not important in itself, but it is a means to an end: commons-based innovation should receive a level of official recognition that establishes it as an alternative for decision-makers. Ideally, the legal and regulatory framework would allow companies participating in commons-based R&D to book intangible assets for their contributions to successful projects; failing that, such expenses should receive equitable tax treatment, as donations to social welfare do.”

Thanks to Glyn Moody for spotting it.
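
For the curious, here is roughly what the arithmetic behind such estimates looks like. This is a minimal sketch using the basic COCOMO formula that tools like David Wheeler’s SLOCCount apply, rather than the Intermediate COCOMO81 model the Oviedo researchers actually used, and the kernel size and overhead multiplier are illustrative assumptions of mine, not the study’s inputs.

```python
# Back-of-the-envelope COCOMO arithmetic in the spirit of SLOCCount.
# The kernel size and overhead multiplier are illustrative assumptions;
# only the 31,040 EUR salary comes from the study quoted above.

SLOC = 12_000_000           # assumed kernel size circa 2008 (illustrative)
ANNUAL_SALARY_EUR = 31_040  # average developer base salary cited in the study
OVERHEAD = 2.4              # benefits, equipment, management, etc. (assumed)

# Basic COCOMO: effort in person-months = a * (KLOC ** b)
MODES = {
    "organic": (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded": (3.6, 1.20),
}

kloc = SLOC / 1000.0
for mode, (a, b) in MODES.items():
    effort_pm = a * kloc ** b                               # person-months
    cost_eur = (effort_pm / 12) * ANNUAL_SALARY_EUR * OVERHEAD
    print(f"{mode:>13}: {effort_pm:>9,.0f} person-months ≈ "
          f"{cost_eur / 1e9:.2f} billion EUR")
```

Depending on which mode and effort multipliers you pick, the total swings from a few hundred million euros to well over a billion, which is a reminder that the headline figure is an order-of-magnitude claim rather than a precise costing.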

DeadHead memories

My Observer column on Sunday about the perceptiveness of the Grateful Dead has triggered fond memories in some readers — and stimulated some lovely emails, including this one from a colleague:

In 1972 I was one of the organisers of a big music festival in a place called Bickershaw near Wigan. The Dead were top of the bill and during contract negotiations with them, we were amazed that we had to provide a central area to accommodate anyone who wanted to record their gig. They had realised as early as 1972 that they could give away poor quality recordings, knowing that many would then go out and buy the real thing. I believe they were the largest earners amongst R&R bands for many years. I hung out with Jerry Garcia for a bit and he was very stoned but also very smart.

An interesting footnote – the main festival organiser was one Jeremy Beadle. He wasn’t famous yet but had already started to assume his annoying persona. I think he was the only person at the festival who wasn’t stoned, but he was also very smart and went on to make lots of money.

There’s a web site for the aforementioned festival too. Gosh! Those were the days.

Panton Principles launched!

The principles are:

  • Where data or collections of data are published it is critical that they be published with a clear and explicit statement of the wishes and expectations of the publishers with respect to re-use and re-purposing of individual data elements, the whole data collection, and subsets of the collection. This statement should be precise, irrevocable, and based on an appropriate and recognized legal statement in the form of a waiver or license. When publishing data make an explicit and robust statement of your wishes.
  • Many widely recognized licenses are not intended for, and are not appropriate for, data or collections of data. A variety of waivers and licenses that are designed for and appropriate for the treatment of data are described here. Creative Commons licenses (apart from CCZero), GFDL, GPL, BSD, etc are NOT appropriate for data and their use is STRONGLY discouraged. Use a recognized waiver or license that is appropriate for data.
  • The use of licenses which limit commercial re-use or limit the production of derivative works by excluding use for particular purposes or by specific persons or organizations is STRONGLY discouraged. These licenses make it impossible to effectively integrate and re-purpose datasets and prevent commercial activities that could be used to support data preservation. If you want your data to be effectively used and added to by others it should be open as defined by the Open Knowledge/Data Definition – in particular non-commercial and other restrictive clauses should not be used.
  • Furthermore, in science it is STRONGLY recommended that data, especially where publicly funded, be explicitly placed in the public domain via the use of the Public Domain Dedication and Licence or Creative Commons Zero Waiver. This is in keeping with the public funding of much scientific research and the general ethos of sharing and re-use within the scientific community.

From the Panton Principles.

So is the H.264 problem going to be solved?

Interesting report in The Register about Google’s acquisition of On2, the company that developed the VP3 codec which is the basis for Ogg Theora.

The question is still whether Google will turn around and open source On2’s video codecs. In announcing the original pact, Mountain View made a point of saying that it believes “high-quality video compression technology should be a part of the web platform” — and that On2 is a means of reaching that end.

The major web browser makers – including Google, Apple, Mozilla, Opera, and Microsoft – have failed to agree on a single common codec for the new HTML5 video tag. The HTML5 spec allows for any codec, and while some have opted for the open and license-free Ogg Theora, others are sticking to the license-encumbered H.264 for reasons of performance, hardware support, and alleged patent anxiety.

If you’re new to this, Charles Arthur wrote a helpful piece about it, following on from a perceptive piece by Jack Schofield.