The ad-blocking paradox

This morning’s Observer column:

Mail Online is one of the world’s most popular news websites and it’s free: no paywall. But my browser has a plug-in program called Ghostery, which will scan any web page you visit and tell you how many “third-party trackers” it has found on it. These are small pieces of code that advertisers and ad-brokers place on pages or in cookies in order to monitor what you’re doing on the web and where you’ve been before hitting the current page.

When I looked at the Mail Online report, Ghostery found 31 such trackers. Some of them came from familiar names (Google, Amazon, Facebook, Pinterest, Doubleclick). But others were placed by outfits I have never heard of, for example, Bidswitch, Brightcove, Crimtan, Sonobi, Taboola. These are companies that act as high-speed intermediaries between your browser and firms wanting to place ads on the web page you’re viewing. And theirs is the industry that pays the bills (and sometimes makes a profit) for the publisher whose “free” content you are perusing.

But we humans are cussed creatures. It turns out that we loathe and detest online ads and will do almost anything to avoid them…

Read on
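As a rough illustration of the kind of thing Ghostery is reporting on, the sketch below fetches a page and lists the third-party domains whose scripts it pulls in. It is emphatically not how Ghostery actually works: it only looks at script tags, so it misses tracking pixels, iframes and cookies; it will count a site’s own CDN subdomains as “third parties”; and some sites refuse requests from a bare Python client. The Mail Online URL is just an example.

    # List the external domains whose scripts a page loads (a crude proxy
    # for "third-party trackers"; real tracker detection is far subtler).
    from html.parser import HTMLParser
    from urllib.parse import urlparse
    from urllib.request import urlopen

    class ScriptCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.sources = []

        def handle_starttag(self, tag, attrs):
            if tag == "script":
                src = dict(attrs).get("src")
                if src:
                    self.sources.append(src)

    def third_party_script_domains(url):
        page_host = urlparse(url).hostname
        collector = ScriptCollector()
        collector.feed(urlopen(url).read().decode("utf-8", errors="replace"))
        hosts = {urlparse(src, scheme="https").hostname for src in collector.sources}
        return sorted(h for h in hosts if h and h != page_host)

    for domain in third_party_script_domains("https://www.dailymail.co.uk/"):
        print(domain)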

Fifty years on

Fifty years ago this month, many of us wondered if we were on the brink of nuclear Armageddon as the Kennedy Administration confronted the Soviet Union over the latter’s stationing of nuclear missiles in Cuba. The way JFK and his colleagues handled the crisis is probably the most intensively studied case in the history of crisis management (see, for example, The Kennedy Tapes: Inside the White House During the Cuban Missile Crisis), but it remains fascinating.

To mark the anniversary, the JFK Presidential Library has put together a remarkable web production which not only contains an excellent narrative of the evolution and resolution of the crisis, but also a riveting portfolio of documents, photographs, movies and audio recordings of the secret deliberations of Kennedy and his advisers. It takes time to absorb, but it’s worth it. And it’s a brilliant illustration of what the Web can do if used imaginatively.

Clang!

I’m not a games geek (nor a swordsman either), but I might just pledge some money for this, partly because I admire Neal Stephenson (remember his wonderful essay, “In the Beginning Was the Command Line”?) and partly because this is a clever, beautifully-executed video too. Thanks to Neil Davidson for the link.

Kickstarter page here.

Web design and page obesity

My Observer column last Sunday (headlined “Graphics Designers are Ruining the Web”) caused a modest but predictable stir in the design community. The .Net site published an admirably balanced round-up of comments from designers pointing out where, in their opinion, I had got things wrong. One (Daniel Howells) said that I clearly “had no exposure to the many wonderful sites that leverage super-nimble, lean code that employ almost zero images” and that I was “missing the link between minimalism and beautifully designed interfaces.” Designer and writer Daniel Gray thought that my argument was undermined “by taking a shotgun approach to the web and then highlighting a single favoured alternative, as if the ‘underdesigned’ approach of Peter Norvig is relevant to any of the other sites he discusses”.

There were several more comments in that vein, all reasonable and reasoned — a nice demonstration of online discussion at its best. Reflecting on them brought up several thoughts:

  • The Columnist’s Dilemma: writing a column about technology for a mainstream newspaper means that one is always trying to balance the temptation to go into technical detail against the risk of losing the non-technical reader. Sometimes I get the balance wrong. In this particular case I thought that getting into the pros and cons of, say, using Javascript to enhance usability would obscure the main point I was trying to make: that there is an epidemic of obesity in web pages, and that this has some downsides.
  • Then there’s the question of what columnists are for. I remember something that Alan Rusbridger, the Editor of the Guardian, said when asked why he employed columnists like Simon Jenkins who annoyed the (mainly left-of-centre) readers of the paper. The essence of Rusbridger’s response, as I remember it, was that he needed to avoid creating an echo chamber: a publication in which readers only received views with which they agreed. Grit in the oyster, if you like. So perhaps one of the responsibilities of a columnist is to be provocative.
  • One thing I wish I had mentioned is that it isn’t just designers who are responsible for data-intensive web pages: the slot-in advertising is often the culprit, so much of the responsibility for obesity lies with e-commerce. In that respect the column connects to an earlier one, picking up Evgeny Morozov’s point about the way in which the Web has moved from being a cabinet of curiosities to an endless shopping mall.
  • The most common response to the column, though, was a casual shrug. So what if web pages are getting bigger and bigger? Network bandwidth will increase to meet the demand, and this may be a good thing: look at the way the demands of desktop publishing and, later, image and video editing pushed the development of personal computing technology. And of course there is something in that argument: without the constant pressure to push the envelope, technology stagnates. The problem with it, however, is that for many Internet users bandwidth is far from infinite. I don’t know what proportion of UK users in rural areas, for example, have a landline broadband connection that generally exceeds 2Mbps, but it sure as hell isn’t 100 per cent (see the back-of-the-envelope calculation after this list). And as more and more people access the Net via mobile connections, bandwidth constraints really matter, and will continue to do so for the foreseeable future.
  • Thanks to Seb Schmoller for the .Net link.
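Since the bandwidth point above is easy to quantify, here is a back-of-the-envelope calculation. The figures are illustrative: real links rarely sustain their nominal rate and browsers fetch resources in parallel, so treat these as rough orders of magnitude rather than measured load times.

    # How long does a page of a given weight take to arrive over a given link?
    def transfer_seconds(page_kilobytes, link_megabits_per_second):
        bits = page_kilobytes * 1024 * 8
        return bits / (link_megabits_per_second * 1_000_000)

    # 2003 average, 2011 average, and (an assumed figure) a heavier modern page
    for size_kb in (93.7, 679, 1500):
        # rural landline, urban ADSL, weak mobile signal (illustrative speeds)
        for speed_mbps in (2, 8, 0.5):
            print(f"{size_kb:7.1f} kB over {speed_mbps:4.1f} Mbps: "
                  f"{transfer_seconds(size_kb, speed_mbps):5.1f} s")

On a 2Mbps rural line, the 2011 average of 679kB takes roughly 2.8 seconds to transfer before any rendering happens; on a poor mobile connection it is more than ten seconds.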

    From web pages to bloatware

    This morning’s Observer column.

    In the beginning, webpages were simple pages of text marked up with some tags that would enable a browser to display them correctly. But that meant that the browser, not the designer, controlled how a page would look to the user, and there’s nothing that infuriates designers more than having someone (or something) determine the appearance of their work. So they embarked on a long, vigorous and ultimately successful campaign to exert the same kind of detailed control over the appearance of webpages as they did on their print counterparts – right down to the last pixel.

    This had several consequences. Webpages began to look more attractive and, in some cases, became more user-friendly. They had pictures, video components, animations and colourful type in attractive fonts, and were easier on the eye than the staid, unimaginative pages of the early web. They began to resemble, in fact, pages in print magazines. And in order to make this possible, webpages ceased to be static text-objects fetched from a file store; instead, the server assembled each page on the fly, collecting its various graphic and other components from their various locations, and dispatching the whole caboodle in a stream to your browser, which then assembled them for your delectation.

    All of which was nice and dandy. But there was a downside: webpages began to put on weight. Over the last decade, the size of web pages (measured in kilobytes) has more than septupled. From 2003 to 2011, the average web page grew from 93.7kB to over 679kB.

    Quite a few good comments disagreeing with me. In the piece I mention how much I like Peter Norvig’s home page. Other favourite pages of mine include Aaron Sloman’s, Ross Anderson’s and Jon Crowcroft’s. In each case, what I like is the high signal-to-noise ratio.
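    The page-weight figures quoted above are easy to approximate for any page you care about. Here is a minimal sketch, with the caveat that it counts only the HTML plus the images and scripts referenced by src attributes, so it understates the true total (stylesheets, fonts and ad-injected content are ignored); Norvig’s page is used simply because it is mentioned above.

        # Roughly estimate a page's weight: HTML bytes plus referenced images/scripts.
        from html.parser import HTMLParser
        from urllib.parse import urljoin
        from urllib.request import urlopen

        class ResourceCollector(HTMLParser):
            def __init__(self):
                super().__init__()
                self.resources = []

            def handle_starttag(self, tag, attrs):
                if tag in ("img", "script"):
                    src = dict(attrs).get("src")
                    if src:
                        self.resources.append(src)

        def page_weight_kb(url):
            html = urlopen(url).read()
            total = len(html)
            collector = ResourceCollector()
            collector.feed(html.decode("utf-8", errors="replace"))
            for src in collector.resources:
                try:
                    total += len(urlopen(urljoin(url, src)).read())
                except OSError:
                    pass  # skip resources that fail to load
            return total / 1024

        print(round(page_weight_kb("http://norvig.com/"), 1), "kB")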

    The School of Data

    Here’s a fantastic initiative by the Open Knowledge Foundation. (Disclosure: I’m on the OKF’s Advisory Board.) What lies behind it is an awareness that there is a huge and growing skills gap in data analysis, visualisation and related techniques.

    To address this growing demand, the Open Knowledge Foundation and P2PU are collaborating to create the School of Data.

    The School of Data will adopt the successful peer-to-peer learning model established by P2PU and Mozilla in their ‘School of Webcraft’ partnership. Learners will progress by taking part in ‘learning challenges’ – series of structured, achievable tasks, designed to promote collaborative and project-based learning.

    As learners gain skills, their achievements will be rewarded through assessments which lead to badges. Community support and on-demand mentoring will also be available for those who need it.

    So What Next?

    In order to get the School of Data up and running, the next challenges are:

    To create a series of learning challenges for a Data Wrangling 101 course. Developing data wranglers will learn to find, retrieve, clean, manipulate, analyze, and represent different types of data.

    To recruit community leaders to act as ‘mentors’, providing community support and on-demand mentoring for those who need it.

    To curate, update and extend the existing manuals and reference materials, e.g. the Open Data Handbook and the Data Patterns Handbook.

    To design and implement assessments which evaluate achievements. Badges can then be issued which recognize the relevant skills and competencies.

    To openly license all education content (challenges, manuals, references and materials) so that anyone can use, modify and re-use it, including instructors and learners in formal education.

    Get the word out! Promote Data Wrangling 101 to potential participants.

    Get Involved!

    At this stage, the OKF is seeking volunteers to help develop the project. Whether you would like to design educational materials, construct learning challenges, donate money or mentor on the course, we’d love to hear from you! Equally, if you are part of an organisation which would like to join with the Open Knowledge Foundation and P2PU to collaborate on the School of Data, please do get in touch by registering on the form at the end of the link.
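    Just to make the “Data Wrangling 101” idea concrete, here is a toy example of the find-retrieve-clean-analyse cycle the course description mentions. The file name and column are made up for illustration; nothing here comes from the OKF’s materials.

        # Clean a (hypothetical) spending CSV and compute an average.
        import csv

        def mean_spend(path):
            values = []
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    raw = row.get("spend_gbp", "").strip()       # hypothetical column
                    if raw in ("", "N/A", "-"):                   # drop blank/placeholder cells
                        continue
                    values.append(float(raw.replace(",", "")))    # "1,234.50" -> 1234.50
            return sum(values) / len(values) if values else None

        print(mean_spend("council_spending.csv"))                 # hypothetical file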

    The ideas man

    I’ve long been an addict of Edge.org, the website/salon founded by John Brockman. I finally got to interview him for the Observer.

    To say that John Brockman is a literary agent is like saying that David Hockney is a photographer. For while it’s true that Hockney has indeed made astonishingly creative use of photography, and Brockman is indeed a successful literary agent who represents an enviable stable of high-profile scientists and communicators, in both cases the description rather understates the reality. More accurate ways of describing Brockman would be to say that he is a “cultural impresario” or, as his friend Stewart Brand puts it, an “intellectual enzyme”. Brand goes on helpfully to explain that an enzyme is “a biological catalyst – an adroit enabler of otherwise impossible things”.

    The first thing you notice about Brockman, though, is the interesting way he bridges CP Snow’s “Two Cultures” – the parallel universes of the arts and the sciences. When profilers ask him for pictures, one he often sends shows him with Andy Warhol and Bob Dylan, no less. Or shots of the billboard photographs of his head that were used to publicise an eminently forgettable 1968 movie. But he’s also one of the few people around who can phone Nobel laureates in science with a good chance that they will take the call.

    The cookie monster cometh

    This morning’s Observer column.

    Needless to say, this intrusion of EU red tape into Britons’ ancient right to do as they damn well please generated much heated commentary. The jackbooted thugs of Brussels were, we were told, going to “kill the internet”. But the law is the law and, alarmed by the lack of preparedness of British industry, the government negotiated a year-long “lead-in period” to give businesses time to adapt to the new reality.

    We’re now midway through that period, and the information commissioner – the guy who will have to enforce the new rules – has just issued a half-term report on how things are going. His verdict, he writes, “can be summed up by the schoolteacher’s favourite clichés: ‘could do better’ and ‘must try harder’.”

    Why the Web might be a transient

    As I observed the other day, one of the things that drove me to write From Gutenberg to Zuckerberg was exasperation at the number of people who think that the Web is the Internet. In lecturing about this I developed a provocative trope in which I said that, although the Web is huge, in 50 years’ time we may see it as just a blip in the evolution of the Net. This generally produced an incredulous reaction.

    So it’s interesting to see Joe Hewitt arguing along parallel lines. Unlike me, he suggests a process by which the Web might be sidelined. “The arrogance of Web evangelists is staggering”, he writes.

    They take for granted that the Web will always be popular regardless of whether it is technologically competitive with other platforms. They place ideology above relevance. Haven’t they noticed that the world of software is ablaze with new ideas and a growing number of those ideas are flat out impossible to build on the Web? I can easily see a world in which Web usage falls to insignificant levels compared to Android, iOS, and Windows, and becomes a footnote in history. That thing we used to use in the early days of the Internet.

    My prediction is that, unless the leadership vacuum is filled, the Web is going to retreat back to its origins as a network of hyperlinked documents. The Web will be just another app that you use when you want to find some information, like Wikipedia, but it will no longer be your primary window. The Web will no longer be the place for social networks, games, forums, photo sharing, music players, video players, word processors, calendaring, or anything interactive. Newspapers and blogs will be replaced by Facebook and Twitter and you will access them only through native apps. HTTP will live on as the data backbone used by native applications, but it will no longer serve those applications through HTML. Freedom of information may be restricted to whatever our information overlords see fit to feature on their App Market Stores.

    I hope he’s wrong, and given that he’s a serious and successful apps developer he does have an axe to grind. But his post makes one think…
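    Hewitt’s remark that “HTTP will live on as the data backbone used by native applications, but it will no longer serve those applications through HTML” is easy to picture in code. A native app typically asks an API for structured JSON and renders it itself; the endpoint and field names below are hypothetical.

        # A client consuming HTTP as a data backbone: JSON in, native rendering out.
        import json
        from urllib.request import urlopen

        def latest_headlines(feed_url):
            with urlopen(feed_url) as response:
                payload = json.load(response)      # structured data, no markup
            return [item["title"] for item in payload.get("articles", [])]

        for title in latest_headlines("https://api.example.com/v1/articles"):
            print(title)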

    WikiLeaks: five expert opinions

    The New York Times has a thoughtful set of contributions from various experts on the significance of the WikiLeaks disclosures.

    Evgeny Morozov, a Stanford scholar who has a book about the “dark side of Internet freedom” coming out in January, ponders the likelihood that WikiLeaks can be duplicated, and finds it unlikely.

    A thousand other Web sites dedicated to leaking are unlikely to have the same effect as WikiLeaks: it would take a lot of time and effort to cultivate similar relationships with the media. Most other documents leaked to WikiLeaks do not carry the same explosive potential as candid cables written by American diplomats.

    One possible future for WikiLeaks is to morph into a gigantic media intermediary — perhaps, even something of a clearing house for investigative reporting — where even low-level leaks would be matched with the appropriate journalists to pursue and report on them and, perhaps, even with appropriate N.G.O.’s to advocate on their causes. Under this model, WikiLeaks staffers would act as idea salesmen relying on one very impressive digital Rolodex.

    Ron Deibert from the University of Toronto thinks that the “venomous furor” surrounding WikiLeaks, including charges of “terrorism” and calls for the assassination of Julian Assange, has to rank as “one of the biggest temper tantrums in recent years”.

    Many lament the loss of individual privacy as we leave digital traces that are then harvested and collated by large organizations with ever-increasing precision. But if individuals are subject to this new ecosystem, what would make anyone think governments or organizations are immune? Blaming WikiLeaks for this state of affairs is like blaming a tremor for tectonic plate shifts.

    Certainly a portion of that anger could be mitigated by the conduct of WikiLeaks itself. The cult of personality around Assange, his photoshopped image now pasted across the WikiLeaks Web site, only plays into this animosity. So do vigilante cyberattacks carried out by supporters of WikiLeaks that contribute to a climate of lawlessness and vengeance seeking. If everyone can blast Web sites and services with which they disagree into oblivion — be it WikiLeaks or MasterCard — a total information war will ensue to the detriment of the public sphere.

    An organization like WikiLeaks should professionalize and depersonalize itself as much as possible. It should hold itself to the highest possible ethical standards. It should act with the utmost discretion in releasing into the public domain otherwise classified information that comes its way only on the basis of an obvious transgression of law or morality. This has not happened.

    Ross Anderson, who is Professor of Security Engineering at Cambridge and the author of the standard textbook on building dependable distributed information systems, thinks that the WikiLeaks saga shows how governments never take an architectural view of security.

    Your medical records should be kept in the hospital where you get treated; your bank statements should only be available in the branch you use; and while an intelligence analyst dealing with Iraq might have access to cables on Iraq, Iran and Saudi Arabia, he should have no routine access to information on Korea or Zimbabwe or Brazil. But this is in conflict with managers’ drive for ever broader control and for economies of scale.

    The U.S. government has been unable to manage this trade-off, leading to regular upsets and reversals of policy. Twenty years ago, Aldrich Ames betrayed all the C.I.A.’s Russian agents; intelligence data were then carefully compartmentalized for a while. Then after 9/11, when it turned out that several of the hijackers were already known to parts of the intelligence community, data sharing was commanded. Security engineers old enough to remember Ames expected trouble, and we got it.

    What’s next? Will risk aversion drive another wild swing of the pendulum, or might we get some clearer thinking about the nature and limits of power?
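    Anderson’s architectural point is essentially about compartmentation: access should follow explicit need-to-know, compartment by compartment, rather than flow from a single broad clearance. A toy sketch of the difference (users and compartments invented for illustration):

        # Per-compartment access control: no blanket clearance shortcut.
        COMPARTMENTS = {
            "analyst_iraq_desk": {"IRAQ", "IRAN", "SAUDI_ARABIA"},
            "analyst_korea_desk": {"NORTH_KOREA", "SOUTH_KOREA"},
        }

        def can_read(user, cable_compartment):
            # The cable's compartment must be on the user's need-to-know list.
            return cable_compartment in COMPARTMENTS.get(user, set())

        assert can_read("analyst_iraq_desk", "IRAN")
        assert not can_read("analyst_iraq_desk", "NORTH_KOREA")

    The post-9/11 swing Anderson describes amounts to widening those need-to-know sets until almost any analyst can read almost any cable.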

    James Bamford, a writer and documentary producer specializing in intelligence and national security issues, thinks that the WikiLeaks disclosures are useful in forcing governments to confess.

    A generation ago, government employees with Communist sympathies worried security officials. Today, after years of torture reports, black sites, Abu Ghraib, and a war founded on deception, it is the possibility that more employees might act out from a sense of moral outrage that concerns officials.

    There may be more employees out there willing to leak, they fear, and how do you weed them out? Spies at least had the courtesy to keep the secrets to themselves, rather than distribute them to the world’s media giants. In a sense, WikiLeaks is forcing the U.S. government into the confessional, with the door wide open. And confession, though difficult and embarrassing, can sometimes cleanse the soul.

    Fred Alford is Professor of Government at the University of Maryland and thinks that neither the Web operation WikiLeaks, nor its editor-in-chief, Julian Assange, is a whistle-blower.

    Whistle-blowers are people who observe what they believe to be unethical or illegal conduct in the places where they work and report it to the media. In so doing, they put their jobs at risk.

    The whistle-blower in this case is Bradley Manning, a United States Army intelligence analyst who downloaded a huge amount of classified government information, which was made public by WikiLeaks. Whether or not Manning’s act serves the greater public interest is a contentious issue, but he has been arrested and charged with unlawful disclosure of classified data.

    Some have compared the role of WikiLeaks to that of The New York Times in the publication of the Pentagon Papers several decades ago. WikiLeaks is the publishing platform that leverages the vast and instantaneous distribution capacity of the Internet.

    The WikiLeaks data dump challenges a long held belief by many of us who study whistle-blowing — that it is important that the whistle-blower have a name and face so that the disclosures are not considered just anonymous griping, or possibly unethical activity. The public needs to see the human face of someone who stands up and does the right thing when none of his or her colleagues dare.

    But he also thinks that “for better and worse, this changes whistle-blowing as we’ve known it.”