Archive for the 'Google' Category

Making algorithms responsible for what they do

[link] Sunday, June 29th, 2014

This morning’s Observer column:

Just over a year ago, after Edward Snowden’s revelations first hit the headlines, I participated in a debate at the Frontline Club with Sir Malcolm Rifkind, the former foreign secretary who is now MP for Kensington and Chelsea and chairman of the intelligence and security committee. Rifkind is a Scottish lawyer straight out of central casting: urbane, witty, courteous and very smart. He’s good on his feet and a master of repartee. He’s the kind of guy you would be happy to have to dinner. His only drawback is that everything he knows about information technology could be written on the back of a postage stamp in 96-point Helvetica bold…

Read on

A borderless world?

[link] Tuesday, June 24th, 2014

One of my mantras is that for the first 20 years of its existence (up to 1993) cyberspace was effectively a parallel universe to what John Perry Barlow called ‘Meatspace’ (aka the real world). The two universes had very little to do with one another, and were radically different in all kinds of ways. But from 1993 (when Andreessen and Bina released Mosaic, the first big web browser) onwards, the two universes began to merge, which led to the world we have now — a blended universe which has the affordances of both cyberspace and Meatspace. This is why it no longer makes sense to distinguish (as politicians still do sometimes) between the Internet and the “real world”. And it’s also why we are having so much trouble dealing with a universe in which the perils of normal life are turbocharged by the affordances of digital technology.

This morning, I came across a really interesting illustration of this. It’s about how Google Maps deals with areas of the world where there are border disputes. Turns out that there are 32 countries in the world for which Google regards the border issue as problematic. And it has adopted a typical Google approach to the problem: the borders drawn on Google’s base map of a contested area will look different depending on where in the world you happen to be viewing them from.

An example: the borders of Arunachal Pradesh, an area administered by India but claimed as a part of Tibet by China. The region is shown as part of India when viewed from an Indian IP address, as part of China when viewed from China, and as distinct from both countries when viewed from the US.
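The mechanism is easy to sketch. Purely as an illustration (this is not Google's actual code or data format; the region key and file names are invented), location-dependent borders amount to a lookup keyed on the country inferred from the viewer's IP address:

```python
# Illustrative sketch only, not Google's implementation. The idea: for a
# disputed region, the map server picks a border geometry variant based
# on the country inferred from the viewer's IP address.

DISPUTED_BORDERS = {
    "arunachal_pradesh": {
        "IN": "border_india.geojson",          # shown as Indian territory in India
        "CN": "border_china.geojson",          # shown as Chinese territory in China
        "default": "border_disputed.geojson",  # neutral "disputed" rendering elsewhere
    },
}

def border_variant(region: str, viewer_country: str) -> str:
    """Return the border geometry to render for this viewer."""
    variants = DISPUTED_BORDERS[region]
    return variants.get(viewer_country, variants["default"])
```

Viewed from an Indian IP address the lookup yields the Indian version; from the US it falls through to the neutral "disputed" rendering, which matches the behaviour described above.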

There’s a nice animation in the piece. Worth checking out.

Yay! Gmail to get end-to-end encryption

[link] Wednesday, June 4th, 2014

This has been a long time coming — properly encrypted Gmail — but it’s very welcome. Here’s the relevant extract from the Google security blog:

Today, we’re adding to that list the alpha version of a new tool. It’s called End-to-End and it’s a Chrome extension intended for users who need additional security beyond what we already provide.

“End-to-end” encryption means data leaving your browser will be encrypted until the message’s intended recipient decrypts it, and that similarly encrypted messages sent to you will remain that way until you decrypt them in your browser.

While end-to-end encryption tools like PGP and GnuPG have been around for a long time, they require a great deal of technical know-how and manual effort to use. To help make this kind of encryption a bit easier, we’re releasing code for a new Chrome extension that uses OpenPGP, an open standard supported by many existing encryption tools.

However, you won’t find the End-to-End extension in the Chrome Web Store quite yet; we’re just sharing the code today so that the community can test and evaluate it, helping us make sure that it’s as secure as it needs to be before people start relying on it. (And we mean it: our Vulnerability Reward Program offers financial awards for finding security bugs in Google code, including End-to-End.)

Once we feel that the extension is ready for primetime, we’ll make it available in the Chrome Web Store, and anyone will be able to use it to send and receive end-to-end encrypted emails through their existing web-based email provider.

We recognize that this sort of encryption will probably only be used for very sensitive messages or by those who need added protection. But we hope that the End-to-End extension will make it quicker and easier for people to get that extra layer of security should they need it.
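The property Google is describing, that a message is ciphertext from the moment it leaves the sender's browser until the recipient decrypts it, can be illustrated with a toy symmetric stream cipher using only the Python standard library. To be clear: this is emphatically not OpenPGP (which encrypts to the recipient's public key, so no key need be shared in advance) and should never be used for real security; it only demonstrates the "opaque in transit" idea.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key || nonce || counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> tuple:
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    # XOR with the same keystream undoes the encryption.
    return bytes(c ^ k for c, k in zip(ciphertext, _keystream(key, nonce, len(ciphertext))))

# End-to-end: the mail server only ever handles (nonce, ciphertext).
key = secrets.token_bytes(32)   # shared out of band in this toy model
nonce, ct = encrypt(key, b"meet me at noon")
plaintext_again = decrypt(key, nonce, ct)
```

The point of the round trip is that nothing between the two endpoints needs, or gets, the key: the provider relays opaque bytes, which is exactly what the extract above promises.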

Cars as services, not possessions?

[link] Sunday, June 1st, 2014

This morning’s Observer column.

We now know that the implications of the driverless cars’ safety record were not lost on Google either. Last week the company rolled out its latest variation on the autonomous vehicle theme. This is a two-seater, pod-like vehicle which scoots around on small wheels. It looks, in fact, like something out of the Enid Blyton Noddy stories. The promotional video shows a cheery group of baby-boomers summoning these mobile pods using smartphones. The pods whizz up obligingly and stop politely, waiting to be boarded. The folks get in, fasten their seatbelts and look around for steering wheel, gear shift, brake pedals etc.

And then we come to the punchline: none of these things exist on the pod! Instead there are two buttons, one marked “Start” and the other marked “Stop”. There is also a horizontal computer screen which doubtless enables these brave new motorists to conduct Google searches while on the move. The implications are starkly clear: Google has decided that the safest thing to do is to eliminate the human driver altogether.

At this point it would be only, er, human to bristle at the temerity of these geeks. Who do they think they are?

Read on

Google privacy ruling: the thin end of a censorship wedge?

[link] Sunday, May 18th, 2014

This morning’s Observer column.

Sooner or later, every argument about regulation of the internet comes down to the same question: is this the thin end of the wedge or not? We saw a dramatic illustration last week when the European court of justice handed down a judgment on a case involving a Spanish lawyer, one Mario Costeja González, who objected that entering his name in Google’s search engine brought up embarrassing information about his past (that one of his properties had been the subject of a repossession)…

Read on

LATER

Three interesting — and usefully diverse — angles on the ECJ decision.

  • Daithi Mac Sitigh points out that the decision highlights the tensions between EU and US law. “This is particularly significant”, he says, “given that most of the major global players in social networking and e-commerce operate out of the US but also do a huge amount of business in Europe.”

Google’s first line of defence was that its activities were not subject to the Data Protection Directive. It argued that its search engine was not a business carried out within the European Union. Google Spain was clearly subject to EU law, but Google argued that it sells advertising rather than running a search engine.

The court was asked to consider whether Google might be subject to the Directive under various circumstances. A possible link was the use of equipment in the EU, through gathering information from EU-based web servers or using relevant domain names (such as google.es). Another suggestion was that a case should be brought at its “centre of gravity”, taking into account where the people making the requests to delete data have their interests.

But the court never reached these points. Instead, it found the overseas-based search engine and the Spain-based seller of advertising were “inextricably linked”. As such, Google was found to be established in Spain and subject to the directive.

The message being sent was an important one. Although this ruling is specific to the field of data protection, it suggests that if you want to do business in the EU, a corporate structure that purports to shield your activities from EU law will not necessarily protect you from having to comply with local legislation. This may explain the panicked tone of some of the reaction to the decision.

  • In an extraordinary piece, “Right to Forget a Genocide”, Zeynep Tufekci muses about how (Belgian) colonial imposition of ID cards on Rwandan citizens was instrumental in facilitating genocide.

It may seem like an extreme jump, from drunken adolescent photos to genocide and ethnic cleansing, but the shape, and filters, of a society’s memory is always more than just about individual embarrassment or advancement. What we know about people, and how easily we can identify or classify them, is consequential far beyond jobs and dates, and in some contexts may make the difference between life and death.

“Practical obscurity”—the legal term for information that was available, but not easily—has died in most rich countries within just about a decade. Court records and criminal histories, which were only accessible to the highly-motivated, are now there at the click of a mouse. Further, what is “less obscure” has greatly expanded: using our online data, algorithms can identify information about a person, such as sexual orientation and political affiliation, even if that person never disclosed them.

In that context, take Rwanda, a country many think about in conjunction with the horrific genocide 20 years ago during which more than 800,000 people were killed—in just about one hundred days. Often, stories of ethnic cleansing and genocide get told in a context of “ancient hatreds,” but the truth of it is often much uglier, and much less ancient. It was the brutal colonizer of Rwanda, Belgium, that imposed strict ethnicity-based divisions in a place where identity tended to be more fluid and mixed. Worse, it imposed a national ID system that identified each person as belonging to Hutu, Tutsi or Twa and forever freezing them in that place. [For a detailed history of the construction of identity in Rwanda read this book, and for the conduct of colonial Belgium, Rwanda’s colonizer, read this one.]

A few years before the genocide, some NGOs had urged that Rwanda “forget” ethnicity, erasing it from ID cards.

They were not listened to.

During the genocide, it was those ID cards that were asked for at each checkpoint, and it was those ID cards that identified the Tutsis, most of whom were slaughtered on the spot. The ID cards closed off any avenue of “passing” a checkpoint. Ethnicity, a concept that did not at all fit neatly into the region’s complex identity configuration, became the deadly division that underlined one of the 20th century’s worst moments. The ID cards doomed and fueled the combustion of mass murder.

  • Finally, there’s a piece in Wired by Julia Powles arguing that “The immediate reaction to the decision has been, on the whole, negative. At best, it is reckoned to be hopelessly unworkable. At worst, critics pan it as censorship. While there is much to deplore, I would argue that there are some important things we can gain from this decision before casting it roughly aside.”

What this case should ideally provoke is an unflinching reflection on our contemporary digital reality of walled gardens, commercial truth engines, and silent stewards of censorship. The CJEU is painfully aware of the impact of search engines (and ‘The’ search engine, in particular). But we as a society should think about the hard sociopolitical problems that they pose. Search engines are catalogues, or maps, of human knowledge, sentiments, joys, sorrows, and venom. Silently, with economic drivers and unofficial sanction, they shape our lives and our interactions.

The fact of the matter here is that if there is anyone that is up to the challenge of respecting this ruling creatively, Google is. But if early indications are anything to go by, there’s a danger that we’ll unwittingly save Google from having to do so, either through rejecting the decision in practical or legal terms; through allowing Google to retreat “within the framework of their responsibilities, powers and capabilities” (which could have other unwanted effects and unchecked power, by contrast with transparent legal mechanisms); or through working the “right to be forgotten” out of law through the revised Data Protection Regulation, all under the appealing but ultimately misguided banner of preventing censorship.

There is, Powles argues, a possible technical fix for this — implementation of a ‘right to reply’ in search engine results.

An all-round better solution than “forgetting”, “erasure”, or “take-down”, with all of the attendant issues with free speech and the rights of other internet users, is a “right to reply” within the notion of “rectification”. This would be a tech-enabled solution: a capacity to associate metadata, perhaps in the form of another link, to any data that is inaccurate, out of date, or incomplete, so that the individual concerned can tell the “other side” of the story.

We have the technology to implement such solutions right now. In fact, we’ve done a mock-up envisaging how such an approach could be implemented.

Search results could be tagged to indicate that a reply has been lodged, much as we see with sponsored content on social media platforms. Something like this, for example:

[Mock-up image: a search result tagged “Forgotten”]
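Powles’s “right to reply” is, at bottom, a metadata association: a reply link lodged against a search result rather than the result’s removal. A minimal sketch of that data structure (the class and field names here are invented for illustration, not drawn from any real search engine's API):

```python
from dataclasses import dataclass, field

@dataclass
class SearchResult:
    url: str
    snippet: str
    reply_links: list = field(default_factory=list)  # lodged by the data subject

def lodge_reply(result: SearchResult, reply_url: str) -> None:
    """Attach the 'other side of the story' without removing the result."""
    result.reply_links.append(reply_url)

def render(result: SearchResult) -> str:
    # Tag the listing, much as platforms label sponsored content.
    tag = "  [a reply has been lodged]" if result.reply_links else ""
    return f"{result.url} - {result.snippet}{tag}"
```

The result stays in the index and the original page stays online; only the presentation changes, with the rectifying metadata travelling alongside the link.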

(Thanks to Charles Arthur for the Tufekci and Powles links.)

What happens when algorithms decide what should be passed on?

[link] Sunday, May 4th, 2014

One of the things we’re interested in on our research project is how rumours, news, information (and mis-information) can spread with astonishing speed across the world as a result of the Internet. Up to now I had been mostly working on the assumption that the fundamental mechanism involved is always something like the ‘retweet’ in Twitter — i.e. people coming on something that they wanted to pass on to others for whatever reason. So human agency was the key factor in viral retransmission of memes.

But I’ve just seen an interesting article in the Boston Globe which suggests that we need to think of the ‘retweeting’ effect in wider terms.

A surprise awaited Facebook users who recently clicked on a link to read a story about Michelle Obama’s encounter with a 10-year-old girl whose father was jobless.

Facebook responded to the click by offering what it called “related articles.” These included one that alleged a Secret Service officer had found the president and his wife having “S*X in Oval Office,” and another that said “Barack has lost all control of Michelle” and was considering divorce.

A Facebook spokeswoman did not try to defend the content, much of which was clearly false, but instead said there was a simple explanation for why such stories are pushed on readers. In a word: algorithms.

The stories, in other words, apparently are selected by Facebook based on mathematical calculations that rely on word association and the popularity of an article. No effort is made to vet or verify the content.
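As described, the selection comes down to word association weighted by popularity, with no truth-checking step anywhere in the pipeline. A crude sketch of such a ranking (the scoring function is invented, purely to illustrate the shape of the problem) shows why a false but word-associated story can outrank everything else:

```python
def related_articles(clicked_headline, candidates, top_n=2):
    """Rank candidate stories by headline word overlap weighted by popularity.

    candidates: list of (headline, popularity) pairs.
    Note what is absent: any check that a candidate story is true.
    """
    clicked_words = set(clicked_headline.lower().split())

    def score(item):
        headline, popularity = item
        overlap = len(clicked_words & set(headline.lower().split()))
        return overlap * popularity

    ranked = sorted(candidates, key=score, reverse=True)
    return [headline for headline, _ in ranked[:top_n]]

stories = [
    ("Michelle Obama divorce rumours swirl", 90),  # false, but word-associated
    ("Ten gardening tips for spring", 100),        # popular, but unrelated
    ("Obama family visits school", 40),
]
picks = related_articles("Michelle Obama meets jobless father's daughter", stories)
```

Here the fabricated divorce story wins on the strength of two shared words and a high popularity count, which is essentially the failure mode the Globe piece describes.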

This prompted a comment from my former Observer colleague, Emily Bell, who now runs the Tow Center at Columbia. “They have really screwed up,” she told the Globe. “If you are spreading false information, you have a serious problem on your hands. They shouldn’t be recommending stories until they have got it figured out.”

She’s right, of course. A world in which algorithms decided what was ‘newsworthy’ would be a very strange place. But we might find ourselves living in such a world, because Facebook won’t take responsibility for its algorithms, any more than Google will take responsibility for YouTube videos. These companies want the status of common carriers because otherwise they have to assume legal responsibility for the messages they circulate. And having to check everything that goes through their servers is simply not feasible.

Why Facebook and Google are buying into drones

[link] Sunday, April 20th, 2014

This morning’s Observer column.

Back in the bad old days of the cold war, one of the most revered branches of the inexact sciences was Kremlinology. In the west, newspapers, thinktanks and governments retained specialists whose job was to scrutinise every scrap of evidence, gossip and rumour emanating from Moscow in the hope that it would provide some inkling of what the Soviet leadership was up to. Until recently, this particular specialism had apparently gone into terminal decline, but events in Ukraine have led to its urgent reinstatement.

The commercial equivalent of Kremlinology is Google- and Facebook-watching. Although superficially more open than the Putin regime, both organisations are pathologically secretive about their long-term aspirations and strategies. So those of us engaged in this strange spectator-sport are driven to reading stock-market analysts’ reports and other ephemera, which is the technological equivalent of consulting the entrails of recently beheaded chickens.

It’s grisly work but someone has to do it, so let us examine what little we know and see if we can make any sense of it…

LATER: Seb Schmoller, struck by my puzzlement about why Facebook had bought Oculus Rift, sent me a link to an interesting blog post by Donald Clark, who has experience of using Oculus kit.

I’ve played around with the Oculus for some time now – played games, roared around several roller-coasters, had my head chopped off by a guillotine, walked around on the floor of the ocean looking up at a whale and shark, floated around the International Space Station using my rocket pack.
Why do I think it matters? It’s possible, just possible, that this device, or one like it, will change the world we know forever. It will certainly revolutionise the world of entertainment. Flat screen TVs have got as big and sharp as they can get. It is clear that most people do want that big, panoramic experience but there’s a limit with 2D. Climb into that screen, which is what the Oculus allows you to do, and you can look around, upwards, over your shoulder. You can then move around, do things and things can be done to you. It’s mind blowing.

The problem that Oculus has is getting to market quickly. Kickstarter was fine, for starting. Sony is right on their shoulder with project Morpheus. With this money they can accelerate R&D, have a massive marketing push and keep the price right…

His conclusion:

This is not only a ‘game’ changer, it’s an experience changer. It will change the way we spend our time, expand our experience and acquire skills. I’ve seen the effect it has with children, teenagers, adults and pensioners. It’s an experience, even at low resolution, that can change your life. You know, when you’ve tried it, that it’s coming, and when it comes it will be all-embracing. Facebook already has the world at its feet with 1.5 billion users; it now has the world on its head.

Translation: maybe the acquisition makes more sense than I thought.

The military-information complex, updated

[link] Wednesday, March 26th, 2014

In my Observer column last Sunday I contrasted the old military-industrial complex that so worried President Eisenhower with the emerging military-information complex (the core of which consists of the four Internet giants: Google, Facebook, Yahoo and Microsoft). What I should have guessed is that the two complexes are beginning to merge.

Consider, for example, this interesting Pando Daily piece by Yasha Levine, which says, in part:

Last week, I detailed how Google does much more than simply provide us civvies with email and search apps. It sells its tech to enhance the surveillance operations of the biggest and most powerful intel agencies in the world: NSA, FBI, CIA, DEA and NGA — the whole murky alphabet soup.

In some cases — like the company’s dealings with the NSA and its sister agency, the NGA — Google deals with government agencies directly. But in recent years, Google has increasingly taken the role of subcontractor: selling its wares to military and intelligence agencies by partnering with established military contractors. It’s a very deliberate strategy on Google’s part, allowing it to more effectively sink its hooks into the nepotistic, old boy government networks of America’s military-intelligence-industrial complex.

Over the past decade, Google Federal (as the company’s D.C. operation is called) has partnered up with old school establishment military contractors like Lockheed Martin, as well as smaller boutique outfits — including one closely connected to the CIA and former mercenary firm, Blackwater.

This approach began around 2006.

Around that time, Google Federal began beefing up its lobbying muscle and hiring sales reps with military/intelligence/contractor work experience — including at least one person, enterprise manager Jim Young, who used to work for the CIA. The company then began making the rounds, seeking out partnerships with established military contractors. The goal was to use their deep connections to the military-industrial complex to hard sell Google technology.

Don’t you just love that corporate moniker: Google Federal! So now we have a tripartite complex: military-industrial-information.

The antisocial side of geek elitism

[link] Sunday, January 12th, 2014

This morning’s Observer column.

Just under a year ago, Rebecca Solnit, a writer living in San Francisco, wrote a sobering piece in the London Review of Books about the Google Bus, which she viewed as a proxy for the technology industry just down the peninsula in Palo Alto, Mountain View and Cupertino.

“The buses roll up to San Francisco’s bus stops in the morning and evening,” she wrote, “but they are unmarked, or nearly so, and not for the public. They have no signs or have discreet acronyms on the front windshield, and because they also have no rear doors they ingest and disgorge their passengers slowly, while the brightly lit funky orange public buses wait behind them. The luxury coach passengers ride for free and many take out their laptops and begin their work day on board; there is of course Wi-Fi. Most of them are gleaming white, with dark-tinted windows, like limousines, and some days I think of them as the spaceships on which our alien overlords have landed to rule over us.”

Google’s robotics drive

[link] Sunday, December 29th, 2013

This morning’s Observer column.

You may not have noticed it, but over the past year Google has bought eight robotics companies. Its most recent acquisition is an outfit called Boston Dynamics, which makes the nearest thing to a mechanical mule that you are ever likely to see. It’s called Big Dog and it walks, runs, climbs and carries heavy loads. It’s the size of a large dog or small mule – about 3ft long, 2ft 6in tall, weighs 240lbs, has four legs that are articulated like an animal’s, runs at 4mph, climbs slopes up to 35 degrees, walks across rubble, climbs muddy hiking trails, walks in snow and water, carries a 340lb load, can toss breeze blocks and can recover its balance when walking on ice after absorbing a hefty sideways kick.

You don’t believe me? Well, just head over to YouTube and search for “Boston Dynamics”. There, you will find not only a fascinating video of Big Dog in action, but also confirmation that its maker has a menagerie of mechanical beasts, some of them humanoid in form, others resembling predatory animals. And you will not be surprised to learn that most have been developed on military contracts, including some issued by Darpa, the Defence Advanced Research Projects Agency, the outfit that originally funded the development of the internet.

Should we be concerned about this? Yes, but not in the way you might first think…

Read on…