Fixing Facebook: the only two options by a guy who knows how the sausage is made

James Fallows quotes from a fascinating email exchange he had with his friend Michael Jones, who used to work at Google (he was the company’s Chief Technology Advocate and later a key figure in the evolution of Google Earth):

So, how might FB fix itself? What might government regulators seek? What could make FaceBook likable? It is very simple. There are just two choices:

a. FB stays in its send-your-PII1-to-their-customers business, and then must be regulated and the customers validated precisely as Acxiom and Experian in the credit world or doctors and hospitals in the HIPAA healthcare world; or,

b. FB joins Google and ALL OTHER WEB ADVERTISERS in keeping PII private, never letting it out, and anonymously connecting advertisers with its users for their mutual benefit.

I don’t get a vote, but I like (b) and see that as the right path for civil society. There is no way that choice (a) is not a loathsome and destructive force in all things—in my personal opinion it seems that making people’s pillow-talk into a marketing weapon is indeed a form of evil.

This is why I never use Facebook; I know how the sausage is made.


  1. PII = Personally Identifiable Information 

The ethics of working for surveillance capitalists

This morning’s Observer column:

In a modest way, Kosinski, Stillwell and Graepel are the contemporary equivalents of [Leo] Szilard and the theoretical physicists of the 1930s who were trying to understand subatomic behaviour. But whereas the physicists’ ideas revealed a way to blow up the planet, the Cambridge researchers had inadvertently discovered a way to blow up democracy.

Which makes one wonder about the programmers – or software engineers, to give them their posh title – who write the manipulative algorithms that determine what Facebook users see in their news feeds, or the “autocomplete” suggestions that Google searchers see as they begin to type, not to mention the extremist videos that are “recommended” after you’ve watched something on YouTube. At least the engineers who built the first atomic bombs were racing against the terrible possibility that Hitler would get there before them. But for what are the software wizards at Facebook or Google working 70-hour weeks? Do they genuinely believe they are making the world a better place? And does the hypocrisy of the business model of their employers bother them at all?

These thoughts were sparked by reading a remarkable essay by Yonatan Zunger in the Boston Globe, arguing that the Cambridge Analytica scandal suggests that computer science now faces an ethical reckoning analogous to those that other academic fields have had to confront…

Read on

On not being evil

This morning’s Observer column:

The motto “don’t be evil” has always seemed to me to be a daft mantra for a public company, but for years that was the flag under which Google sailed. It was a heading in the letter that the two founders wrote to the US Securities and Exchange Commission prior to the company’s flotation on the Nasdaq stock market in 2004. “We believe strongly,” Sergey Brin and Larry Page declared, “that in the long term, we will be better served – as shareholders and in all other ways – by a company that does good things for the world even if we forgo some short-term gains. This is an important aspect of our culture and is broadly shared within the company.” Two years ago, when Google morphed into Alphabet – its new parent company – the motto changed. Instead of “don’t be evil” it became “do the right thing”.

Heartwarming, eh? But still a strange motto for a public corporation. I mean to say, what’s “right” in this context? And who decides? Since Google/Alphabet does not get into specifics, let me help them out. The “right thing” is “whatever maximises shareholder value”, because in our crazy neoliberal world that’s what public corporations do. In fact, I suspect that if Google decided that doing the right thing might have an adverse impact on the aforementioned value, then its directors would be sued by activist shareholders for dereliction of their fiduciary duty.

Which brings me to YouTube Kids…

Read on

The Technical is Political

This morning’s Observer column:

In his wonderful book The Swerve: How the Renaissance Began, the literary historian Stephen Greenblatt traces the origins of the Renaissance back to the rediscovery of a 2,000-year-old poem by Lucretius, De Rerum Natura (On the Nature of Things). The book is a riveting explanation of how a huge cultural shift can ultimately spring from faint stirrings in the undergrowth.

Professor Greenblatt is probably not interested in the giant corporations that now dominate our world, but I am, and in the spirit of The Swerve I’ve been looking for signs that big changes might be on the way. You don’t have to dig very deep to find them…

Read on

If at first you don’t succeed…

This morning’s Observer column:

There were just two problems with Glass. The first is that it made you look like a dork. Although Google teamed up with the company that made Ray-Bans, among other things, if you were wearing Glass then you became the contemporary version of those 1950s engineers who always had several pens and a propelling pencil in their top jacket pockets. The second problem was the killer one: Glass made everyone around you feel uneasy. They thought the technology was creepy, intrusive and privacy-destroying. Bouncers wouldn’t let wearers – whom they called “Glassholes” – into clubs. The maître d’ would discover that the table you thought you had booked was suddenly unavailable. And so on.

In the end, Google bit the bullet and withdrew the product in January 2015. Privacy advocates and fashionistas alike cheered. Technology had been put in its place. But if, like this columnist, you believe that technology has the potential to improve human lives, then your feelings were mixed…

Read on

How things change

The €2.4bn fine on Google handed down by the European Commission stemmed originally from complaints by shopping-comparison sites that changes in Google Shopping that the company introduced in 2008 had amounted to an abuse of its dominance in search. But 2008 was a long time ago in this racket, and shopping-comparison sites have become relatively small beer because Internet users researching possible purchases don’t start with a search engine any more. (Many of them start with Amazon, for example.)

This is deployed (by the Internet giants) as an argument for the futility of trying to regulate behaviour by dominant firms: the legal process of investigation takes so long that the eventual ruling is so out of date as to be meaningless.

It’s a convenient argument, but the right conclusion is not that these monsters should go unregulated. Still, it is interesting to see how the product-search scene has changed over time, as this chart shows.

Source

The obvious solution to the time-lag problem, as the Financial Times reported on January 3, is for regulators to have “powers to impose so-called ‘interim measures’ that would order companies to stop suspected anti-competitive behaviour before a formal finding of wrongdoing had been reached”. At the moment the European Commission does have powers to impose such measures, but only if it can prove that a company is causing “irrevocable harm”, a pretty high threshold. The solution: lower the threshold.

DeepMind or DeepMine?

This morning’s Observer column:

In July 2015, consultants working at the Royal Free hospital trust in London approached DeepMind, a Google-owned artificial intelligence firm that had no previous experience in healthcare, about developing software based on patient data from the trust. Four months later, the health records of 1.6 million identifiable patients were transferred to servers contracted by Google to process the data on behalf of DeepMind. The basic idea was that the company would create an app, called Streams, to help clinicians manage acute kidney injury (AKI), a serious disease that is linked to 40,000 deaths a year in the UK.

The first most people knew about this exciting new partnership was when DeepMind announced the launch of DeepMind Health on 24 February 2016…

Read on

Paranoia in the Valley

My Observer piece about US reaction to the Google fine:

The whopping €2.4bn fine levied by the European commission on Google for abusing its dominance as a search engine has taken Silicon Valley aback. It has also reignited American paranoia about the motives of European regulators, whom many Valley types seem to regard as stooges of Mathias Döpfner, the chief executive of German media group Axel Springer, president of the Federation of German Newspaper Publishers and a fierce critic of Google.

US paranoia is expressed in various registers. They range from President Obama’s observation in 2015 that “all the Silicon Valley companies that are doing business there [Europe] find themselves challenged, in some cases not completely sincerely. Because some of those countries have their own companies who want to displace ours”, to the furious off-the-record outbursts from senior tech executives after some EU agency or other has dared to challenge the supremacy of a US-based tech giant.

The overall tenor of these rants (based on personal experience of being on the receiving end) runs as follows. First, you Europeans don’t “get” tech; second, you don’t like or understand innovation; and third, you’re maddened by envy because none of you schmucks has been able to come up with a world-beating tech company…

Read on

Today was the day!

“Yes” is the answer to the question below. But the fine, €2.4bn, is much bigger than anyone expected. So who, one wonders, was managing whose expectations?

Ironic, too, that the UK is planning to leave the only organisation in the world that appears to be capable of taking on the tech giants.