Tech companies and ‘fractal irresponsibility’

Nice, insightful essay by Alexis Madrigal. Every new scandal is a fractal representation of the giant service that produces it:

On Tuesday, BuzzFeed published a memo from the outgoing Facebook chief security officer, Alex Stamos, in which he summarizes what the company needs to do to “win back the world’s trust.” And what needs to change is … well, just about everything. Facebook needs to revise “the metrics we measure” and “the goals.” It needs to ship code less often. It needs to think in new ways “in every process, product, and engineering decision.” It needs to make the user experience more honest and respectful, to collect less data, to keep less data. It needs to “listen to people (including internally) when they tell us a feature is creepy or point out a negative impact we are having in the world.” It needs to deprioritize growth and change its relationship with its investors. And finally, Stamos wrote, “We need to be willing to pick sides when there are clear moral or humanitarian issues.” YouTube (and its parent company, Alphabet), Twitter, Snapchat, Instagram, Uber, and every other tech company could probably build a list that contains many of the same critiques and some others.

People encountering problems online probably don’t think of every single one of these institutional issues when something happens. But they sense that the pattern they are seeing is linked to the fact that these are the most valuable companies in the world, and that they don’t like the world they see through those services or IRL around them. That’s what I mean by fractal irresponsibility: Each problem isn’t just one in a sequence, but part of the same whole.

Interesting also that Facebook’s chief security officer has left the company, and that his position is not going to be filled.

Google, Facebook and the power to nudge users

This morning’s Observer column:

Thaler and Sunstein describe their philosophy as “libertarian paternalism”. What it involves is a design approach known as “choice architecture” and in particular controlling the default settings at any point where a person has to make a decision.

Funnily enough, this is something that the tech industry has known for decades. In the mid-1990s, for example, Microsoft – which had belatedly realised the significance of the web – set out to destroy Netscape, the first company to create a proper web browser. Microsoft did this by installing its own browser – Internet Explorer – on every copy of the Windows operating system. Users were free to install Netscape, of course, but Microsoft relied on the fact that very few people ever change default settings. For this abuse of its monopoly power, Microsoft was landed with an antitrust suit that nearly resulted in its breakup. But it did succeed in destroying Netscape.

When the EU introduced its General Data Protection Regulation (GDPR) – which seeks to give internet users significant control over uses of their personal data – many of us wondered how data-vampires like Google and Facebook would deal with the implicit threat to their core businesses. Now that the regulation is in force, we’re beginning to find out: they’re using choice architecture to make it as difficult as possible for users to do what is best for them while making it easy to do what is good for the companies.

We know this courtesy of a very useful 43-page report just out from the Norwegian Consumer Council, an organisation funded by the Norwegian government…

Read on

Kremlinology 2.0

This morning’s Observer column:

In the bad old days of the cold war, western political and journalistic institutions practised an arcane pseudoscience called Kremlinology. Its goal was to try to infer what was going on in the collective mind of the Soviet Politburo. Its method was obsessively to note everything that could be publicly observed of the activities of this secretive cabal – who was sitting next to whom at the podium; which foreign visitors were granted an audience with which high official; who was in the receiving line for a visiting head of state; what editorials in Pravda (the official Communist party newspaper) might mean; and so on.

The Soviet empire is no more, much to Putin’s chagrin, but the world now has some new superpowers. We call them tech companies. Each periodically stages a major public event at which its leaders emerge from their executive suites to convey messages to their faithful followers and to the wider world. In the past few weeks, two such events have been held by two of the biggest powers – Google and Apple. So let’s do some Kremlinology on them…

Read on

Fixing Facebook: the only two options, by a guy who knows how the sausage is made

James Fallows quotes from a fascinating email exchange he had with his friend Michael Jones, who used to work at Google (he was a key figure in the evolution of Google Earth and later the company’s Chief Technology Advocate):

So, how might FB fix itself? What might government regulators seek? What could make FaceBook likable? It is very simple. There are just two choices:

a. FB stays in its send-your-PII¹-to-their-customers business, and then must be regulated and the customers validated precisely as Acxiom and Experian in the credit world or doctors and hospitals in the HIPAA healthcare world; or,

b. FB joins Google and ALL OTHER WEB ADVERTISERS in keeping PII private, never letting it out, and anonymously connecting advertisers with its users for their mutual benefit.

I don’t get a vote, but I like (b) and see that as the right path for civil society. There is no way that choice (a) is not a loathsome and destructive force in all things—in my personal opinion it seems that making people’s pillow-talk into a marketing weapon is indeed a form of evil.

This is why I never use Facebook; I know how the sausage is made.


  1. PII = Personally Identifiable Information 

The ethics of working for surveillance capitalists

This morning’s Observer column:

In a modest way, Kosinski, Stillwell and Graepel are the contemporary equivalents of [Leo] Szilard and the theoretical physicists of the 1930s who were trying to understand subatomic behaviour. But whereas the physicists’ ideas revealed a way to blow up the planet, the Cambridge researchers had inadvertently discovered a way to blow up democracy.

Which makes one wonder about the programmers – or software engineers, to give them their posh title – who write the manipulative algorithms that determine what Facebook users see in their news feeds, or the “autocomplete” suggestions that Google searchers see as they begin to type, not to mention the extremist videos that are “recommended” after you’ve watched something on YouTube. At least the engineers who built the first atomic bombs were racing against the terrible possibility that Hitler would get there before them. But for what are the software wizards at Facebook or Google working 70-hour weeks? Do they genuinely believe they are making the world a better place? And does the hypocrisy of the business model of their employers bother them at all?

These thoughts were sparked by reading a remarkable essay by Yonatan Zunger in the Boston Globe, arguing that the Cambridge Analytica scandal suggests that computer science now faces an ethical reckoning analogous to those that other academic fields have had to confront…

Read on

On not being evil

This morning’s Observer column:

The motto “don’t be evil” has always seemed to me to be a daft mantra for a public company, but for years that was the flag under which Google sailed. It was a heading in the letter that the two founders wrote to the US Securities and Exchange Commission prior to the company’s flotation on the Nasdaq stock market in 2004. “We believe strongly,” Sergey Brin and Larry Page declared, “that in the long term, we will be better served – as shareholders and in all other ways – by a company that does good things for the world even if we forgo some short-term gains. This is an important aspect of our culture and is broadly shared within the company.” Two years ago, when Google morphed into Alphabet – its new parent company – the motto changed. Instead of “don’t be evil” it became “do the right thing”.

Heartwarming, eh? But still a strange motto for a public corporation. I mean to say, what’s “right” in this context? And who decides? Since Google/Alphabet does not get into specifics, let me help them out. The “right thing” is “whatever maximises shareholder value”, because in our crazy neoliberal world that’s what public corporations do. In fact, I suspect that if Google decided that doing the right thing might have an adverse impact on the aforementioned value, then its directors would be sued by activist shareholders for dereliction of their fiduciary duty.

Which brings me to YouTube Kids…

Read on

The Technical is Political

This morning’s Observer column:

In his wonderful book The Swerve: How the Renaissance Began, the literary historian Stephen Greenblatt traces the origins of the Renaissance back to the rediscovery of a 2,000-year-old poem by Lucretius, De Rerum Natura (On the Nature of Things). The book is a riveting explanation of how a huge cultural shift can ultimately spring from faint stirrings in the undergrowth.

Professor Greenblatt is probably not interested in the giant corporations that now dominate our world, but I am, and in the spirit of The Swerve I’ve been looking for signs that big changes might be on the way. You don’t have to dig very deep to find them…

Read on

If at first you don’t succeed…

This morning’s Observer column:

There were just two problems with Glass. The first is that it made you look like a dork. Although Google teamed up with the company that made Ray-Bans, among other things, if you were wearing Glass then you became the contemporary version of those 1950s engineers who always had several pens and a propelling pencil in their top jacket pockets. The second problem was the killer one: Glass made everyone around you feel uneasy. They thought the technology was creepy, intrusive and privacy-destroying. Bouncers wouldn’t let wearers – whom they called “Glassholes” – into clubs. The maître d’ would discover that the table you thought you had booked was suddenly unavailable. And so on.

In the end, Google bit the bullet and withdrew the product in January 2015. Privacy advocates and fashionistas alike cheered. Technology had been put in its place. But if, like this columnist, you believe that technology has the potential to improve human lives, then your feelings were mixed…

Read on

How things change

The €2.4bn fine on Google handed down by the European Commission stemmed originally from complaints by shopping-comparison sites that the changes Google introduced to Google Shopping in 2008 amounted to an abuse of its dominance in search. But 2008 was a long time ago in this racket, and shopping-comparison sites have become relatively small beer, because Internet users researching possible purchases don’t start with a search engine any more. (Many of them start with Amazon, for example.)

The Internet giants deploy this as an argument for the futility of trying to regulate the behaviour of dominant firms: the legal process of investigation takes so long that any eventual ruling is too out of date to be meaningful.

This is a convenient argument, but the conclusion to draw is not that we should give up on regulating these monsters. Still, it is interesting to see how the product-search scene has changed over time, as this chart shows.

[Chart: where Internet users start their product searches, over time. Source]

The obvious solution to the time-lag problem is — as the Financial Times reported on January 3 — for regulators to have “powers to impose so-called ‘interim measures’ that would order companies to stop suspected anti-competitive behaviour before a formal finding of wrongdoing had been reached”. At the moment the European Commission does have powers to impose such measures, but only if it can prove that a company is causing “irrevocable harm” — a pretty high threshold. The solution: lower the threshold.