Foresight, ignored

I’ve been reading “The Anatomy of a Large-Scale Hypertextual Web Search Engine”, the original academic paper in which the co-founders of Google, Sergey Brin and Larry Page, outlined their search engine and its properties. It’s a fascinating read for various reasons, not least the evidence it presents of the pair’s originality. And at the end there are two appendices, the first of which displays an eerie prescience about the extent to which advertising would be a malignant business model for any enterprise aiming at objective search. Here it is:

Appendix A: Advertising and Mixed Motives

Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users. For example, in our prototype search engine one of the top results for cellular phone is “The Effect of Cellular Phone Use Upon Driver Attention”, a study which explains in great detail the distractions and risk associated with conversing on a cell phone while driving. This search result came up first because of its high importance as judged by the PageRank algorithm, an approximation of citation importance on the web [Page, 98]. It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. For this type of reason and historical experience with other media [Bagdikian 83], we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.

Since it is very difficult even for experts to evaluate search engines, search engine bias is particularly insidious. A good example was OpenText, which was reported to be selling companies the right to be listed at the top of the search results for particular queries [Marchiori 97]. This type of bias is much more insidious than advertising, because it is not clear who “deserves” to be there, and who is willing to pay money to be listed. This business model resulted in an uproar, and OpenText has ceased to be a viable search engine. But less blatant bias are likely to be tolerated by the market. For example, a search engine could add a small factor to search results from “friendly” companies, and subtract a factor from results from competitors. This type of bias is very difficult to detect but could still have a significant effect on the market. Furthermore, advertising income often provides an incentive to provide poor quality search results. For example, we noticed a major search engine would not return a large airline’s homepage when the airline’s name was given as a query. It so happened that the airline had placed an expensive ad, linked to the query that was its name. A better search engine would not have required this ad, and possibly resulted in the loss of the revenue from the airline to the search engine. In general, it could be argued from the consumer point of view that the better the search engine is, the fewer advertisements will be needed for the consumer to find what they want. This of course erodes the advertising supported business model of the existing search engines. However, there will always be money from advertisers who want a customer to switch products, or have something that is genuinely new. But we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.
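The appendix leans on PageRank, which the paper describes only as “an approximation of citation importance on the web”. For readers who haven’t met it, here is a minimal sketch of the core idea — power iteration over a toy link graph, with an illustrative damping factor of 0.85. This is not the production algorithm (which also had to cope with link spam and web scale); it just shows the principle that a page is important if important pages link to it.

```python
# Minimal PageRank sketch (power iteration) on a toy link graph.
# Illustrative only: damping factor and iteration count are conventional
# choices, not values taken from the paper's production system.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # a page distributes its rank evenly over its outlinks
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # dangling page: spread its rank evenly over all pages
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

# A page that everyone links to accumulates the most rank.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "C" attracts the most rank here
```

The point of the cellular-phone anecdote above is exactly this mechanism: the safety study ranked first because many pages pointed to it, not because anyone had paid for the position.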

In today’s edition of his regular newsletter ‘BIG’, Matt Stoller reports something Rana Foroohar, author of Don’t Be Evil: The Case Against Big Tech (my review of which is here), said when he asked her what was the most surprising or weird thing she had learned while working on her book. “I don’t know if it’s weird”, she replied,

but the most surprising thing I learned while researching the book was that the founders of Google, Sergey and Larry, had basically predicted the key problems with surveillance capitalism and where they would lead us back in their original paper on search, written while they were Stanford grad students. At the very end, in the appendix, there’s a paragraph where they admit that the targeted advertising business model could be misused by companies or other entities in ways that would hurt users. This is kind of a bombshell revelation given that search engines say everything they do is for users. The fact that this paper hasn’t gotten more attention makes me think people aren’t reading….which is itself part of the problem of attention capture I describe in the book.

Matt’s book Goliath: The 100-year war between monopoly power and democracy is well worth a read, btw.

How “Don’t Be Evil” panned out

My Observer review of Rana Foroohar’s new book about the tech giants and their implications for our world.

“Don’t be evil” was the mantra of the co-founders of Google, Sergey Brin and Larry Page, the graduate students who, in the late 1990s, had invented a groundbreaking way of searching the web. At the time, one of the things the duo believed to be evil was advertising. There’s no reason to doubt their initial sincerity on this matter, but when the slogan was included in the prospectus for their company’s flotation in 2004 one began to wonder what they were smoking. Were they really naive enough to believe that one could run a public company on a policy of ethical purity?

The problem was that purity requires a business model to support it and in 2000 the venture capitalists who had invested in Google pointed out to the boys that they didn’t have one. So they invented a model that involved harvesting users’ data to enable targeted advertising. And in the four years between that capitulation to reality and the flotation, Google’s revenues increased by nearly 3,590%. That kind of money talks.
Sign up for Bookmarks: discover new books in our weekly email
Read more

Rana Foroohar has adopted the Google mantra as the title for her masterful critique of the tech giants that now dominate our world…

Read on

Fines don’t work. To control tech companies we have to hit them where it really hurts

Today’s Observer comment piece

If you want a measure of the problem society will have in controlling the tech giants, then ponder this: when it became clear that the US Federal Trade Commission was about to impose a fine of $5bn (£4bn) on Facebook for violating a decree governing privacy breaches, the company’s share price went up!

This is a landmark moment. It’s the biggest ever fine imposed by the FTC, the body set up to police American capitalism. And $5bn is a lot of money in anybody’s language. Anybody’s but Facebook’s. It represents just a month of revenues and the stock market knew it. Facebook’s capitalisation went up $6bn with the news. This was a fine that actually increased Mark Zuckerberg’s personal wealth…

Read on

Street View leads us down some unexpected pathways

This morning’s Observer column:

Street View was a product of Google’s conviction that it is easier to ask for forgiveness than for permission, an assumption apparently confirmed by the fact that most jurisdictions seemed to accept the photographic coup as a fait accompli. There was pushback in a few European countries, notably Germany and Austria, with citizens demanding that their properties be blurred out; there was also a row in 2010 when it was revealed that Google had for a time collected and stored data from unencrypted domestic wifi routers. But broadly speaking, the company got away with its coup.

Most of the pushback came from people worried about privacy. They objected to images showing men leaving strip clubs, for example, protesters at an abortion clinic, sunbathers in bikinis and people engaging in, er, private activities in their own backyards. Some countries were bothered by the height of the cameras – in Japan and Switzerland, for example, Google had to lower their height so they couldn’t peer over fences and hedges.

These concerns were what one might call first-order ones, ie worries triggered by obvious dangers of a new technology. But with digital technology, the really transformative effects may be third- or fourth-order ones. So, for example, the internet leads to the web, which leads to the smartphone, which is what enabled Uber. And in that sense, the question with Street View from the beginning was: what will it lead to – eventually?

One possible answer emerged last week…

Read on

Finally, a government takes on the tech companies

This morning’s Observer column:

On Monday last week, the government published its long-awaited white paper on online harms. It was launched at the British Library by the two cabinet ministers responsible for it – Jeremy Wright of the Department for Digital, Culture, Media and Sport (DCMS) and the home secretary, Sajid Javid. Wright was calm, modest and workmanlike in his introduction. Javid was, well, more macho. The social media companies had had their chances to put their houses in order. “They failed,” he declared. “I won’t let them fail again.” One couldn’t help feeling that he had one eye on the forthcoming hustings for the Tory leadership.

Nevertheless, this white paper is a significant document…

Read on

Google’s big move into ethics-theatre backfires

This morning’s Observer column:

Given that the tech giants, which have been ethics-free zones from their foundations, owe their spectacular growth partly to the fact that they have, to date, been entirely untroubled either by legal regulation or scruples about exploiting taxation loopholes, this Damascene conversion is surely something to be welcomed, is it not? Ethics, after all, is concerned with the moral principles that affect how individuals make decisions and how they lead their lives.

That charitable thought is unlikely to survive even a cursory inspection of what is actually going on here. In an admirable dissection of the fourth of Google’s “principles” (“Be accountable to people”), for example, Prof David Watts reveals that, like almost all of these principles, it has the epistemological status of pocket lint or those exhortations to be kind to others one finds on evangelical websites. Does it mean accountable to “people” in general? Or just to Google’s people? Or to someone else’s people (like an independent regulator)? Answer comes there none from the code.

Warming to his task, Prof Watts continues: “If Google’s AI algorithms mistakenly conclude I am a terrorist and then pass this information on to national security agencies who use the information to arrest me, hold me incommunicado and interrogate me, will Google be accountable for its negligence or for contributing to my false imprisonment? How will it be accountable? If I am unhappy with Google’s version of accountability, to whom do I appeal for justice?”

Quite so. But then Google goes and doubles down on absurdity with its prestigious “advisory council” that “will consider some of Google’s most complex challenges that arise under our AI Principles, such as facial recognition and fairness in machine learning, providing diverse perspectives to inform our work”…

Read on

After I’d written the column, Google announced that it was dissolving its ethics advisory council. So we had to add this:

Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity.

That still leaves the cynical absurdity of Google’s AI ‘principles’ to be addressed, though.

What makes a ‘tech’ company?

The BlackRock Blog points out that something strange is going on in the investment world.

MSCI and S&P are updating their Global Industry Classification Standard (GICS), a framework developed in 1999, to reflect major changes to the global economy and capital markets, particularly in technology.

Take Google, a company long synonymous with “tech” and internet software. Google parent Alphabet derives the bulk of its revenue from advertising, but also makes money from apps and hardware, and operates side ventures including Waymo, a unit that makes self-driving cars. Decisions about what makes a “tech” giant are not as simple as they once were.

The sector classification overhaul, set in motion last year, will begin in September and affect three of the 11 sector classifications that divide the global stock market. A newly created Communications Services sector will replace a grouping that is currently called Telecommunications Services. The new group will be populated by legacy Telecom stocks, as well as certain stocks from the Information Technology and Consumer Discretionary categories.

What does this mean?

Facebook and Alphabet will move from Information Technology to Communications Services in GICS-tracking indexes. Meanwhile, Netflix will move from Consumer Discretionary to Communications Services. None of what the media has dubbed the FANG stocks (Facebook, Amazon.com, Netflix and Google parent Alphabet) will be classified as Information Technology after the GICS changes, perhaps a surprise to those who think of internet innovation as “tech.” The same applies to China’s BAT stocks (Baidu, Alibaba Group and Tencent). All of these were Information Technology stocks before the changes; none will be after.

Or, in a tabular view:

This change is probably only significant for index funds, but still, it must rather dent the self-image of the ‘tech’ boys to be categorised as merely “communications services”!

Why, sooner or later, societies are going to have to rein in the tech giants

My OpEd piece in yesterday’s Observer:

Spool forward to the tragic case of Molly Russell, the 14-year-old who killed herself after exploring her depression on Instagram. When her family looked into her account, they found sombre material about depression and suicide. Her father said that he believed the Facebook-owned platform had “helped kill my daughter”. This prompted Matt Hancock, the health secretary, to warn social media platforms to “purge” material relating to self-harm and suicide or face legislation that would compel them to do so. In response, Instagram and Pinterest (another social media outfit) issued the standard bromides about how they were embarking on a “full review” of their policies etc.

So is Molly’s case a crisis or a scandal? You know the answer. Nothing much will change because the business models of the platforms preclude it. Their commercial imperatives are remorselessly to increase both the number of their users and the intensity of those users’ “engagement” with the platforms. That’s what keeps the monetisable data flowing. Tragedies such as Molly Russell’s suicide are regrettable (and of course have PR downsides) but are really just the cost of running such a profitable business.

Asking these companies to change their business model, therefore, is akin to “asking a giraffe to shorten its neck”, as Shoshana Zuboff puts it in her fiery new book, The Age of Surveillance Capitalism…

Read on

Google pays more in EU fines than it does in taxes

From The Inquirer

INTERNET GIANT Google now pays more in European fines than it does in taxes, the firm’s fourth-quarter earnings have revealed.

Google owner Alphabet reported Q4 revenues up 22 per cent to $39.28bn, while annual revenues were up 23 per cent to $136.8bn.

The company also took the time to separate out “European Commission fines” in the consolidated statements of income in its accounts. These increased from $2.7bn in 2017 to $5.1bn in 2018, with a further €50m already set to be added to the bill in its first-quarter 2019 accounts, thanks to French data protection authority CNIL.

That compares with a provision for income taxes of just $4.2bn for 2018, or 12 per cent of its pre-tax income.
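The Inquirer’s comparison is easy to verify from the figures it quotes. A quick back-of-envelope check (all amounts in $bn, taken from the article above; the implied pre-tax income is my own arithmetic, not a figure from the piece):

```python
# Back-of-envelope check of the Inquirer's figures (all in $bn).
ec_fines_2018 = 5.1      # European Commission fines booked in 2018
income_tax_2018 = 4.2    # provision for income taxes, 2018
tax_rate = 0.12          # stated effective rate on pre-tax income

# The 12 per cent rate implies pre-tax income of roughly $35bn.
pretax_income = income_tax_2018 / tax_rate
print(f"Implied pre-tax income: ~${pretax_income:.0f}bn")
print(f"Fines exceed taxes by ${ec_fines_2018 - income_tax_2018:.1f}bn")
```

So on 2018’s numbers the Commission took in about $0.9bn more from Google than every tax authority combined.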

Shoshana Zuboff’s new book

Today’s Observer carries a five-page feature about Shoshana Zuboff’s The Age of Surveillance Capitalism, consisting of an intro by me followed by a Q&A between me and the author.

LATER: Nick Carr has a perceptive review of the book in the LA Review of Books. John Thornhill also had a good long review in last Saturday’s Financial Times, sadly behind a paywall.