Making algorithms accountable


As the world becomes increasingly driven by algorithms that are, effectively, ‘black boxes’, issues of responsibility, liability and accountability are becoming acute. Two researchers, Nicholas Diakopoulos of the University of Maryland, College Park, and Sorelle Friedler of Data & Society, are proposing five principles that might be helpful. They are:

Responsibility. “For any algorithmic system, there needs to be a person with the authority to deal with its adverse individual or societal effects in a timely fashion. This is not a statement about legal responsibility but, rather, a focus on avenues for redress, public dialogue, and internal authority for change”.

Explainability. “Any decisions produced by an algorithmic system should be explainable to the people affected by those decisions. These explanations must be accessible and understandable to the target audience; purely technical descriptions are not appropriate for the general public.”

Accuracy. “The principle of accuracy suggests that sources of error and uncertainty throughout an algorithm and its data sources need to be identified, logged, and benchmarked.”

Auditability. “The principle of auditability states that algorithms should be developed to enable third parties to probe and review the behavior of an algorithm… While there may be technical challenges in allowing public auditing while protecting proprietary information, private auditing (as in accounting) could provide some public assurance.”

Fairness. “All algorithms making decisions about individuals should be evaluated for discriminatory effects. The results of the evaluation and the criteria used should be publicly released and explained.”
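The auditability principle is more concrete than it might sound. Here is a minimal sketch of one well-known third-party technique: re-run the same applicant through a black-box decision system twice, varying only one attribute, and count how often the decision flips. Everything here is hypothetical — `loan_decision` is a made-up stand-in for a system under audit, not any real product:

```python
# Sketch of black-box auditing via paired probes. All names and data
# are hypothetical; a real auditor would only see inputs and outputs
# of the deployed system.

def loan_decision(applicant):
    # Hypothetical model under audit: approves on income, but also
    # keys off the postcode -- a common proxy for protected groups.
    return applicant["income"] > 30000 and applicant["postcode"] != "X1"

def paired_probe(applicants, attribute, value_a, value_b):
    """Run each applicant twice, identical except for one attribute,
    and count decisions that flip -- evidence the attribute matters."""
    flips = 0
    for a in applicants:
        p, q = dict(a), dict(a)
        p[attribute], q[attribute] = value_a, value_b
        if loan_decision(p) != loan_decision(q):
            flips += 1
    return flips

applicants = [{"income": 40000, "postcode": "Z9"},
              {"income": 25000, "postcode": "Z9"}]
print(paired_probe(applicants, "postcode", "Z9", "X1"))  # 1
```

Note that this kind of probe needs no access to the algorithm’s internals, which is why the researchers suggest private auditing could work even where proprietary code cannot be disclosed.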

Not rocket science, but useful. What I like about this work is that it adds value. We all know by now that algorithmic decision-making is problematic. The next step is to figure out what to do about it, given that algorithms are here to stay.

Facebook’s (shirked) editorial responsibilities – contd.

The story continues. This from today’s Guardian:

The scrutiny over Facebook’s treatment of editorial content has been intensifying for months, reflecting the site’s unrivaled power and influence in distributing news alongside everything else its users share on the site.

Fake or misleading news spreads like wildfire on Facebook because of confirmation bias, a quirk in human psychology that makes us more likely to accept information that conforms to our existing world views. The conspiracy theories are also amplified by a network of highly partisan media outlets with questionable editorial policies, including a website called the Denver Guardian peddling stories about Clinton murdering people and a cluster of pro-Trump sites founded by teenagers in Veles, Macedonia, motivated only by the advertising dollars they can accrue if enough people click on their links.

The Pew Research Center found that 62% of Americans get all or some of their news from social media, of which Facebook accounts for the lion’s share. Yet an analysis by BuzzFeed found that 38% of posts shared on Facebook by three rightwing politics sites included “false or misleading information”, while three large leftwing pages did so 19% of the time.

LATER: This from BuzzFeed:

“If someone is right-wing, and all their friends are right-wing, and that is the news they share on Facebook, then that is the bubble they have created for themselves and that is their right,” said the longtime Facebook engineer. “But to highlight fake news articles in the [news] feed, to promote them so they get millions of shares by people who think they are real, that’s not something we should allow to happen. Facebook is getting played by people using us to spread their bullshit.”

Spot on. That’s the key to it.

Facebook: still growing

From today’s New York Times:

The social network on Wednesday reached the latest milestones in its quest to dominate the world, topping 1.79 billion monthly visitors as of the end of September, up 16 percent from a year ago. Facebook also added a record number of new daily users and said for the first time that more than one billion people regularly used its network exclusively on their mobile device every month.

And those numbers do not even include Facebook’s other properties, such as the photo-sharing service Instagram and the messaging service WhatsApp.

Facebook’s user growth defies the usual trajectories for social media companies, which often start strong out of the gate and then sharply slow down. Twitter, which added four million new visitors last quarter, now serves a user base roughly one-sixth the size of Facebook’s. Snapchat, while popular among young users, has about 150 million daily users, about half as many as Twitter…

Facebook, Instagram and Twitter help cops to track minorities

From Technology Review:

Our love of social media makes it easy for us to be spied on—so could we just use it less? An investigation by the American Civil Liberties Union reveals that Facebook, Twitter, and Instagram supplied police in Ferguson and Baltimore with data that was used to track minorities. The companies packaged up and provided data from public posts to a company called Geofeedia, which analyzes digital content to provide surveillance information to law enforcement agencies. The companies have now cut off, or at least modified, their supply of data—but it’s a reminder of how we all, perhaps unwittingly, enable a surveillance society. Spying as a result of digitizing our lives isn’t a new phenomenon, but it’s getting worse because we’re all so keen to connect. Much of the data is public, too, so simply banning police access won’t work. Tristan Harris, an ex-Googler, has an idea, borne out of a desire to be less beholden to the smartphone, that could ease the problem by encouraging us to step back from Facebook et al. He wants to introduce new criteria, standards, and even a Hippocratic oath for software designers to stop apps from being so addictive. If we can wean ourselves off social media even a little, its power for spying could, perhaps, be commensurately diminished.

How times change

[Chart: the world’s most valuable companies, 2006 vs. 2016]

“Last Friday, for a brief period of time, Apple, Alphabet (Google), Microsoft, Facebook and Amazon were the five most valuable publicly traded companies in the world, relegating ExxonMobil, the long-time leader in this category, to sixth place. And even though the oil company has since clawed its way back into the Top 5, the composition of the list clearly illustrates how important the digital economy has become over the past few years.

Ten years ago, the list of most valuable companies was dominated by big oil and multinational conglomerates. These days, it’s companies such as Google, Facebook and Amazon dominating the headlines.”

Source

Digital Dominance: forget the ‘digital’ bit

Some reflections on the symposium on “Digital Dominance: Implications and Risks” held by the LSE Media Policy Project on July 8, 2016.

In thinking about the dominance of the digital giants[1] we are ‘skating to where the puck has been’ rather than to where it is headed. It’s understandable that scholars who are primarily interested in questions like media power, censorship and freedom of expression should focus on the impact that these companies are having on the public sphere (and therefore on democracy). And these questions are undoubtedly important. But this focus, in a way, reflects a kind of parochialism that the companies themselves do not share. For they are not really interested in our information ecosystem per se, nor in democracy either, if it comes to that. They have bigger fish to fry.

How come? Well, there are two reasons. The first is that although those of us who work in media and education may not like to admit it, our ‘industries’ are actually pretty small beer in industrial terms. They pale into insignificance compared with, say, healthcare, energy or transportation. Secondly, surveillance capitalism, the business model of the two ‘pure’ digital companies — Google and Facebook — is probably built on an unsustainable foundation, namely the mining, processing, analysis and sale of humanity’s digital exhaust. Their continued growth depends on a constant increase in the supply of this incredibly valuable (and free) feedstock. But if people, for one reason or another, were to decide that they would prefer to be doing something other than incessantly checking their phones, Googling or updating their social media statuses, then the evaporation of those companies’ stock market valuations would be a sight to behold. And while one can argue that such an outcome seems implausible because of network effects and other factors, a glance at the history of the IT industry might give you pause for thought.

The folks who run these companies understand this. For if there is one thing that characterizes the leaders of Google and Facebook it is their determination to take the long, strategic view. This is partly a matter of temperament, but it is powerfully boosted by the way their companies are structured: the founders hold the ‘golden shares’ that ensure their continued control, regardless of the opinions of Wall Street analysts or ordinary shareholders. So if you own Google or Facebook stock and you don’t like what Larry Page or Mark Zuckerberg are up to, your only option is to dispose of your shares.

Being strategic thinkers, these corporate bosses are positioning their organizations to make the leap from the relatively small ICT industry into the much bigger worlds of healthcare, energy and transportation. That’s why Google, for example, has significant investments in each of these sectors. Underpinning these commitments is an understanding that their unique mastery of cloud computing, big data analytics, sensor technology, machine learning and artificial intelligence will enable them to disrupt established industries and ways of working in these sectors and thereby greatly widen their industrial bases. So in that sense mastery of the ‘digital’ is just a means to much bigger ends. This is where the puck is headed.

So, in a way, Martin Moore’s comparison[2] of the digital giants of today with the great industrial trusts of the early 20th century is apt. But it underestimates the extent of the challenges we are about to face, for our contemporary versions of these behemoths are likely to become significantly more powerful, and therefore even more worrying for democracy.


  1. Or GAFA — Google, Apple, Facebook, Amazon — as our Continental friends call them, incorrectly in my view: Apple and Amazon are significantly different from the two ‘pure’ digital outfits. 

  2. Tech Giants and Civic Power, King’s College London, 2016. 

Algorithmic power — and bias

This morning’s Observer column:

[In the 1960s] the thought that we would one day live in an “information society” that was comprehensively dependent on computers would have seemed fanciful to most people.

But that society has come to pass, and suddenly the algorithms that are the building blocks of this world have taken on a new significance because they have begun to acquire power over our everyday lives. They determine whether we can get a bank loan or a mortgage, and on what terms, for example; whether our names go on no-fly lists; and whether the local cops regard us as potential criminals or not.

To take just one example, judges, police forces and parole officers across the US are now using a computer program to decide whether a criminal defendant is likely to reoffend or not. The basic idea is that an algorithm is likely to be more “objective” and consistent than the more subjective judgment of human officials. The algorithm in question is called Compas (Correctional Offender Management Profiling for Alternative Sanctions). When defendants are booked into jail, they respond to a Compas questionnaire and their answers are fed into the software to generate predictions of “risk of recidivism” and “risk of violent recidivism”.

It turns out that the algorithm is fairly good at predicting recidivism and less good at predicting the violent variety. So far, so good. But guess what? The algorithm is not colour blind…
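The “not colour blind” finding is about disparate error rates: the kind of disparity ProPublica documented in its Compas analysis, where defendants of one group who did not reoffend were flagged as high risk far more often than defendants of another. A toy sketch of that check, using entirely made-up data (the field names and numbers here are illustrative, not drawn from the real Compas dataset):

```python
# Toy group-wise false-positive-rate check. `score` is the algorithm's
# prediction (1 = flagged high risk), `label` the observed outcome
# (0 = did not reoffend). All data below is invented for illustration.

def false_positive_rate(records, group):
    """Share of a group's non-reoffenders wrongly flagged as high risk."""
    negatives = [r for r in records if r["group"] == group and r["label"] == 0]
    flagged = [r for r in negatives if r["score"] == 1]
    return len(flagged) / len(negatives) if negatives else 0.0

def audit(records, groups):
    """Per-group false positive rates; equal rates for equal treatment."""
    return {g: false_positive_rate(records, g) for g in groups}

records = [
    {"group": "A", "score": 1, "label": 0},
    {"group": "A", "score": 0, "label": 0},
    {"group": "A", "score": 0, "label": 0},
    {"group": "A", "score": 0, "label": 0},
    {"group": "B", "score": 1, "label": 0},
    {"group": "B", "score": 1, "label": 0},
    {"group": "B", "score": 0, "label": 0},
    {"group": "B", "score": 0, "label": 0},
]
print(audit(records, ["A", "B"]))  # {'A': 0.25, 'B': 0.5}
```

In this invented example, group B’s non-reoffenders are flagged twice as often as group A’s: exactly the asymmetry that “fairly good at predicting recidivism” on average can conceal.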

Read on

Common sense on algorithmic power

Wonderful OpEd piece by Zeynep Tufekci on the current row about Facebook’s supposed ideological bias. I particularly like this:

With algorithms, we don’t have an engineering breakthrough that’s making life more precise, but billions of semi-savant mini-Frankensteins, often with narrow but deep expertise that we no longer understand, spitting out answers here and there to questions we can’t judge just by numbers, all under the cloak of objectivity and science.

Well worth reading in full.