As the world becomes increasingly driven by algorithms that are, effectively, ‘black boxes’, issues of responsibility, liability and accountability are becoming acute. Two researchers, Nicholas Diakopoulos of the University of Maryland, College Park, and Sorelle Friedler of Data & Society, propose five principles that might help. They are:
Responsibility. “For any algorithmic system, there needs to be a person with the authority to deal with its adverse individual or societal effects in a timely fashion. This is not a statement about legal responsibility but, rather, a focus on avenues for redress, public dialogue, and internal authority for change.”
Explainability. “Any decisions produced by an algorithmic system should be explainable to the people affected by those decisions. These explanations must be accessible and understandable to the target audience; purely technical descriptions are not appropriate for the general public.”
Accuracy. “The principle of accuracy suggests that sources of error and uncertainty throughout an algorithm and its data sources need to be identified, logged, and benchmarked.”
Auditability. “The principle of auditability states that algorithms should be developed to enable third parties to probe and review the behavior of an algorithm… While there may be technical challenges in allowing public auditing while protecting proprietary information, private auditing (as in accounting) could provide some public assurance.”
Fairness. “All algorithms making decisions about individuals should be evaluated for discriminatory effects. The results of the evaluation and the criteria used should be publicly released and explained.”
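The fairness principle, at least, maps onto a concrete test. As a minimal sketch, not drawn from the authors' piece, here is how one common check, the “four-fifths rule” disparate-impact ratio, could be computed over a batch of binary decisions. The function name, group labels, and data are all invented for illustration:

```python
# Hypothetical sketch of a disparate-impact check (the "four-fifths rule").
# All names and data here are illustrative; a real audit would run over
# actual decision records and a legally meaningful protected attribute.

def disparate_impact_ratio(decisions, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs reference group.

    decisions: list of 1 (favorable) / 0 (unfavorable) outcomes
    groups:    list of group labels, parallel to decisions
    """
    def favorable_rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return favorable_rate(protected) / favorable_rate(reference)

# Toy loan-approval decisions (invented data).
decisions = [1, 0, 1, 1, 0, 0, 1, 1, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(decisions, groups, protected="B", reference="A")
print(f"disparate impact ratio: {ratio:.2f}")
# Group A is approved at 3/5, group B at 2/5, giving a ratio of 0.67.
# A ratio below 0.8 is a conventional red flag worth investigating.
```

A single ratio is of course not a full fairness evaluation, but publishing even a check like this, with the criteria used, is the kind of disclosure the principle calls for.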
Not rocket science, but useful. What I like about this work is that it adds value. We all know by now that algorithmic decision-making is problematic. The next step is to figure out what to do about it, given that algorithms are here to stay.