There’s good news from the scholarly community working on the assessment of fairness in algorithms. Computer scientists and statisticians are developing a host of new measures of fairness, aimed at giving companies, policymakers, and advocates new tools to assess fairness in different contexts.
The essential insight of the movement is that the field needs many different measures of fairness to capture the variety of normative concepts that are used in different business and legal contexts. Alexandra Chouldechova, Assistant Professor of Statistics and Public Policy at Heinz College, says “There is no single notion of fairness that will work for every decision context or for every goal.”
To find the right measure for the job at hand, she advises, “Start with the context in which you’re going to apply [your decision], and work backwards from there.”
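To make this concrete, here is an illustrative sketch (not from any specific toolkit) of two commonly discussed fairness measures computed on hypothetical model outputs. The group labels, predictions, and outcomes below are invented for demonstration only; real assessments would use actual decision data and domain-appropriate definitions.

```python
# Sketch of two fairness measures on hypothetical binary predictions.
# All data below is made up for illustration.

def selection_rate(preds):
    # Fraction of individuals the model flags (predicts 1 for).
    return sum(preds) / len(preds)

def false_positive_rate(preds, labels):
    # Among true negatives (label 0), how often the model wrongly flags.
    negatives = [p for p, y in zip(preds, labels) if y == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

# Hypothetical predictions (1 = flagged) and true outcomes for two groups.
group_a = {"preds": [1, 0, 1, 1, 0, 1], "labels": [1, 0, 1, 0, 0, 1]}
group_b = {"preds": [0, 0, 1, 0, 0, 1], "labels": [1, 0, 1, 0, 0, 0]}

# Demographic parity compares selection rates across groups.
dp_gap = selection_rate(group_a["preds"]) - selection_rate(group_b["preds"])

# Error-rate balance compares false positive rates across groups.
fpr_gap = (false_positive_rate(group_a["preds"], group_a["labels"])
           - false_positive_rate(group_b["preds"], group_b["labels"]))

print(f"Demographic parity gap: {dp_gap:.2f}")    # 0.33
print(f"False positive rate gap: {fpr_gap:.2f}")  # 0.08
```

The two gaps can disagree, which is precisely the point: a model can look fair on one measure and unfair on another, so the right measure depends on the decision context.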
This issue came to a head in the controversy surrounding the COMPAS score.
The public must be confident in the fairness of algorithms, or a backlash will threaten their very real and substantial benefits.
A recent article in the New York Times raises the question: do algorithms discriminate? The question is legitimate, but the emphasis is wrong. Instead of treating data analytics as a problem, we should welcome the new opportunities for improved decision-making that it enables. And we need to cooperate to identify and address any disparate impacts data-driven decision-making might have.