On November 7, 2017, I made a short presentation at the AI Caucus event on AI and ethics, which is summarized in this blog post.
The application of big data analytics has already improved lives in innumerable ways. It has improved the way teachers instruct students, doctors diagnose and treat patients, lenders find creditworthy customers, financial service companies control money laundering and terrorist financing, and governments deliver services. It promises even more transformative benefits with self-driving cars, smart cities, and a host of other applications that will drive fundamental improvements throughout society and the economy. Government policymakers have worked with developers and users of these advanced analytic techniques to promote and protect these publicly beneficial innovations, and they should continue to do so.
In many circumstances, current law and regulation provide an adequate framework for strong public protection. Most of the legal concerns that animate public discussions can be resolved through strong and vigorous enforcement of rules that apply to advanced and traditional ...
Institutions involved in predictive modeling are using ever more advanced techniques to predict outcomes of interest, from credit scoring to facial recognition to spam detection. Institutions assess the performance of these models through standard measures such as accuracy (the number of correct predictions divided by the total number of predictions) or error rate (the number of incorrect predictions divided by the total number of predictions). They can also assess the fairness of their predictions with respect to vulnerable groups using measures such as predictive parity across groups, statistical parity, or equal error rates.
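The performance and fairness measures described above can be sketched in a few lines of Python. This is a minimal illustration using hypothetical labels and predictions for two groups, not a production fairness audit; the group data and function names are invented for the example.

```python
def accuracy(y_true, y_pred):
    """Correct predictions divided by total predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def error_rate(y_true, y_pred):
    """Incorrect predictions divided by total predictions."""
    return 1.0 - accuracy(y_true, y_pred)

def positive_rate(y_pred):
    """Share of positive predictions; compared across groups for statistical parity."""
    return sum(y_pred) / len(y_pred)

def precision(y_true, y_pred):
    """Share of positive predictions that are correct; compared across groups
    for predictive parity."""
    positives = [t for t, p in zip(y_true, y_pred) if p == 1]
    return sum(positives) / len(positives) if positives else 0.0

# Hypothetical outcomes (1 = favorable) for two groups, A and B.
group_a_true, group_a_pred = [1, 0, 1, 1, 0, 0], [1, 0, 1, 0, 0, 1]
group_b_true, group_b_pred = [1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 0]

for name, y_true, y_pred in [("A", group_a_true, group_a_pred),
                             ("B", group_b_true, group_b_pred)]:
    print(name,
          round(error_rate(y_true, y_pred), 2),   # equal error rates: compare across groups
          round(positive_rate(y_pred), 2),        # statistical parity: compare across groups
          round(precision(y_true, y_pred), 2))    # predictive parity: compare across groups
```

Comparing each column across groups A and B shows how a model can look acceptable on overall accuracy while diverging on one or more of these fairness measures at the same time, which is why institutions typically examine several of them together.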
Institutions also face legal and ethical obligations to explain the basis of their consequential decisions to those who are affected, to regulators, and to the general public. The idea is that people have rights, grounded in autonomy and dignity, to understand why institutions make the decisions they do. When predictive models are ...