The application of big data analytics has already improved lives in innumerable ways. It has improved the way teachers instruct students, doctors diagnose and treat patients, lenders find creditworthy customers, financial service companies control money laundering and terrorist financing, and governments deliver services. It promises even more transformative benefits: self-driving cars, smart cities, and a host of other applications that will drive fundamental improvements throughout society and the economy. Government policymakers have worked with developers and users of these advanced analytic techniques to promote and protect these publicly beneficial innovations, and they should continue to do so.
In many circumstances, current law and regulation provide an adequate framework for strong public protection. Most of the legal concerns that animate public discussions can be resolved through strong and vigorous enforcement of rules that apply to advanced and traditional analytical techniques alike. Using artificial intelligence and machine learning to discriminate against vulnerable groups is a violation of existing law, even if new technological means are used.
Moreover, new policy measures could be counterproductive, creating a substantial risk of overly broad or overly specific rules, lingering uncertainty as the ramifications of new rules are clarified, the prospect of significantly increased validation costs, delays in the introduction of socially beneficial new services, and an organizational culture of passive compliance with external mandates rather than an openness to agile responses to rapidly changing circumstances and new challenges.
But organizations cannot rely on compliance with existing law alone as an adequate response to the ethical challenges of big data analytics. They must go beyond current law to respond fully to the ethical challenges that drive public concerns.
Organizations can meet their ethical obligations and persuade policymakers and the public that they are responsible users of data analytics only if they have policies and procedures in place for ethical review, publicly available ethical principles to which they adhere, and a transparent communication program that allows them to describe in an accountable way their policies, procedures, and principles.
The public oversight system in place in many jurisdictions throughout the world works through an active and vigorous advocacy community, scholarly research and media investigations to unearth and focus attention on problems. It relies on alert, informed regulators and policymakers who work with existing law to respond to problems and recommend changes when the limits of their competence and jurisdiction are reached. This oversight system functions effectively when organizations anticipate challenges and are prepared to react to these public pressures.
This issue brief focuses on one part of a framework of responsible data use, namely ethical principles that institutions could use to assess the data and models they use and to make modifications when needed. It seeks to further international and national public discussion among policymakers, organizations developing and using data and models, activists, scholars, ethicists, and civil society.
As the Privacy Shield discussions take place in Washington today between European Union regulators and United States government officials, it is important to link the sustainability of this framework for cross-border data flows to a strong organizational culture of ethical data practices and accountability to the interests of all stakeholders involved in data practices. This SIIA issue brief seeks to deepen and develop the conversation about maintaining and enhancing such an organizational culture.
The SIIA Issue Brief, “Principles for Ethical Data Use,” can be found here.