Recent technological developments have led to the rise of “big data” analytics, including machine learning and artificial intelligence. These technologies promise substantial opportunities for consumers, businesses, and the global economy as a whole. But as this technological evolution continues, it does not come without risk.
Over the last few years, algorithmic fairness has become an issue of serious debate. Most recently, Cathy O’Neil released “Weapons of Math Destruction” and Frank Pasquale published “The Black Box Society,” both of which examine the role algorithms can play in exacerbating discrimination. SIIA responded to these works in a blog post, arguing that tech leaders must act quickly to ensure algorithmic fairness.
Going further, on Friday, November 4, 2016, SIIA released an issue brief on algorithmic fairness. It argues that a framework of responsible use is central to addressing these issues and to developing workable solutions.
First, the SIIA issue brief discusses changes in computer technology. It delves into the importance of data volume when applying analytic techniques: larger, more representative data sets yield more reliable results and help minimize unintended discrimination. Beyond data sets, the brief elaborates on machine-learning techniques that are not “pre-programmed with humanly created rules” and whose “operation can sometimes resist human comprehension.” Often these programs do not attempt to establish causality, but instead surface correlations and relationships found in the data.
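To make the correlation-versus-causation point concrete, here is a minimal sketch with hypothetical data (not drawn from the brief): two quantities can track each other closely because a third factor drives both, yet a purely correlational analysis links them directly.

```python
# Illustrative sketch with hypothetical numbers: a correlational analysis
# finds a strong relationship, but neither variable causes the other
# (a shared driver, such as temperature, moves both).
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ice_cream_sales = [20, 35, 60, 90, 110, 95]  # hypothetical monthly figures
drowning_cases = [2, 4, 7, 11, 13, 10]       # rise together via warm weather

r = pearson(ice_cream_sales, drowning_cases)
print(f"correlation: {r:.2f}")  # strong, yet not causal
```

A model trained on such data would happily use either variable to predict the other, which is exactly why the brief flags reliance on correlated proxies as a fairness concern.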
The brief also examines challenges to fairness. As discussed above, prediction without causality can have unintended consequences. Civil rights groups, for example, have raised concerns about the disparate impact such correlation-based prediction has had, and will continue to have, on minority groups, and they worry that these techniques will be used to target minorities for discriminatory surveillance. The brief cites former Attorney General Eric Holder, who questioned these techniques because they rely on variables, such as education level, family circumstances, and employment history, that are correlated with race but have nothing to do with the crime.
So where do we go from here? The SIIA issue brief argues that policymakers should establish a framework for responsible use. FTC Commissioner Terrell McSweeny has called for a framework of “responsibility by design” that would test algorithms at the development stage for potential bias, and the Obama Administration has similarly called for “equal opportunity by design.” SIIA suggests supplementing these proposals with ongoing internal assessment of algorithms in use to “ensure that properly designed algorithms continue to operate properly.”
Policymakers should recognize that the benefits of new data analytics tools are numerous and that these tools can be used fairly. It is imperative, though, to acknowledge that flawed outcomes are possible and to design algorithms in ways that are fair to all.
The full issue brief can be found here.