On November 7, 2017 I made a short presentation to the AI Caucus event on AI and ethics, which is summarized in this blog.
Erik Brynjolfsson and Andrew McAfee, two best-selling MIT business school professors, call AI and machine learning “the most important general-purpose technology of our era.”
I think that’s right. And it’s not just Facebook and Google; SIIA members use it to provide personalized education services and advanced business intelligence services. SIIA’s Diane Pinto writes a weekly blog on AI developments: locating Anne Frank’s betrayer, fighting cancer, post-hurricane insurance response, detecting counterfeit goods. From farming to pharmaceuticals. From AI-controlled autonomous vehicles to clinical decision support software. It is everywhere.
The technology will make us collectively wealthier and more capable of providing for human welfare, human rights, human justice and the fostering of the virtues we need to live well in communities. We should welcome it and do all that we can to promote it.
As with any new technology, there are challenges. Work is one of them. Will robots take all the jobs?
Last year SIIA released an issue brief on AI and the Future of Work meant to deal with the possibility that our economy will need less human labor. In the past, we’ve always been able to increase production while maintaining full employment in the long run. Doing this is not easy or automatic. It takes investment in education and training and there’s always a short-term dislocation.
That’s how we shifted from an agricultural economy to a manufacturing and a service one. In the 20th century, we pioneered the notion of universal education to deal with that enormous transition, and created the world’s most literate and numerate workforce. It was a progressive, far-sighted approach that still pays dividends today.
But with machines moving up the value chain from muscle and brawn to cognitive and emotional tasks, from routine tasks that could be replaced using old-style “if-then” computer programming to non-routine tasks that can be cheaply and easily performed by machine learning systems, this time it might be different.
Policymakers need to think about ways to focus the technology on human-centered automation and make the investments needed to ensure that humans are trained to work well with machines and are capable of performing the many new tasks that have not been automated and won’t be, at least in our lifetime.
There are ethical challenges beyond work. Will the new technologies be fair and transparent? Will the benefits be distributed to all? Will they reinforce existing inequalities?
Organizations that develop and use AI systems need ethical principles to guide them through the challenges that are already upon us and those that lie ahead. That’s what SIIA tried to do in “Ethical Principles for AI and Data Analytics,” an issue brief that we published and distributed at the event. It draws on the classical ethical traditions of rights, welfare, and virtue to enable organizations to examine their data practices carefully and to test their algorithms rigorously to ensure that they are doing what we want them to do. It surveys many of the ethical principles that have been developed to deal with complex moral situations, including the Belmont principles of beneficence, respect for persons and justice that were developed to deal with human subject experimentation in universities.
We need to recover our ability to think in ethical terms, and it is our hope that these principles will be a practical, actionable guide.
Organizations need processes as well as principles to assess their AI systems and ensure that the data and models used satisfy these ethical norms. The issue brief describes data and model governance programs that can be used for this purpose. And it applies these principles and processes to one important ethical challenge – ensuring that algorithms in use are fair and unbiased.
Ethical principles and processes will be key to helping companies navigate the challenges ahead, but some have suggested going beyond that to a regulatory response.
There’s a place for some of that – in specific areas where problems are urgent and must be addressed in order to deploy the technology at all. Think of the need to understand liability for autonomous cars or to set a regulatory framework at the FDA for clinical decision support systems.
In general, we don’t know enough yet to regulate AI as such.
Moreover, we might never need to regulate AI as such. AI is a technique, not a substantive enterprise. It is not one single thing; so it is really unlikely that there could ever be a general approach or a federal agency regulating AI as such. It would be like having a regulatory agency for statistical analysis!
This does not mean the technology is unregulated or that its environment is like the Wild West. Current law and regulation still apply. There’s no get-out-of-jail-free card for using AI. It is not a defense for violating the law. You don’t escape liability under the fair lending or fair housing laws by explaining that you were using AI technology to discriminate.
There might be a need to adapt current law or regulation in specific cases where there’s a gap. For instance, the Consumer Financial Protection Bureau is looking right now at whether anything new needs to be done to address the use of alternative data and models that use machine learning techniques to assess creditworthiness.
Indeed, a useful thing for administrators and policymakers to think about is whether there is sufficient AI expertise in government agencies to ensure proper oversight of the new technology as it is being applied in areas within their jurisdiction.
In the meantime, organizations need guidance to adapt to the many ethical challenges they will face in bringing this technology to fruition. The principles of beneficence, respect for persons, justice and the fostering of virtues can provide a roadmap and some important guardrails for AI and advanced data analytics.