Yesterday, the House Oversight and Government Reform Committee’s Subcommittee on Information Technology held a hearing titled, “Game Changers: Artificial Intelligence Part III, Artificial Intelligence and Public Policy.” The purpose of this hearing was to hear from experts in the artificial intelligence (AI) space to examine the potential role for the government and private sector in addressing challenges posed by AI technology, as well as to consider the merits and costs of government solutions to some of these challenges. Overall, the hearing was very insightful, and many of the panelists expressed views similar to SIIA’s.
While AI technology presents incredible potential, there are certain challenges that come along with its implementation. These challenges, such as ethical considerations, global competitiveness, privacy, and the future of work, have been examined many times by stakeholders. SIIA has also highlighted many areas in which the government can assist in addressing these challenges while facilitating the expansion of responsible AI use.
One of these challenges that SIIA and the hearing panelists identified is the future of work. Indeed, the rise of AI and automation has stirred fears among many that humans will rapidly be displaced from their jobs. However, while there will be some displacement, it is a mistake to believe that humans will no longer be necessary in the workplace. New jobs will be created in new fields as a result of AI and automation. As a result, there is, and will continue to be, high demand for skilled workers in data science, computer science, and engineering as AI continues to grow. Without these workers, the promise of AI and automation cannot fully be realized.
Here is where the government can step in. It can seek to expand STEM education in schools, and it should encourage companies to have training and re-training programs in place, both in the event that worker roles are automated and so that American workers can continue working in tandem with new technology. Hearing panelist Gary Shapiro said, “We need to make sure that our workforce is prepared for these jobs in the future, and that means helping people whose jobs are displaced gain the skills that they need for new ones.”
Global competitiveness is also a challenge where the government can act. Just last year, China released its plan to transform itself into a major AI powerhouse by increasing investment in AI research and development, promoting public-private partnerships, and focusing on career and technical education. France recently unveiled its own AI strategy which provides incentives for investment in AI research and development, addresses the shortage of AI talent, and emphasizes ethics in the creation of new technologies. Just as these countries have placed a major focus on AI as a matter of policy, so too should the United States. Even though the United States is currently the world leader in AI, it needs to invest more in the research and development of AI and public-private partnerships if it wishes to remain competitive in the global marketplace.
As with other technologies, privacy is one of the biggest concerns with the implementation of AI. AI technology may be able to discern sensitive information about people using only information that those people have provided, which can pose security risks. Panelist Ben Buchanan expressed serious concerns about this, saying, “Much of the data used in machine learning systems is intensely personal, revealing, and appropriately private . . . There is a risk of breaches by hackers, of misuse by those who collect or store the data, and of secondary use, where data that is collected for one purpose and later re-appropriated for another.” Addressing this common concern, SIIA Senior Vice President for Public Policy Mark MacCarthy wrote in a CIO column that the government can encourage a privacy regime that “allows these beneficial uses of inferred information but encourages trust in the system by restricting information uses that pose a significant risk of harm.”
Finally, the government also has the opportunity to work with industry and other stakeholder groups to address ethical concerns surrounding AI. By having these discussions and identifying these areas, the government can help establish a framework for responsible AI use and develop norms and best practices. Regarding AI and norms, panelist Jack Clark said, “This unprecedented, rapidly developing, powerful technological capability brings with it unique threats that are worse than existing ones. Traditional arms control regimes or other policy tools are insufficient.” Clark said he feels that the government has an important role to play in filling certain gaps in existing laws.
As a guide, SIIA released ethical principles for AI and data analytics. Panelist Terah Lyons also said, “as technologies are applied in areas such as criminal justice, it is critical for the Partnership to raise and address concerns related to the inevitable bias in data sets used to train algorithms. It is also critical for us to engage with those using such algorithms in the justice system so they understand the limits of these technologies and how they work.” It is also important to note that many existing laws are already in place that government actors can use to address concerns about bias. For example, applying existing anti-discrimination laws can not only help eliminate negative consequences but can also improve the accuracy of algorithms.
As AI continues to develop, more challenges will inevitably arise. It is important for government and industry to continue working together to identify and address these challenges, especially in hearings like this one held by the House Oversight Committee. Additionally, in considering new regulations, government bodies need to weigh possible negative impacts against the numerous benefits of AI so as not to hinder innovation. We welcome and appreciate the time this Committee has taken in examining appropriate responses to AI challenges and look forward to the opportunity to work with Congress to address them.