By Sharan Sawlani
Discourse around artificial intelligence (AI) policy continues to grow, animated by attention to generative AI and large language models, facial recognition, algorithmic bias, and geopolitical competition. Across the United States, state legislators have been especially active: dozens of bills have been introduced in 2023 alone. We have identified the following key legislative themes that help make sense of what is happening at the state level.
- How AI is used in employment decisions. After last year’s joint report by the EEOC and Department of Justice, states around the country are paying close attention to the use of artificial intelligence and automated technology in the employment space. In Vermont, H 114 by State Rep. Presley (D) seeks to restrict the electronic monitoring and use of employment-related automated decision systems. In Texas, HB 3633, proposed by State Rep. Lalani (D), aims to establish a program by the Texas Workforce Commission to train individuals in skills related to AI systems. At the local level, we’re also tracking the New York City AI Bias Law (Local Law 144) that prohibits employers’ use of automated decision making tools to screen candidates for hiring or promotion unless a bias audit has been conducted.
- Transparency into automated decision-making tools. While algorithmic bias has long been at the forefront of the AI policy landscape, several states are taking further steps to create guidelines around the use of automated decision tools. For example, AB 331 in California, SB 5356 in Washington, and the Stop Discrimination by Algorithms Act (B25-0114) in Washington D.C. all seek to increase the transparency and explainability of automated decision tools. While the Washington bill is specific to government procurement and use of ADM systems, the proposed legislation in California and Washington D.C. would also cover private entities if passed.
- Generative AI. ChatGPT and other generative AI tools have become incredibly popular since the end of 2022 and have raised a host of questions for legislators surrounding deepfakes and the misuse of this technology. As a result, states are taking different approaches to addressing the issue. In Illinois, the Artificial Intelligence Consent Act (HB 3285) would require AI content creators to include a disclosure on the bottom of an image or video stating that it is not authentic, unless the individual or group whose likeness is being used has consented. Similarly, HB 721 in California would create a working group to study the feasibility of digital content provenance guidelines and standards. Elsewhere, legislators in Massachusetts are looking to regulate the commercial use of large-scale generative AI models with S 31.
- AI in the mental health context. Finally, some states are looking to regulate the use of AI in providing mental health services. H 1974 in Massachusetts and HB 4695 in Texas would create guidelines for when and how mental health service providers are allowed to use AI tools.
- More study of AI’s benefits and risks. Following advancements made in AI over the past year, states around the country are creating commissions or advisory councils to study, discuss, and audit the benefits and risks that come with the technology. Legislators in states like Hawaii (SR 123), Texas (HB 2060), California (AB 302), Maryland (HB 1068 & HB 1034), Rhode Island (SB 117), and Connecticut (SB 1103) are hoping to establish general commissions to study AI risk. Other states, like California (SB 398), are looking to conduct impact assessments within specific contexts (such as AI used within the state’s Department of Justice).