November 20, 2024, marks the launch of the International Network of AI Safety Institutes, convening in San Francisco. The gathering over the coming days underscores the importance of global collaboration and sustained funding to address the challenges posed by rapidly advancing AI technologies. With governments and industry stepping up to fund research and enhance cooperation, this global initiative represents a critical step toward safeguarding AI’s development and deployment. The initial members of the International Network of AI Safety Institutes are Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States. The stated goal of the convening, according to the U.S. Department of Commerce, is to kick-start the Network’s technical collaboration ahead of the AI Action Summit in Paris in February 2025.

New Initiatives Announced

During the launch, it was announced that the U.S., South Korea, and Australia joined forces with philanthropic organizations to commit over $11 million to advance AI safety research. The funding will focus on critical areas such as detecting synthetic content and mitigating its associated risks. The largest annual contribution comes from the U.S. Agency for International Development (USAID), which has allocated $3.8 million to support AI safety initiatives in partner countries abroad. South Korea has pledged $7.2 million over the next four years for research and development aimed at addressing synthetic content risks. Additional support is coming from Australia’s national science agency, the John S. and James L. Knight Foundation, the Omidyar Network, and the AI Safety Fund—an independent grant-making organization supported by leading AI companies including SIIA member company Google.

The U.S. AISI also announced the formation of the Testing Risks of AI for National Security (TRAINS) Taskforce, which it will chair and which brings together partners from across the U.S. Government to identify, measure, and manage the emerging national security and public safety implications of AI. According to a release from NIST, the Taskforce will enable coordinated research and testing of advanced AI models across critical national security and public safety domains, including radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, and conventional military capabilities. The Taskforce will initially include representatives from the following federal agencies:

  • the Department of Defense, including the Chief Digital and Artificial Intelligence Office (CDAO) and the National Security Agency;
  • the Department of Energy and ten of its National Laboratories;
  • the Department of Homeland Security, including the Cybersecurity and Infrastructure Security Agency (CISA); and
  • the National Institutes of Health (NIH) at the Department of Health and Human Services.

This effort is a catalyst for the whole-of-government approach to AI safety directed by the recent National Security Memorandum on AI, and it is a starting point for further work that can be built upon over time as membership expands across the federal government.

Why This Matters: U.S. AISI’s Future Remains Unclear 

The International Network of AI Safety Institutes aims to align participating countries on best practices for testing and evaluating AI models. This week’s summit marks the network’s first official gathering, with the U.S. AI Safety Institute serving as its inaugural chair. However, the future of the U.S. AI Safety Institute, established within the National Institute of Standards and Technology (NIST) through President Biden’s 2023 AI Executive Order, remains uncertain. President-elect Donald Trump has pledged to repeal the executive order, putting the institute’s long-term prospects in jeopardy.

Despite widespread support, legislative efforts to formally establish the U.S. AI Safety Institute have yet to succeed in Congress. As the international community begins to align on shared AI safety priorities, the uncertain future of the U.S. AISI could present challenges to sustained leadership in this space.

SIIA and industry at large have engaged in advocacy for authorizing a body in the federal government to lead a coordinated approach to AI safety and security issues, with a focus on frontier models. This includes the Future of AI Innovation Act in the Senate and the AI Advancement and Reliability Act in the House, which would allow the U.S. government to continue engaging in AI safety, thereby protecting national security and maintaining U.S. leadership in AI innovation. We believe authorizing this body is essential to avoid ceding global leadership to foreign jurisdictions, maintain strong relationships with U.S. firms, and preempt state legislation that could create onerous regulations and liability risk in the absence of federal regulation.

SIIA, as a proud member of the AI Safety Institute Consortium (AISIC), looks forward to continuing its engagement with the U.S. AISI.