By Paul Lekas

Last week, as promised by President Trump’s January 23 executive order, the Office of Management and Budget (OMB) issued two memos on federal AI use and acquisition (M-25-21 and M-25-22) that rescind and replace memos issued during the Biden administration (M-24-10 and M-24-18). There’s a lot to say about the changes made by OMB Director Russell Vought, who signed the memos. In this piece, we highlight five key takeaways.

The tone has changed, but the core objectives remain.

Consistent with the Trump Administration’s early messaging on AI, the new memos foreground innovation and efficient acquisition, respectively. This framing contrasts with the Biden-era memos, which made governance and risk management the starting point. But the shift in framing does not mean the Trump Administration is forgoing guardrails. Indeed, the new guidance reflects a recognition of the need to ensure that AI systems acquired and adopted by the federal government are reliable, cost effective, and fit for the government’s mission.

The focus on “high-impact AI systems” avoids the confusion of the “rights-impacting/safety-impacting” framework.

The new OMB memos bring clarity to what was a confusing approach to high-risk AI systems based on identifying those that are “safety-impacting” and those that are “rights-impacting.” OMB M-25-21 adopts a single category – “high-impact” – that better aligns with how companies and experts view AI systems requiring extra oversight. The memo defines AI as “high-impact” “when its output serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety.” Later in the document, OMB provides a more detailed definition of the decisions that fall into the high-impact category.

This new approach addresses concerns SIIA raised in December 2023. At that time, we cautioned that the approach then under consideration (and later adopted) by OMB was too expansive, would cover many low-risk uses of AI, and would tap limited government resources. We urged a focus on high-risk use cases, which aligns with the “high-impact” approach of OMB M-25-21.

While the presumptive high-impact purposes set out in section 6 of the Appendix generally align with those identified in the Biden-era guidance, there are notable changes. Among these, as SIIA urged, is the removal of a category of education-related use cases. We had raised concerns that the broad-brush approach of OMB M-24-10 would impede the adoption of AI tools to advance educational objectives.

Revised, streamlined governance will reduce bureaucratic barriers and promote public trust.

In addition to collapsing the distinction between rights-impacting and safety-impacting AI, OMB M-25-21 streamlines requirements for agency governance and risk management. The new guidance, as set out in section 4.b of the OMB M-25-21 Appendix, includes robust yet flexible requirements for pre-deployment testing, AI impact assessments, ongoing monitoring, and public feedback. Flexibility, however, is key. These requirements are less prescriptive than those in the Biden-era guidance. Notably, the new guidance removes a series of required “additional minimum practices for rights-impacting AI” that were designed to assess the impact of AI on “equity and fairness” to “mitigate algorithmic discrimination.” At the time, SIIA expressed concern that these requirements “will on balance impede the responsible adoption of AI without adding meaningful safeguards to affected individuals.”

Taken together, the new guidance will reduce compliance burdens for both vendors and the government. This is an innovation- and mission-driven approach better designed to meet the objectives of improving governance and strengthening public trust in the federal government’s use of AI without overburdening limited government resources or impeding the adoption of AI tools that can address critical government needs.

Appointing Chief AI Officers and creating a new Chief AI Officer Council will improve agency adoption and interagency coordination.

We are pleased that the new OMB guidance calls for each agency to designate a Chief AI Officer. This was among the Biden-era reforms that SIIA strongly supported. We are also pleased that the new guidance gives CAIOs greater flexibility to “promote AI innovation, adoption, and governance” aligned with the needs of their respective agencies.

The new guidance also fills a critical gap for interagency coordination by establishing a Chief AI Officer Council. A focal point for interagency coordination will, as OMB proposes, facilitate “[b]reaking barriers to AI adoption and ensuring the government is maximizing efficiency.”

OMB “playbooks” are an innovative way to advance adoption and limit the burden on agencies.

On the acquisition side, the new OMB guidance, like its predecessor, emphasizes performance-based acquisition and the need to avoid vendor lock-in. The new guidance goes further by recommending that agencies conduct broad market research and by calling on OMB to develop “playbooks” for specific types of AI. This is a welcome addition. We have supported the work of NIST in developing playbooks for managing risk by sector and AI classification, but to date there has been no effort to complement this with playbooks for government innovation. Centralizing this function in OMB, rather than leaving each agency to develop its own approach, will assist the Chief AI Officer Council and the government at large in driving AI adoption.

Conclusion

The revised OMB memos represent an important step forward in modernizing the federal government, which SIIA outlined as a key policy priority for 2025. Incorporating AI into government operations can significantly enhance public services, improve operational efficiency, and help agencies deliver on their missions in a streamlined manner. As noted, the revised memos advance this transformation by enabling greater flexibility in procuring AI, and their focus on “high-impact” uses of AI ensures that government oversight and resources are proportionate to potential risks. This approach strikes a balance between innovation and accountability. The technology industry plays a critical role in partnering with the government to build trustworthy AI solutions for the public sector. SIIA looks forward to engaging with the Trump Administration as it begins to implement these memos and develops other AI policies that will accelerate federal AI adoption and strengthen U.S. leadership in AI.