Trump administration announces overhaul of federal AI policy

The White House Office of Management and Budget (OMB) this week advanced a major policy initiative fulfilling President Trump’s Executive Order, Removing Barriers to American Leadership in Artificial Intelligence, with the release of two transformative memos, M-25-21 and M-25-22. These revised policies outline the federal government’s updated approach to AI use and procurement, and signal a paradigm shift in federal policy away from outdated bureaucracy and toward a forward-leaning, innovation-focused administration.
“President Trump recognizes that AI is a technology that will define the future. This administration is focused on encouraging and promoting American AI innovation and global leadership, which starts with utilizing these emerging technologies within the federal government,” said Lynne Parker, Principal Deputy Director of the White House Office of Science and Technology Policy (OSTP). The revised memos, she added, “offer much-needed guidance on AI adoption and procurement that will remove unnecessary bureaucratic restrictions, allow agencies to be more efficient and cost-effective, and support a competitive American AI marketplace.”
Trump’s Executive Order underscores the urgency of positioning the U.S. at the forefront of global AI innovation. To this end, OMB, in close collaboration with OSTP and the Assistant to the President for Science and Technology, crafted M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, and M-25-22, Driving Efficient Acquisition of Artificial Intelligence in Government. The two memos replace previous guidance with streamlined strategies that prioritize national competitiveness, data privacy, public trust, and operational excellence.
For their part, industry experts are cautiously optimistic about the new guidance. While the memos emphasize innovation and risk management, there is uncertainty about their practical implementation. Quinn Anex-Ries, a senior policy analyst at the Center for Democracy and Technology, noted that the guidance maintains core AI governance requirements but emphasized that its faithful execution remains to be seen. He also expressed concerns regarding the Department of Government Efficiency’s potential use of AI systems that might compromise sensitive data, highlighting the need for timely and transparent implementation across all federal agencies.
Cody Venzke, senior policy counsel with the American Civil Liberties Union (ACLU), acknowledged the administration’s commitment to ensuring AI systems are safe, fair, and aligned with the public interest, but also emphasized the importance of agencies verifying that AI tools are fair and safe before deployment, especially when such tools influence critical decisions affecting individuals’ lives. The ACLU highlighted the necessity for ongoing monitoring and human oversight to maintain public trust in AI applications.
Legal experts, meanwhile, observed that the new memos, while introducing a distinct approach, build upon themes from previous administrations concerning responsible AI use within the federal government.
M-25-21 lays the foundation for accelerating AI integration across federal agencies by removing structural bottlenecks and reimagining governance models. The memorandum instructs agencies to implement AI not as a peripheral enhancement, but as a mission-critical tool for modern public service. It establishes three central pillars – innovation, governance, and public trust – as the basis of AI deployment strategy.
Central to M-25-21 is the newly expanded role of the Chief AI Officer (CAIO). Each agency must designate a CAIO with the authority to lead strategic AI initiatives and ensure compliance with governance standards. These officers are now positioned as champions of innovation rather than mere compliance administrators. They are tasked with evaluating AI maturity within their agencies, identifying opportunities for growth, and overseeing risk management, especially for high-impact AI systems that influence legally binding decisions or affect the rights and safety of the public.
The memo redefines AI governance as an enabler, not an obstacle. Rather than layering new approval processes atop existing structures, M-25-21 aligns AI accountability with current IT governance models. Agencies are encouraged to promote rapid innovation by establishing clear expectations and delegating decision-making power, particularly when deploying AI tools in contexts such as healthcare, public safety, or benefits administration.
Public trust is a consistent theme in the policy’s language. M-25-21 requires agencies to identify and report AI use cases annually, with special attention paid to “high-impact” applications, which are defined as AI systems whose outputs serve as a principal basis for decisions that have legal, material, or safety implications. In these cases, agencies must conduct risk assessments, maintain rigorous performance monitoring, and suspend use if systems fail to meet safety, fairness, or transparency criteria.
Meanwhile, M-25-22 complements these efforts by overhauling federal acquisition strategies to ensure agencies can efficiently and responsibly procure AI technologies. The memo responds to long-standing frustrations with procurement delays and inefficiencies and directs agencies to adopt performance-based contracting and to reduce their reliance on a narrow pool of vendors.
The policy also places emphasis on American-made innovation. Consistent with the Executive Order’s mandate to “buy American,” agencies are encouraged to prioritize domestically developed AI systems, an approach aimed at promoting national security and economic strength by ensuring that critical government infrastructure relies on trustworthy, transparent, and secure technologies.
Greg Barbaccia from the Office of the Federal Chief Information Officer emphasized that these memos help agencies overcome stagnation created by outdated acquisition models. According to him, the revised approach reflects a government-wide commitment to fiscal responsibility, innovation, and public trust.
“Federal agencies have experienced a widening gap in adopting AI and modernizing government technology, largely due to unnecessary bureaucracy and outdated procurement processes. OMB’s new policies demonstrate that the government is committed to spending American taxpayer dollars efficiently and responsibly, while increasing public trust through the Federal use of AI,” Barbaccia said.
A key innovation in M-25-22 is the requirement for cross-functional acquisition teams. When planning to procure AI systems, agencies must convene internal experts from legal, IT, procurement, privacy, civil liberties, and operational backgrounds to evaluate AI’s potential impact, especially in high-risk or sensitive domains. This collaborative structure is intended to ensure that decisions are informed by diverse perspectives and aligned with agency missions.
M-25-22 also addresses vendor lock-in – which has been a significant challenge in past technology contracts – by requiring agencies to prioritize interoperability, transparency, and open standards. Solicitation language must reflect these priorities, ensuring that the government retains sufficient data and intellectual property rights to adapt, modify, and monitor AI systems over time.
To support agencies in implementing the new requirements, the General Services Administration is developing a shared repository of best practices, sample contract language, and procurement templates tailored to various types of AI such as biometrics and generative AI. These resources are designed to standardize approaches across government and reduce the learning curve for agency acquisition teams.
Both memoranda emphasize the development of an AI-ready workforce. Agencies must invest in staff training, both for technical specialists and non-technical employees who may interact with AI tools. Through centralized government-wide training programs and internal capacity-building initiatives, the memos aim to democratize AI literacy and foster a culture of continuous improvement.
Trump’s vision, as expressed in his Executive Order, is not merely to use AI more, but to use AI better. Through M-25-21 and M-25-22, the administration has provided a roadmap it says will achieve that goal.
Importantly, both memos establish mechanisms for accountability and transparency. Agencies must publish AI strategies, risk assessments, and inventories of AI use cases. High-impact AI applications must undergo pre-deployment testing, periodic audits, and independent reviews. If a system is found to pose undue risks or fails to deliver on its intended performance, agencies are directed to modify or discontinue its use.
The administration is touting the practical effects of the new policies by highlighting pioneering use cases, such as the Department of Justice’s (DOJ) deployment of AI to deepen its understanding of the global drug market. By analyzing data patterns and market behaviors, DOJ is using AI tools to disrupt drug trafficking networks. DOJ says its application of AI enhances public safety by enabling faster, more targeted investigations and interventions.
DOJ’s December 2024 report, Artificial Intelligence and Criminal Justice, outlines how AI is being implemented in areas including sentencing, pretrial decisions, risk assessments, police surveillance, predictive policing, forensic analysis, and prison management. While recognizing the potential of AI to enhance fairness, efficiency, and accuracy, the report also underscores the risks these technologies pose, especially regarding bias, privacy violations, and the potential erosion of civil liberties.
A significant portion of the report addresses AI-driven identification and surveillance technologies, especially the growing use of biometrics. Facial recognition technology receives close scrutiny, with the report noting its expanded use by federal and local agencies for identifying suspects, witnesses, and missing persons. While acknowledging its operational benefits, the report emphasizes persistent concerns over misidentification and its chilling effect on constitutionally protected activities like protest and anonymous assembly. It highlights the variation in policies among jurisdictions and recommends consistent safeguards and oversight to prevent civil rights infringements.
The report further evaluated AI applications in forensic analysis, where technologies like machine learning are being used to interpret DNA evidence, digital forensics, and drug identification. These tools, while promising faster and more accurate outcomes, still face challenges in transparency and evidentiary validation. The complexity of AI systems often makes it difficult to explain forensic conclusions in court, raising questions about due process and the reliability of evidence.
Ultimately, the report stresses that the adoption of AI in criminal justice must not come at the cost of constitutional rights, public trust, or ethical accountability. The deployment of these tools should be carefully controlled, subject to public input, and continually evaluated to ensure they promote fairness and justice in the American legal system.