Google, Microsoft, xAI Agree to Pre-Release AI Reviews by US Government
In a significant expansion of federal oversight, Google DeepMind, Microsoft, and Elon Musk's xAI have agreed to allow the U.S. government to examine new artificial intelligence models before they are released publicly. The Commerce Department's Center for AI Standards and Innovation (CAISI) announced Tuesday that it will conduct pre-deployment evaluations and targeted research on these frontier AI systems.
CAISI, which began reviewing models from OpenAI and Anthropic in 2024, said it has completed 40 evaluations to date. Both OpenAI and Anthropic have renegotiated their existing agreements with the center to align with priorities set by President Donald Trump’s administration, according to the announcement.
"Pre-deployment evaluations help us identify potential risks early, from bias to security vulnerabilities," said Dr. Elena Marchetti, CAISI’s director of evaluation. "By integrating these checks before a model reaches the market, we can ensure that critical safety standards are met."
Background
CAISI was established within the National Institute of Standards and Technology to address the unique challenges posed by advanced AI systems. The center originally focused on voluntary reviews with a handful of companies, but the new agreements with Google, Microsoft, and xAI mark a widening of its scope.

The move comes amid broader global debates on AI regulation. The U.S. has favored a voluntary, industry-led approach, but critics argue that independent pre-release testing is essential given the speed of AI development. The Trump administration has emphasized American leadership in AI while also expressing concerns about national security risks.
Industry analyst Sarah Kline remarked, "This is an early signal that even the largest tech firms are willing to submit to government scrutiny to maintain public trust and avoid potential legislative crackdowns."

What This Means
The agreement sets a precedent for closer collaboration between the federal government and AI developers. For companies, joining the program may offer reputational benefits and a smoother path to eventual regulatory compliance.
For consumers and businesses, the reviews could lead to earlier detection of flaws in AI products, such as inaccurate outputs, privacy leaks, or harmful biases. However, the process remains voluntary, and companies are not required to delay releases based on CAISI’s findings.
"The effectiveness of these evaluations will depend on how transparent the companies are and whether the government can keep pace with rapid model updates," said Marchetti. "Our goal is to build a framework that evolves with the technology."
Looking ahead, observers expect other major AI players like Meta and Amazon to face similar pressure to join. The program may also influence international standards, as other nations watch the U.S. approach to pre-launch AI testing.
For now, the reviews cover only frontier models—the most advanced and capable systems. CAISI has not disclosed whether it will extend testing to smaller, specialized models used in healthcare, finance, or education.