An advocacy organization is pressing the Trump administration to require security screening of advanced artificial intelligence models prior to their public release and to bar firms that do not pass those checks from receiving government contracts.
The group highlighted concerns about Anthropic's Mythos, saying that model could lower barriers to executing complex cyberattacks and therefore pose national security risks. It urged the administration to develop formal processes to evaluate upcoming frontier models from major developers for potential use in cyberattacks and weapons development.
In a letter to administration officials, the organization said companies would need to pass such reviews to remain eligible for federal contracts. The group noted that the U.S. Center for AI Standards and Innovation (CAISI) currently conducts reviews of some models under voluntary agreements with several developers, including OpenAI, Anthropic, Google, Microsoft and xAI.
The group recommended that CAISI, the U.S. Center for AI Standards and Innovation, spearhead the creation of mandatory requirements. It also asked Congress to establish a standing enforcement office within the U.S. Department of Commerce to ensure compliance with those rules.
The suggested criteria for mandatory oversight would apply to firms that either spend $100 million or more annually on compute to train frontier models or derive at least $500 million in annual revenue from AI products and services. The group pointed out that California implemented comparable thresholds last year for safety reporting requirements.
Summary
The advocacy group seeks a formal vetting regime for frontier AI models to detect capabilities that could facilitate cyberattacks or weapons development. It proposes tying federal procurement eligibility to successful review, assigning CAISI a leadership role in developing standards, and creating a permanent Commerce enforcement office. The proposed oversight would focus on firms meeting specified compute-spend or AI-revenue thresholds.
Key points
- Advocacy group requests mandatory pre-release screening of advanced AI models to identify cyber and weapons-related risks - impacts the technology, cybersecurity, and government procurement sectors.
- Review clearance would be a condition for federal contract eligibility, which could affect AI vendors that supply government agencies - impacting procurement and enterprise IT markets.
- CAISI would be charged with developing mandatory requirements and Congress asked to create a permanent enforcement office within the Department of Commerce; proposed thresholds target large-scale compute spenders and high-revenue AI providers.
Risks and uncertainties
- Potential exclusion from federal contracts for companies that fail review - a risk for AI developers that serve government clients, affecting revenues in the public-sector tech market.
- Uncertainty around how mandatory requirements would be established and enforced - creates regulatory ambiguity for firms meeting the specified compute or revenue thresholds.
- CAISI's current voluntary reviews cover only some models; a transition to mandatory standards enforced by a new Commerce office could introduce compliance challenges for large AI providers.