Overview
An advocacy group is urging the Trump administration to subject advanced artificial intelligence models to security screenings prior to public release and to make passing such evaluations a precondition for receiving U.S. government contracts. The call, issued Monday, highlights concerns that some cutting-edge models could enable faster and more complex cyberattacks, creating national security vulnerabilities.
Reason for the appeal
The group singled out Anthropic’s Mythos as an example of a model that could lower the barrier to executing sophisticated cyberattacks. Citing those risks, it urged the administration to develop and implement methods for assessing upcoming frontier models from major developers specifically for capabilities that could facilitate cyberattacks or weapons development.
Suggested institutional role and enforcement
The U.S. Center for AI Standards and Innovation, which currently conducts voluntary reviews of some models, is identified in the recommendation as the appropriate body to lead development of mandatory requirements. The group also proposed that Congress establish a permanent enforcement office within the U.S. Department of Commerce to oversee compliance with those requirements.
Scope of proposed requirements
The proposed rules would apply to companies that either spend $100 million or more annually on compute used to train frontier models or generate at least $500 million a year in revenue from AI products and services. The group noted that California enacted a similar threshold for safety reporting requirements last year.
Current review practices
At present, the U.S. Center for AI Standards and Innovation reviews some models under voluntary arrangements with a number of developers. Those agreements cover OpenAI and Anthropic, and have more recently been extended to Google, Microsoft and xAI. The advocacy group's proposal would replace this voluntary arrangement with mandatory standards for covered firms.
Policy link to contracting
Crucially, the group asked officials to condition eligibility for lucrative government contracts on successfully passing these security reviews. Under the proposal, failing the mandated screening would make a company ineligible for such contracts until it met the required standards.
Conclusion
The submission to the administration frames the adoption of mandatory vetting and an enforcement mechanism as steps to address risks associated with frontier AI models, especially where capabilities could be repurposed for cyberattacks or weapons development. The proposal also points to an existing state-level precedent for thresholds related to safety reporting.