World May 11, 2026 06:04 AM

Advocacy Group Urges U.S. to Block Government Contracts for AI Models That Fail Security Screening

Americans for Responsible Innovation asks the Trump administration to vet frontier models for cyberattack and weapons risks and to link contract eligibility to passing reviews

By Ajmal Hussain

The advocacy group Americans for Responsible Innovation has called on the Trump administration to require security evaluations of advanced AI models before public release and to deny lucrative government contracts to companies whose models fail those assessments. The appeal, prompted in part by concerns about Anthropic’s Mythos, asks U.S. officials to develop mandatory vetting procedures for frontier models and to establish a permanent enforcement office to ensure compliance.
Key Points

  • An advocacy group urged the Trump administration to screen advanced AI models for security threats before public release and to deny government contracts to those that fail review - impacts government procurement and AI developers.
  • The U.S. Center for AI Standards and Innovation currently conducts voluntary reviews with major developers; the group recommends that it lead development of the mandatory requirements - impacts regulatory frameworks and compliance operations.
  • Proposed requirements would target companies spending $100 million or more annually on compute for frontier models, or earning $500 million or more in AI-related revenue; California has a comparable threshold - impacts large AI firms and cloud compute spending.

Overview

An advocacy group is urging the Trump administration to subject advanced artificial intelligence models to security screenings prior to public release and to make passing such evaluations a precondition for receiving U.S. government contracts. The call, issued Monday, highlights concerns that some cutting-edge models could enable faster and more complex cyberattacks, creating national security vulnerabilities.

Reason for the appeal

The group singled out Anthropic’s Mythos as an example of a model that could lower the barrier for executing intricate cyberattacks, raising alarm about potential national security consequences. Citing those risks, the organization urged the administration to design and implement methods for assessing upcoming frontier models from larger developers specifically for capabilities related to cyberattack facilitation and weapons development.

Suggested institutional role and enforcement

The U.S. Center for AI Standards and Innovation, which currently conducts voluntary reviews of some models, is named in the recommendation as the appropriate body to lead development of mandatory requirements. The group also proposed that Congress establish a permanent enforcement office within the U.S. Department of Commerce to oversee and ensure compliance with those requirements.

Scope of proposed requirements

The suggested rules would apply to companies that either spend $100 million or more annually on compute used to train frontier models, or that generate at least $500 million a year in revenue from AI products and services. The group noted that California enacted a similar threshold for safety reporting requirements last year.

Current review practices

At present, the U.S. Center for AI Standards and Innovation reviews some models under voluntary arrangements with a number of developers. Those agreements cover OpenAI and Anthropic and have more recently been extended to Google, Microsoft and xAI. The advocacy group's proposal would turn these voluntary reviews into mandatory standards for covered firms.

Policy link to contracting

Crucially, the group asked officials to condition eligibility for lucrative government contracts on successfully passing these security reviews. Under the proposal, failing the mandated screening would make a company ineligible for such contracts until it met the required standards.

Conclusion

The submission to the administration frames the adoption of mandatory vetting and an enforcement mechanism as steps to address risks associated with frontier AI models, especially where capabilities could be repurposed for cyberattacks or weapons development. The proposal also points to an existing state-level precedent for thresholds related to safety reporting.

Risks

  • Frontier models such as Anthropic’s Mythos could enable more rapid, complex cyberattacks, posing national security risks - affects cybersecurity and defense-related procurement.
  • Moving from voluntary review to mandatory requirements creates enforcement and compliance uncertainties, including the need for a permanent oversight office within the Department of Commerce - affects regulatory bodies and firms' legal compliance teams.
  • Companies failing the proposed security screening could be excluded from lucrative government contracts, creating financial and market-access risks for large AI developers reliant on public-sector business.