Stock Markets April 15, 2026 12:26 AM

U.S. Agencies Quietly Vet Anthropic’s Claude Mythos Despite Presidential Restriction

Officials are assessing the model’s ability to find software vulnerabilities while navigating a formal halt on its use

By Ajmal Hussain

Staff across multiple federal agencies have been testing Anthropic’s advanced AI model, Claude Mythos, to evaluate its potential for identifying critical software vulnerabilities. The efforts are occurring despite a presidential directive earlier this year instructing agencies to stop using Anthropic’s technology. Officials see promise in the model for strengthening cyber defenses, and lawmakers have requested briefings amid concerns that adversaries could develop similar tools. The White House says it continues to engage with AI firms to weigh security risks against national security considerations.

Key Points

  • Multiple federal agencies, including the Commerce Department’s Center for AI Standards and Innovation, have been evaluating Anthropic’s Claude Mythos model for its vulnerability-detection capabilities (sectors impacted: cybersecurity, government IT).
  • The testing is occurring despite a presidential directive earlier in the year instructing agencies to stop using Anthropic’s technology (sectors impacted: federal government, regulatory policy).
  • Lawmakers and congressional staff have requested briefings on the model, driven by concern that adversaries could develop similar tools (sectors impacted: national security, defense oversight).

Federal agency personnel have been conducting discreet evaluations of Anthropic’s new AI model, Claude Mythos, to judge its usefulness in identifying critical software vulnerabilities, according to people familiar with the matter. The assessments involve staff from several offices, including the Commerce Department’s Center for AI Standards and Innovation.

Those involved are examining the model for its reported ability to detect vulnerabilities that might escape human reviewers. The work is being carried out quietly, even though a presidential directive earlier this year ordered agencies to halt use of Anthropic’s technology following disputes over the firm’s position on military and surveillance applications.

Officials say the model’s technical strengths have prompted agencies to consider it for defensive cybersecurity purposes. Some involved describe its ability to find hard-to-detect flaws as potentially important for strengthening cyber defenses, and that assessment has driven efforts to evaluate and, in some cases, deploy the technology despite the formal restriction.

Interest has extended to the legislative branch, where lawmakers and congressional staff have requested briefings on the system. Those inquiries reflect a shared concern that adversaries could develop comparable tools, a prospect that has added urgency to officials’ efforts to understand the technology and its implications.

The White House has said it remains in dialogue with AI companies to address security risks, while attempting to balance those engagements with broader national security considerations. Beyond confirming ongoing interaction with industry, the White House statement did not provide additional operational detail on the agency-level evaluations.

This sequence of actions - quiet technical assessments, congressional briefings, and continued White House engagement - highlights a tension between a formal policy restriction directed at a particular vendor and the perceived operational need to understand and potentially use advanced AI capabilities for defensive purposes.


Contextual note: The information about agency testing comes from individuals familiar with the activity; agencies involved include the Commerce Department's Center for AI Standards and Innovation. The presidential directive to halt use of the technology was issued earlier this year and followed disagreements over Anthropic's stance on military and surveillance applications.

Risks

  • The ongoing quiet evaluations contrast with a formal presidential halt on the use of Anthropic’s technology, creating legal and policy uncertainty for agency deployments (impacts: federal procurement, regulatory compliance).
  • Officials cite concern that adversaries may develop comparable AI tools, which raises risks for national cybersecurity posture and could drive urgent or uncoordinated adoption decisions (impacts: defense and cyber-related markets).
  • Continued engagement between the White House and AI firms aims to balance security risks with national security considerations, indicating uncertainty about policy direction and operational use of advanced models (impacts: government-technology relations).
