Stock Markets May 5, 2026 06:06 PM

U.S. Expands Government Stress Tests to Cover Google DeepMind, xAI and Microsoft AI Models

Federal scientists will gain access to unreleased models to probe cyber, biosecurity and data-integrity vulnerabilities alongside OpenAI and Anthropic

By Jordan Park MSFT

The U.S. government has broadened a program that gives federal AI researchers access to unreleased models for risk assessment, adding Google DeepMind, xAI and Microsoft to participants. OpenAI and Anthropic were already collaborating with the U.S. Center for AI Standards and Innovation to test models for vulnerabilities. The scientists are focused on demonstrable risks including cyberattacks, development of chemical or biological weapons and corruption or leakage of training data.

U.S. Expands Government Stress Tests to Cover Google DeepMind, xAI and Microsoft AI Models
MSFT

Key Points

  • U.S. government scientists under the U.S. Center for AI Standards and Innovation will now test unreleased models from Google DeepMind, xAI and Microsoft in addition to those from OpenAI and Anthropic - impacts the cybersecurity and cloud services sectors.
  • The scientific team is focused on demonstrable risks, including the potential for AI to enable cyberattacks, facilitate development of chemical or biological weapons, or corrupt training data - implications for public safety, health data privacy and national security.
  • Companies are providing varying levels of access: OpenAI is testing GPT-5.5-Cyber; Microsoft will build shared datasets and workflows; Anthropic supplied public and unreleased models and documentation; DeepMind will provide proprietary models and data.

WASHINGTON, May 5 - The federal administration announced an expansion of a program that allows U.S. government scientists to evaluate unreleased artificial intelligence models for security and safety vulnerabilities. The enlargement brings Google’s DeepMind, xAI and Microsoft into the program alongside firms that had already partnered with the U.S. Center for AI Standards and Innovation, known as CAISI.

CAISI is made up of U.S. government scientists tasked with testing advanced AI systems for concrete threats. On its website the group says it is concentrating on "demonstrable risks," a term it uses to describe situations where advanced models could be misused in ways that pose clear dangers to national systems and public safety.

Scope of the risks under review

The scientists' stated priorities include preventing the use of AI to carry out cyberattacks on American infrastructure, limiting opportunities for adversaries to exploit AI to develop chemical or biological weapons, and stopping the corruption of data used to train U.S. models. CAISI has also signaled concern about model behaviors that could result in the exposure of private health information or the dissemination of incorrect answers.

What companies are providing

OpenAI confirmed it is collaborating with CAISI to test a model variant called GPT-5.5-Cyber. The company said the variant is a version of its most recent model tailored for defensive cybersecurity tasks, and the disclosure was made in a LinkedIn post by Chris Lehane, OpenAI's head of global affairs.

Microsoft said it will work with government scientists to develop shared datasets and workflows intended to assess advanced AI models, but the company did not identify specific models that will be submitted for testing.

Anthropic has provided CAISI access to both publicly available and unreleased models for probing vulnerabilities through "red-teaming" exercises that simulate malicious behavior. In September the company said it had also supplied detailed documentation on known vulnerabilities and the safety mechanisms it uses.

A spokesperson for Google DeepMind said the research arm of Alphabet will offer access to its proprietary models and related data for assessment. xAI did not immediately respond to a request for comment.

Findings and prior work

Work already conducted with CAISI has unearthed concrete weaknesses that the companies say they have addressed. Anthropic reported that certain tactics - including falsely claiming a human had reviewed model output or substituting characters - could be used to bypass safety controls, and the company said those issues were patched.

OpenAI described earlier collaboration with CAISI that probed its ChatGPT Agent and identified an exploit which, if successful, could have allowed a sophisticated actor to take remote control of computer systems accessible to the agent during a session and to impersonate the user on other websites where the user was logged in. OpenAI said it worked with the government scientists to examine and remediate such vulnerabilities.

In 2023 several major AI developers, including the companies noted above as well as Meta, Amazon and Inflection AI, agreed to allow independent experts to review their models for biosecurity and cybersecurity risks.

Government scientists who now operate under CAISI were organized under a different name during the administration of former President Joe Biden and during that earlier work released voluntary guidelines intended to reduce risks such as the leakage of private health information and the generation of incorrect outputs. CAISI is now working on guidelines aimed at critical infrastructure providers - naming communications and emergency services as examples - to help those sectors test their own AI systems.


Implications and next steps

The expanded access is intended to give federal researchers a broader view of potential vulnerabilities across multiple large AI developers, while companies retain responsibility for addressing and patching identified issues. CAISI's work focuses on risks that are demonstrable and that can be tested through hands-on access to models and related documentation. The group is also pursuing sector-specific guidance to help critical infrastructure operators assess the AI tools they deploy.

Risks

  • Risk that advanced models could be used to launch cyberattacks against U.S. infrastructure - affects utilities, communications and emergency services sectors.
  • Risk that AI could be misused to assist in the development of chemical or biological weapons - relevant to national security and public health sectors.
  • Risk of models leaking private health information or producing incorrect answers, which could harm healthcare providers and patients and undermine trust in AI systems.

More from Stock Markets

Apple Agrees to $250 Million Settlement Over Delayed Siri AI Upgrades May 5, 2026 Mexico's S&P/BMV IPC Climbs Nearly 2% as Industrials and Consumer Stocks Lead Gains May 5, 2026 Colombian Stocks Close Slightly Higher as Industrials and Services Lift COLCAP May 5, 2026 Moscow equities finish higher as oil, mining and power stocks lead gains May 5, 2026 LATAM Lowers 2026 EBITDA Guidance as Jet Fuel Spike Squeezes Costs May 5, 2026