The U.S. government is exploring a review mechanism for artificial intelligence models before their public release, according to people familiar with internal discussions. Administration officials have discussed using an executive order to establish an AI working group composed of technology executives and government officials, charged with examining potential oversight procedures for new models.
Those involved in the conversations say the working group would serve as a forum to evaluate how oversight could be structured, including whether a formal, centralized government review process should be applied to AI systems before they reach the public. The scope of the group and the exact procedures under consideration have not been detailed by the officials who spoke about the discussions.
Last week, White House representatives met with executives from Anthropic, Google and OpenAI to outline and discuss aspects of the administration's plans, according to people briefed on the meetings. The meetings were described as part of the broader conversation about how the government and private sector can coordinate on AI safety and oversight.
Technology industry participants and administration officials have pointed to a review framework being developed in Britain as one possible template for the U.S. approach. Under the British framework, several government bodies would be assigned responsibility for verifying that AI models meet particular safety standards before or during their deployment.
Officials and industry participants have not released detailed requirements or a finalized structure for any U.S. review regime. The discussions are ongoing, and the administration has not announced a formal policy or published regulatory text specifying how oversight would be implemented.
Summary
U.S. officials are considering an executive order to form an AI working group that would bring tech executives and government officials together to analyze oversight options, including a potential pre-release review process. White House officials met with leaders from Anthropic, Google and OpenAI as part of these talks. A possible model for U.S. action would mirror a British framework assigning government bodies to check AI safety standards.