Economy May 1, 2026 02:19 PM

Fed’s Bowman Urges Regulators to Weigh Supervision for Emerging AI Tools

Officials call for cross-agency coordination and bank engagement as models like Anthropic’s Mythos highlight cyber risks and defensive uses

By Marcus Reed

Federal Reserve Vice Chair for Supervision Michelle Bowman said regulators must evaluate how to supervise rapidly advancing artificial intelligence tools such as Anthropic PBC’s Mythos. While the technology can help banks identify vulnerabilities and bolster cyber defenses, officials warned it could also be misused to locate and exploit weaknesses. Regulators are coordinating with banks and preparing guidance on sound practices for AI adoption.

Key Points

  • AI models can both enhance and undermine bank cybersecurity depending on their use - impacts banking and financial services.
  • Anthropic has restricted release of its latest model while evaluating guardrails, drawing attention from government officials - impacts vendor deployment and regulatory scrutiny.
  • Regulators are coordinating with banks and preparing a report on sound practices for AI adoption - impacts supervisory guidance and compliance efforts.

Overview

Federal Reserve Vice Chair for Supervision Michelle Bowman said Friday that financial regulators need to consider how to supervise new artificial intelligence technology exemplified by Anthropic PBC’s Mythos. Bowman emphasized that the rapid development of these models creates both defensive and offensive implications for the banking sector.


Opportunities and threats

Bowman noted that advanced AI tools can assist firms in identifying their own cyber vulnerabilities and improving cybersecurity practices. At the same time, she warned that if such technology is wielded with malicious intent, it could be used to discover and exploit system weaknesses.

Anthropic has limited the distribution of its most recent AI model while the company evaluates safety guardrails for the system. That decision has prompted officials in the Trump administration to weigh the potential for new cyberattacks that could threaten financial stability.


Regulatory coordination and industry engagement

The Fed official said regulators must coordinate across government and engage directly with banks about emerging technology while continuing to support constructive development of AI tools. Bowman added that regulators are preparing a report outlining sound practices for the adoption and use of AI.

Bowman also characterized recent discussions between policymakers and financial firms as valuable. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met with Wall Street banks last month to go over risks linked to Mythos and to confirm that lenders are taking measures to protect their systems, Bloomberg reported. Bowman said such meetings are extremely beneficial for protecting the banking system.


Summary of current posture

  • Regulators are actively assessing supervisory approaches for new AI models like Anthropic’s Mythos.
  • Authorities see dual-use potential: tools can help shore up defenses but could be repurposed by bad actors to find vulnerabilities.
  • Coordination across government and engagement with banks, including high-level meetings with Wall Street, are underway and regulators plan to issue guidance on sound practices for AI.

Risks and uncertainties

  • Potential misuse of AI models to locate and exploit cyber vulnerabilities could threaten financial stability - risk to banking operations and market trust.
  • Rapid capability development in AI complicates the timing and design of effective supervisory responses - regulatory uncertainty for banks and technology providers.
  • Limited public detail on guardrails and defensive measures creates ambiguity about the sufficiency of protections - uncertainty for banks’ cybersecurity planning.

