Overview
Federal Reserve Vice Chair for Supervision Michelle Bowman said Friday that financial regulators need to consider how to supervise new artificial intelligence technology exemplified by Anthropic PBC’s Mythos. Bowman emphasized that the rapid development of these models carries both defensive and offensive implications for the banking sector.
Opportunities and threats
Bowman noted that advanced AI tools can assist firms in identifying their own cyber vulnerabilities and improving cybersecurity practices. At the same time, she warned that if such technology is wielded with malicious intent, it could be used to discover and exploit system weaknesses.
Anthropic has limited the distribution of its most recent AI model while the company evaluates safety guardrails for the system. That decision has prompted officials in the Trump administration to weigh the potential for new cyberattacks that could threaten financial stability.
Regulatory coordination and industry engagement
The Fed official said regulators must coordinate across government and engage directly with banks about emerging technology while continuing to support constructive development of AI tools. Bowman added that regulators are preparing a report outlining sound practices for the adoption and use of AI.
Bowman also characterized recent discussions between policymakers and financial firms as valuable. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met with Wall Street banks last month to go over risks linked to Mythos and to confirm that lenders are taking measures to protect their systems, Bloomberg reported. Bowman said such meetings are extremely beneficial for protecting the banking system.
Summary of current posture
- Regulators are actively assessing supervisory approaches for new AI models like Anthropic’s Mythos.
- Authorities see dual-use potential: tools can help shore up defenses but could be repurposed by bad actors to find vulnerabilities.
- Coordination across government and engagement with banks, including high-level meetings with Wall Street, are underway and regulators plan to issue guidance on sound practices for AI.
Key points
- AI models can both strengthen and threaten bank cybersecurity, depending on how they are used, with implications for the banking and financial services sector.
- Anthropic has curtailed wider release of its latest model while it assesses guardrails, a development that has drawn attention from government officials concerned about cyber risks, with implications for regulatory policy and vendor technology deployment.
- Senior officials have met with banks to discuss safeguards, and regulators are preparing a report on AI sound practices, with implications for supervisory oversight and industry compliance efforts.
Risks and uncertainties
- Potential misuse of AI models to locate and exploit cyber vulnerabilities could threaten financial stability, posing risks to banking operations and market trust.
- Rapid capability development in AI complicates the timing and design of effective supervisory responses, creating regulatory uncertainty for banks and technology providers.
- Limited public detail on guardrails and defensive measures leaves it unclear whether current protections are sufficient, complicating banks’ cybersecurity planning.