Stock Markets April 30, 2026 01:38 AM

APRA Tells Banks Frontier AI Could Accelerate and Amplify Cyber Attacks

Regulator warns information security practices are lagging as models like Anthropic's Mythos present new vulnerability discovery risks

By Sofia Navarro

Australia’s prudential regulator has warned that the pace of frontier artificial intelligence development is outstripping many banks' information security practices, and that models such as Anthropic’s Claude Mythos could enable larger, faster cyber attacks by enhancing the discovery of vulnerabilities. The Australian Prudential Regulation Authority (APRA) told banks it found overreliance on vendor presentations, gaps in board-level technical literacy, and security architectures not built for rapid AI-driven threat evolution. The issue has prompted government engagement with software providers and drawn commentary from industry and ratings agencies on potential credit implications.

Key Points

  • APRA found many banks' information security practices are not keeping up with the pace of AI development, increasing potential cyber risk to financial services.
  • Frontier AI models such as Anthropic’s Claude Mythos - with advanced coding capabilities - are highlighted as having the potential to enhance the discovery of vulnerabilities and thereby raise the probability, speed and scale of cyber attacks.
  • Industry and government responses include engagement with software providers; the Australian Banking Association says banks are investing heavily in security, while S&P Global expects AI to have uneven credit impacts across the region.

Australia's banking sector has been put on notice by the country’s prudential regulator, which told lenders they are not keeping pace with a rapid wave of developments in artificial intelligence. In a formal letter to banks, the Australian Prudential Regulation Authority (APRA) said its review and industry consultation pointed to a widening gap between how AI is evolving and how some firms protect their information systems.

APRA flagged a specific category of risk arising from so-called frontier AI models - advanced systems with broad problem-solving and coding capabilities. The regulator singled out Anthropic's Claude Mythos as an example of a frontier model that could materially alter the cyber threat landscape by making it easier and quicker for malicious actors to find and exploit software vulnerabilities.

In the statement accompanying its review, APRA warned that frontier AI models such as Anthropic’s Claude Mythos, which could enhance the discovery of vulnerabilities by bad actors, "are expected to further increase the probability, speed and scale of cyber attacks." The regulator added that most of the industry’s information security practices were struggling to match the rate of change in AI, and that the speed of AI development could pose a growing threat to Australia’s financial services.

Anthropic did not immediately respond to a request for comment. The Mythos model has been described as possessing high-level coding capabilities, and according to the assessment referenced by APRA, those capabilities give it a potentially unprecedented ability to identify cybersecurity weaknesses - a concern some experts have also raised.

Anthropic has deployed Claude Mythos Preview under Project Glasswing, a controlled-access programme that includes participation by major technology firms such as Amazon, Microsoft, Nvidia and Apple. APRA said its consultations with regulated entities showed a clear acknowledgement that cyber practices need a marked improvement - a "step change" - together with a continuous uplift in capabilities to protect IT assets as threats evolve.

Part of APRA’s concern relates to how banks evaluate and integrate AI: the regulator observed that many institutions rely heavily on model presentations and summaries provided by vendors, without fully interrogating the operational or security risks those models could bring. At the board level, APRA said many directors are still developing the technical literacy necessary to provide effective challenge and oversight of AI-related risks.

While the regulator acknowledged that banks already operate under stringent security protocols, it warned that some of those procedures were not engineered to cope with the rapid advances in AI capability. In response, a spokesperson for Home Affairs Minister Tony Burke said Australia is engaging with software providers, including Anthropic, to address potential cybersecurity vulnerabilities.

The Australian Banking Association reiterated that banks continuously assess their cyber risk settings. Its Chief Executive, Simon Birmingham, said banks are well positioned to respond to emerging technologies and invest heavily in security measures. "Australian banks maintain strong cyber security defences, investing billions each year to ensure their systems remain secure and can shield against potential threats," he said.

Separately, ratings agency S&P Global warned that AI will affect the credit profiles of Asia Pacific financial institutions over the next one to five years. S&P Global noted that while most banks in the region have substantial technology budgets that should help mitigate some negative impacts - and that AI could even reduce costs - the broader financial services sector may experience uneven effects.

APRA’s communication to banks underscores evolving operational challenges as advanced AI models become more powerful. The regulator’s findings highlight areas for institutions to reinforce: vendor risk assessment, board technical competence, and security architectures designed specifically with rapid AI-driven threat evolution in mind.


Summary of the situation

  • APRA cautions that frontier AI systems can speed and scale cyber attacks by improving vulnerability discovery.
  • Banks are said to be relying too much on vendor-supplied model summaries and many boards currently lack full technical literacy on AI risks.
  • Government engagement with software providers and industry commentary from the Australian Banking Association and S&P Global underline the wider regulatory and credit implications.

Risks

  • Increased cybersecurity risk - Frontier AI could enable faster, larger-scale attacks if banks’ security controls and vendor risk assessments lag behind model capabilities (affects banking and financial services).
  • Governance and oversight gaps - Many boards currently lack the technical literacy to effectively challenge AI-related risks, potentially weakening institutional oversight (affects corporate governance across financial institutions).
  • Uneven credit impact - AI’s effects on costs and operational risk could lead to divergent credit outcomes across Asia Pacific financial institutions over the next one to five years (affects credit markets and financial stability considerations).
