Australia's banking sector has been put on notice by the country’s prudential regulator, which told lenders they are not keeping pace with a rapid wave of developments in artificial intelligence. In a formal letter to banks, the Australian Prudential Regulation Authority (APRA) said its review and industry consultation pointed to a widening gap between how AI is evolving and how some firms protect their information systems.
APRA flagged a specific category of risk arising from so-called frontier AI models - advanced systems with broad problem-solving and coding capabilities. The regulator singled out Anthropic's Claude Mythos as an example of a frontier model that could materially alter the cyber threat landscape by making it easier and quicker for malicious actors to find and exploit software vulnerabilities.
"It also warns frontier AI models such as Anthropic’s Claude Mythos, which could enhance the discovery of vulnerabilities by bad actors, are expected to further increase the probability, speed and scale of cyber attacks," APRA said in a statement referencing its review. The regulator added that most of the industry’s information security practices were struggling to match the rate of change in AI, and that the speed of AI development could pose a growing threat to Australia’s financial services.
Anthropic did not immediately respond to a request for comment. The Mythos model has been described as possessing high-level coding capabilities which, according to the assessment referenced by APRA, give it a potentially unprecedented ability to identify cybersecurity weaknesses - a concern some experts have also raised.
Anthropic has deployed Claude Mythos Preview under Project Glasswing, a controlled-access programme that includes participation by major technology firms such as Amazon, Microsoft, Nvidia and Apple. APRA said its consultations with regulated entities showed a clear acknowledgement that cyber practices need a marked improvement - a "step change" - together with a continuous uplift in capabilities to protect IT assets as threats evolve.
Part of APRA’s concern relates to how banks evaluate and integrate AI: the regulator observed that many institutions rely heavily on vendor-provided model presentations and summaries without fully interrogating the operational and security risks those models could introduce. At board level, APRA said many directors are still developing the technical literacy needed to provide effective challenge and oversight of AI-related risks.
While the regulator acknowledged that banks already operate under stringent security protocols, it warned that some of those procedures were not engineered to cope with the rapid advances in AI capability. In response, a spokesperson for Home Affairs Minister Tony Burke said Australia is engaging with software providers, including Anthropic, to address potential cybersecurity vulnerabilities.
The Australian Banking Association reiterated that banks continuously assess their cyber risk settings. Its Chief Executive, Simon Birmingham, said banks are well positioned to respond to emerging technologies and invest heavily in security measures. "Australian banks maintain strong cyber security defences, investing billions each year to ensure their systems remain secure and can shield against potential threats," he said.
Separately, ratings agency S&P Global warned that AI will affect the credit profiles of Asia Pacific financial institutions over the next one to five years. S&P Global noted that while most banks in the region have substantial technology budgets that should help mitigate some negative impacts - and that AI could even reduce costs - the broader financial services sector may experience uneven effects.
APRA’s communication to banks underscores evolving operational challenges as advanced AI models become more powerful. The regulator’s findings highlight areas for institutions to reinforce: vendor risk assessment, board technical competence, and security architectures designed specifically with rapid AI-driven threat evolution in mind.
Summary of the situation
- APRA cautions that frontier AI systems can increase the speed and scale of cyber attacks by improving vulnerability discovery.
- Banks are said to be relying too heavily on vendor-supplied model summaries, and many boards currently lack full technical literacy on AI risks.
- Government engagement with software providers and industry commentary from the Australian Banking Association and S&P Global underline the wider regulatory and credit implications.