OpenAI has begun providing verified European companies with access to its latest AI models, including GPT-5.5-Cyber, as part of an effort to help firms strengthen their systems' resilience against vulnerabilities. The programme, described by the company as "Trusted Access for Cyber," targets organisations in critical sectors such as financial services, telecoms, energy and public services.
The initiative will make advanced models available to a range of firms, including Deutsche Telekom and BBVA, along with dozens of other European companies. Access under the programme includes specific safeguards intended for defensive cybersecurity work, OpenAI said.
OpenAI's managing director for Europe, the Middle East and Africa, Emmanuel Marill, framed the approach as a balance between access, usefulness and safety as AI systems increase in capability. "We need to block dangerous activity, while making sure trusted defenders have tools that are genuinely useful in protecting systems, finding vulnerabilities and responding to threats quickly," he said on Tuesday.
The move follows concerns raised by the arrival of competing frontier models. The release of Mythos by Anthropic last month has been cited as raising the stakes for banks and other firms, because such models can perform high-level coding tasks that enable them to identify cybersecurity weaknesses and, potentially, ways to exploit them.
According to statements attributed to Brussels, OpenAI has offered the European Commission open access to its cybersecurity features, while the commission has said Anthropic has not been as forthcoming on similar terms. Within OpenAI, the "OpenAI for Countries" initiative is being led by former British finance minister George Osborne, who on Monday sent an explanatory letter to the Commission arguing that broader access to defensive tools could bolster shared security, support public safety and align with European priorities.
In parallel with the trusted-access programme, OpenAI said on Monday it was creating a new company with more than $4 billion in initial investment to help organisations build and deploy AI systems, and that it would acquire AI consulting firm Tomoro to accelerate the new unit's scale-up.
Context and implications
OpenAI's programme is designed to equip vetted organisations with advanced model capabilities and tailored safeguards so they can perform defensive tasks, from vulnerability discovery to incident response, without enabling misuse. The arrangement explicitly focuses on trusted entities in sectors where failures could have systemic effects.