The European Commission has begun assessing the potential risks posed by Anthropic’s controversial AI model Mythos, signaling that European regulators are increasingly alarmed about how advanced artificial intelligence systems could destabilize financial networks and accelerate cyberattacks against critical infrastructure.
European Economic Commissioner Valdis Dombrovskis said Monday that European Union officials had already held discussions with Anthropic and received technical briefings about the model’s cyber capabilities as authorities weigh whether existing EU regulations are sufficient to address the emerging risks.
“The commission representatives met with Anthropic and was briefed on technical details around cyber capabilities and the risk of this Mythos preview, so we are currently assessing possible implications in light of the EU policies and legislation,” Dombrovskis told reporters.
The comments mark one of the clearest indications yet that European policymakers are moving beyond theoretical debates about artificial intelligence and beginning to confront what regulators increasingly view as immediate operational and national security risks tied to advanced AI systems.
Anthropic’s Mythos model, which cybersecurity experts say is capable of identifying vulnerabilities in software code at unprecedented speed and scale, is at the center of those concerns. Security researchers fear such systems could dramatically compress the timeline between the discovery of software flaws and their exploitation by malicious actors.
Hackers have traditionally required weeks or months to weaponize newly discovered vulnerabilities. Advanced AI systems, however, are increasingly believed capable of automating large parts of that process, potentially enabling sophisticated attacks within hours of a flaw becoming public.
That shift is raising alarm across governments, banks, and critical infrastructure operators globally. European regulators appear especially focused on the financial sector, where the growing integration of AI into banking operations is expanding the potential attack surface for cybercriminals and hostile state actors.
Although Mythos has not yet been made available to European banks, officials appear concerned that the technology’s capabilities could still indirectly influence the threat environment by empowering hackers, ransomware groups, or state-backed cyber operations targeting financial institutions.
The scrutiny also comes as the European Union aggressively expands its oversight of artificial intelligence under its landmark AI Act, which introduces some of the world’s strictest rules governing high-risk AI applications.
Under the framework, systems deemed capable of threatening public safety, financial stability, or critical infrastructure could face heightened transparency, compliance, and risk-management obligations.
European authorities are increasingly trying to determine whether frontier AI models with offensive cyber capabilities should fall into those categories.
The discussions with Anthropic are also seen as part of a global shift in regulatory thinking. For much of the past two years, policymakers focused primarily on issues such as misinformation, copyright disputes, and labor disruption linked to generative AI. More recently, however, attention has rapidly pivoted toward cybersecurity and systemic infrastructure risk.
Banks in particular have become a central point of concern. Financial institutions operate massive interconnected digital ecosystems containing sensitive consumer data, payment systems, and real-time transaction infrastructure. A highly capable AI-assisted cyberattack targeting those systems could potentially trigger cascading disruptions far beyond a single institution.
Regulators in several jurisdictions have already begun quietly reassessing cyber response frameworks as AI models become more advanced. In the United States, cybersecurity officials are reportedly considering dramatically shortening deadlines for patching critical software vulnerabilities amid fears that AI-powered hacking systems could sharply accelerate exploitation timelines.
Meanwhile, financial institutions globally are racing to test their own defensive AI systems while simultaneously restricting employee access to certain external models over data security concerns.
The growing unease surrounding Mythos illustrates a broader paradox emerging in the AI race. The same technologies being marketed as productivity breakthroughs for coding, automation, and enterprise software development are also creating entirely new categories of cyber risk.
That tension is becoming especially pronounced in Europe, where regulators have historically taken a more cautious approach toward large technology platforms and data governance than their American counterparts. The European Commission’s engagement with Anthropic suggests Brussels does not intend to remain passive as frontier AI systems evolve into increasingly powerful cyber tools.
The scrutiny appears to foreshadow tougher oversight requirements around model deployment, capability disclosures, and access restrictions, particularly for systems capable of offensive cybersecurity applications. It is also seen as another sign that the AI boom is rapidly becoming inseparable from the next phase of the global cybersecurity arms race.