Coinbase is reportedly in talks with Anthropic to gain access to Claude Mythos Preview, often called Mythos, Anthropic’s highly restricted frontier AI model with exceptional cybersecurity capabilities.
This development, first highlighted by The Information and echoed across multiple outlets, stems from crypto firms’ efforts to strengthen defenses against increasingly sophisticated AI-powered threats. Other players like Binance and Fireblocks have also explored or used earlier Anthropic models such as Claude Opus for vulnerability testing and pentesting.
Mythos is Anthropic’s most capable unreleased model to date, showing a major leap in coding, reasoning, and agentic abilities. It excels at autonomously discovering and exploiting software vulnerabilities, including zero-days in major operating systems, web browsers, and other critical software that had evaded human reviewers and automated tools for years, and in some cases decades.
Anthropic has chosen not to release it publicly due to dual-use risks: the same strengths that make it a powerful defensive tool (finding flaws at scale) could enable offensive cyberattacks if misused. Instead, the company has launched Project Glasswing, a defensive cybersecurity initiative.
This provides limited, vetted access to Mythos Preview for select partners—focusing on securing critical software and open-source infrastructure—along with up to $100M in usage credits and $4M in donations. Coinbase’s Chief Security Officer, Philip Martin, has noted that models like Mythos will accelerate digital threats as well as digital defense, emphasizing the need for proactive, scalable testing of systems.
Why Coinbase and crypto firms are interested
Crypto exchanges and custodians handle massive value in digital assets and face persistent threats: hacks, phishing, smart contract exploits, and now AI-augmented attacks that can chain vulnerabilities rapidly. Mythos could help by performing deep, automated pentesting on infrastructure; identifying subtle weaknesses in code, wallets, or custody systems that human teams might miss; and enabling faster response to emerging AI-driven threats. This fits broader industry moves: Binance and Fireblocks have already used prior Anthropic models to uncover issues missed by traditional testing. Coinbase has a history of security incidents, including data exposures, so bolstering defenses with frontier AI makes strategic sense.
Access remains tightly controlled under Project Glasswing, with mitigations like monitoring to prevent misuse. Anthropic prioritizes defensive applications while acknowledging the model’s potency. This reflects a growing realization in tech and finance: as AI coding and reasoning capabilities advance rapidly, the offense-defense balance in cybersecurity is shifting.
Governments and institutions have expressed concerns about Mythos-level models enabling autonomous attacks on defended systems, though real-world tests on hardened targets are limited. For Coinbase, securing access could enhance operational resilience amid rising AI threats. Negotiations appear ongoing, with potential integration into their security stack if approved.
Mythos can autonomously discover thousands of zero-day vulnerabilities, including in major operating systems and browsers, as well as long-ignored flaws that traditional tools miss. This would allow Coinbase to pentest its infrastructure, wallets, smart contracts, and custody systems at unprecedented scale and speed, potentially reducing the risk of hacks or exploits.
AI like Mythos boosts both offense and defense. While it helps defenders like Coinbase stay ahead, it also signals that AI-powered attacks could become faster, more sophisticated, and accessible to non-state actors—raising the overall threat level for crypto exchanges handling billions in assets.
Under Project Glasswing, access remains tightly controlled with monitoring and mitigations to prevent misuse. Coinbase and peers like Binance and Fireblocks gain a defensive edge without broad public release of the high-risk model. Success could pressure other crypto firms and financial institutions to adopt similar AI tools.
It may influence regulatory conversations around AI in critical infrastructure, while highlighting how frontier models are reshaping security priorities beyond traditional methods. In short, it’s a pragmatic move in an arms race where AI is both the biggest new risk and the best new shield for critical infrastructure like crypto platforms. Details on any final agreement are still emerging.