In a CNBC Squawk Box interview, Trump stated that Anthropic representatives recently visited the White House for "very good talks," adding:
"And I think they're shaping up. They're very smart, and I think they can be of great use. I like smart people… I think we'll get along with them just fine." When asked if a Pentagon deal was possible, he replied, "It's possible. We want the smartest people."
The dispute began earlier in 2026, when the Trump administration directed federal agencies to stop using Anthropic's technology, primarily its Claude models. The Pentagon then designated Anthropic a supply-chain risk, a label typically reserved for foreign threats, which restricted its use in defense contracting. The move stemmed from clashes over AI guardrails.
Anthropic pushed for restrictions on certain military applications, such as fully autonomous weapons and mass domestic surveillance of Americans, even though existing U.S. law and Pentagon policies already prohibit some of these uses. The company has long emphasized Constitutional AI and safety principles, with CEO Dario Amodei highlighting the risk that advanced AI could lower barriers to destructive acts.
The Pentagon insisted on unrestricted access for all lawful purposes and viewed corporate-imposed limits as unacceptable interference in military operations. Anthropic sued to block the blacklist, with mixed court results: a California judge issued a temporary halt, but the D.C. Circuit allowed the designation to stand while expediting review. The blacklist was a significant blow, potentially costing the company billions and affecting partners like Amazon and Microsoft in defense work.
Trump’s remarks signal a potential reversal following the White House meeting described by officials as productive and constructive. A key catalyst appears to be Anthropic’s recent Mythos model, noted for strong capabilities in cybersecurity vulnerability detection and exploitation—areas of high value for national security and defense.
Prediction markets like Polymarket have shown rising odds of a Pentagon deal by June 30, 2026, around 60-76% in recent snapshots. In the meantime, reports indicate the Pentagon shifted some work to alternatives like OpenAI. This fits a pattern of the administration prioritizing rapid AI acceleration and U.S. technological edge, especially versus competitors like China, over stringent private-sector safety constraints on military use.
Trump's remark that Anthropic is "shaping up" suggests pragmatic negotiations: the company may need to relax or clarify some guardrails to regain access, while still emphasizing safety and cybersecurity collaboration. It is a classic tension in AI governance: who sets the rules for dual-use technology, companies with self-imposed ethical frameworks, or the government and military demanding flexibility for defense needs?
Claude is known for its helpfulness, reduced hallucination tendencies via techniques like Constitutional AI, strong performance in coding, reasoning, vision, and agentic workflows, and a focus on being steerable and aligned with human values. As of 2026, recent releases include Claude Opus 4.7, strong in coding, agents, vision, and complex tasks, along with tools like Claude Code, Claude Cowork, Claude Design, and agent features.
Claude powers millions of users and has gained significant traction in enterprise and developer workflows. The company also conducts research in areas like scaling laws, interpretability, reinforcement learning from human feedback (RLHF), and economic impacts of AI.
Trump's "smart people" framing prioritizes capability and utility, which aligns with his broader push for American AI dominance. The situation remains fluid: lawsuits are ongoing, and any final deal would likely involve detailed agreements on usage. If it resolves, it could ease pressure on Anthropic while giving the DoD broader access to frontier models.