Hangzhou-based startup DeepSeek has doubled down on its research momentum, unveiling two new versions of its experimental artificial-intelligence model, DeepSeek-V3.2 and DeepSeek-V3.2-Speciale.
The release signals that China’s influential AI labs are continuing to push the frontier of open-source systems, delivering performance competitive with Silicon Valley’s cutting-edge proprietary models such as OpenAI’s GPT-5 and Google’s Gemini-3 Pro.
The launch comes shortly after the company’s experimental release in September, dubbed DeepSeek-V3.2-Exp. The new V3.2 models focus on deepening two key areas: integrated reasoning with tool use and specialized mathematical and logical problem-solving.
The standard DeepSeek-V3.2 model, now available on DeepSeek’s platforms and APIs, focuses on achieving a breakthrough in agentic capabilities—the ability of AI to act autonomously to achieve goals.
The core innovation is a new approach to combining human-like reasoning with practical execution. DeepSeek-V3.2 is the company’s “first model to integrate thinking directly into tool-use,” supporting external resources like search engines, calculators, and code executors.
The model offers two distinct operational modes:
- Thinking Mode (accessible via the deepseek-reasoner model name): The model outputs a chain-of-thought (CoT) reasoning process before delivering the final answer, enhancing accuracy on complex tasks.
- Non-Thinking Mode (accessible via the deepseek-chat model name): Provides a fast, direct final response.
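The two modes above are selected simply by the model name passed in the API request. As a minimal sketch, assuming DeepSeek’s usual OpenAI-compatible chat-completions payload format (the endpoint details and field names are not taken from this article), mode selection might look like this:

```python
# Illustrative sketch: choosing DeepSeek-V3.2's mode via the model name.
# Payload structure assumes an OpenAI-compatible chat-completions API.

def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completions payload for the chosen mode.

    thinking=True  -> "deepseek-reasoner" (chain-of-thought before the answer)
    thinking=False -> "deepseek-chat"     (fast, direct final response)
    """
    model = "deepseek-reasoner" if thinking else "deepseek-chat"
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Thinking Mode request for a task that benefits from step-by-step reasoning
req = build_request("Prove that the sum of two even numbers is even.", thinking=True)
print(req["model"])  # deepseek-reasoner
```

The payload itself is identical in both cases; only the `model` field changes, which is what makes switching between the two modes trivial for developers.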
The startup claims that the standard V3.2 model matches OpenAI’s flagship GPT-5 across multiple reasoning benchmarks while seamlessly blending logical inference with real-world tool execution.
The second release, DeepSeek-V3.2-Speciale, is a high-compute variant designed to “push the inference capabilities of open-source models to their limits.” This model focuses primarily on achieving maximum reasoning and long-thinking capabilities, particularly in academic and complex logical fields.
Benchmarking Giants: DeepSeek claims the V3.2-Speciale version matches the performance of Google’s latest Gemini-3 Pro and, on some benchmarks, such as the American Invitational Mathematics Examination (AIME) and multilingual software-engineering tasks (SWE Multilingual), even surpasses GPT-5.
Gold-Medal Performance: The Speciale model demonstrated gold-medal-level performance on standardized competitions requiring complex problem-solving, such as the International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI).
However, the pursuit of maximum reasoning comes with a caveat: the Speciale variant consumes significantly more tokens (e.g., 77,000 tokens for Codeforces problems, compared to Gemini’s 22,000) and is currently API-only, prioritizing depth over the cost-efficiency of the standard V3.2 model.
Technical Foundations and Market Impact
DeepSeek’s rapid innovation builds on three key technological breakthroughs described in its technical report, DeepSeek-V3.2: Pushing the Frontier of Open Large Language Models:
- DeepSeek Sparse Attention (DSA): This redesigned attention architecture optimizes computational costs and significantly speeds up processing for long inputs (up to 128,000 tokens) without sacrificing output quality.
- Scalable Reinforcement Learning (RL) Framework: A massive scale-up in the post-training alignment phase to enhance overall capability.
- Agentic Task Synthesis Pipeline: A new method for training AI agents by creating thousands of executable scenarios based on real-world problems (like GitHub issues).
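The sparse-attention idea behind DSA can be illustrated with a toy version: instead of every query attending to every key, each query keeps only its top-k highest-scoring keys. This NumPy sketch is purely illustrative and greatly simplified; the real DSA uses a separate lightweight indexer to pick tokens, whereas here the attention scores themselves do the selection:

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k):
    """Toy sparse attention: each query attends only to its top_k
    highest-scoring keys; all other keys get zero weight."""
    scores = q @ k.T / np.sqrt(q.shape[-1])            # (n_queries, n_keys)
    # Threshold at each query's top_k-th largest score.
    kth = np.partition(scores, -top_k, axis=-1)[:, -top_k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)  # drop the rest
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over kept keys
    return weights @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))    # 4 queries, head dim 8
k = rng.standard_normal((16, 8))   # 16 keys
v = rng.standard_normal((16, 8))
out = topk_sparse_attention(q, k, v, top_k=4)
print(out.shape)  # (4, 8)
```

Because each query touches only top_k keys rather than all of them, the attention cost scales with the selected subset instead of the full sequence, which is the intuition behind DSA’s speedups on long inputs.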
This release, which follows the company’s breakthrough model in January 2025, solidifies DeepSeek’s role as a major disruptor in the global AI race, particularly in the open-source community, by offering frontier-level performance at competitive costs. Just last week, the company released DeepSeekMath-V2, an open model with strong theorem-proving capabilities, underscoring its relentless research pace.