MiniMax, the fast-rising Chinese AI startup often dubbed one of the “AI Tigers,” has launched MiniMax M2.1, a significant upgrade to its sparse Mixture-of-Experts (MoE) model series.
The release, announced December 22, emphasizes complex real-world tasks, positioning M2.1 as a state-of-the-art open-source contender for coding, agent scaffolding, and enterprise automation, delivering performance that rivals or exceeds closed-source leaders such as Claude Sonnet 4.5 and Gemini 3 Pro in key areas.
Founded in December 2021 by alumni from computer vision giant SenseTime (including CEO Yan Junjie), MiniMax has grown explosively, raising over $850 million across rounds, with a $600 million infusion in March 2024 led by Alibaba, pushing its valuation to $2.5-3 billion. Additional backers include Tencent, HongShan (formerly Sequoia China), and MiHoYo.
The company, which confidentially filed for a Hong Kong IPO targeting up to $700 million at an over $4 billion valuation, boasts 27.6 million monthly active users (as of September 2025) across consumer apps like Hailuo AI (text-to-video), Talkie (AI companions), and its Open Platform API. M2.1 builds on the October-launched M2—a 230B total / 10B active parameter MoE that topped open-source rankings on Artificial Analysis composites—by prioritizing usability in multilingual programming, native mobile development, office scenarios, and agent generalization.
Retaining M2's efficient architecture for low-latency inference (roughly 100+ tokens per second on optimized setups), M2.1 offers API pricing at roughly 8-10% of Claude Sonnet's while claiming twice the speed.
Benchmark Highlights (independent and MiniMax-reported):
- Multi-SWE-Bench: 49.4% — industry-leading for multilingual tasks.
- SWE-Bench Multilingual: 72.5% — outperforming Claude Sonnet 4.5.
- SWE-Bench Verified: Up to 74.0% in agent frameworks (edging DeepSeek V3.2’s 73.1%).
- VIBE (Visual & Interactive Benchmark for Execution): Aggregate 88.6% (new open-sourced benchmark using Agent-as-Verifier); standout 91.5% on VIBE-Web and 89.7% on VIBE-Android, surpassing Claude Opus/Sonnet in full-stack app generation with aesthetic and functional excellence.
Other gains include refined interleaved thinking for composite instructions, more concise Chain-of-Thought outputs that reduce token usage, and stable integration with tools such as Claude Code, Droid, Cline, Kilo Code, Roo Code, and BlackBox, plus context mechanisms (Skill.md, agent.md, Slash Commands). Beyond coding, M2.1 elevates general dialogue, technical writing, and non-technical responses with more structured, detailed outputs.
The model is immediately accessible via MiniMax’s API (text generation endpoint), integrated platforms (Kilo Code, Vercel AI Gateway, Ollama), and open weights on Hugging Face (MiniMaxAI/MiniMax-M2.1).
Recommended inference setup: vLLM or SGLang with temperature=1.0 and top_p=0.95.
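For context, here is a minimal sketch of what that setup might look like: serving the open weights locally with vLLM's OpenAI-compatible server and querying them with the recommended sampling settings. The port, API key, GPU parallelism flag, and prompt are illustrative assumptions, not MiniMax-documented values, and a model of this size will need multi-GPU hardware.

```python
# Illustrative: launch vLLM's OpenAI-compatible server for the open weights
# (adjust --tensor-parallel-size and other flags to your hardware):
#   vllm serve MiniMaxAI/MiniMax-M2.1 --tensor-parallel-size 8 --port 8000

from openai import OpenAI

# Point a standard OpenAI client at the local vLLM endpoint (URL and key are placeholders).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.1",
    messages=[
        {"role": "user", "content": "Write a Python function that merges two sorted lists."},
    ],
    temperature=1.0,  # sampling settings recommended for M2.1
    top_p=0.95,
)

print(response.choices[0].message.content)
```

SGLang likewise exposes an OpenAI-compatible endpoint, so the same client code should work with only the base URL changed.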
Community response has been electric: on X and Reddit's r/LocalLLaMA, developers hailed M2.1 as a "beast at UI/UX design," producing "clean" app prototypes in a few interactions, with faster tool calling and superior "vibe coding."
Early tests show strong long-horizon reasoning and reduced bugs versus M2. Comparisons position it ahead of DeepSeek V3.2 and GLM 4.7 in aesthetics/mobile, while closing gaps with proprietary frontiers. MiniMax frames M2.1 as the “brain” for the agentic era, powering its MiniMax Agent platform for end-to-end tasks (administration, data science, finance, HR, software dev).
M2.1 accelerates democratization—offering elite coding/agent capabilities at an accessible scale as open-source Chinese labs (MiniMax, DeepSeek, Zhipu) dominate 2025 releases, challenging global incumbents and fueling AI-native workflows worldwide.



