DeepSeek Quietly Releases Upgraded Model—DeepSeek-R1-0528

Chinese artificial intelligence startup DeepSeek has quietly released an upgraded version of its open-source reasoning model, DeepSeek-R1-0528, further shaking the global AI ecosystem that has already been disrupted by the company’s surprising ascent.

While the model boasts stronger reasoning capabilities and reduced hallucinations, researchers and developers say it also exhibits signs of deeper censorship, a concern many believe could stunt its global adoption.

The updated version was released without fanfare on the AI model repository Hugging Face, mirroring the company’s approach with its first model. Despite the low-key debut, the new model is making a loud impact: it now trails only OpenAI’s o3 and o4-mini models on LiveCodeBench, a competitive benchmark that ranks large language models (LLMs) on reasoning and code-generation tasks, while placing ahead of Google’s Gemini 2.5 Pro, another heavyweight in the generative AI race.

In an interview with CNBC, Adina Yakefu, an AI researcher at Hugging Face, said, “DeepSeek’s latest upgrade is sharper on reasoning, stronger on math and code, and closing in on top-tier models like Gemini and o3.” She added that the model shows “major improvements in inference and hallucination reduction,” emphasizing that “this version shows DeepSeek is not just catching up, it’s competing.”

However, the model is reportedly more censored than its predecessor. Developers using the model observed that prompts related to politically sensitive topics, especially in the Chinese context, tend to trigger restrictions or deflections. This issue, although expected from a model developed in China, is seen as a significant drawback in the global AI community, where open-ended reasoning and unfiltered information access are critical features.

A detailed technical review shared on Hugging Face forums noted that the model “refuses to answer prompts on geopolitics, governance, or controversial historical events,” even when phrased academically. Some developers observe that while the model is a solid coder and strong at math, restrictions surface quickly once a prompt touches anything sensitive.

This has raised questions about how viable DeepSeek’s models might be in international enterprise or research settings, especially compared with less restricted open-source rivals like Meta’s Llama 3 or Mistral’s models.

DeepSeek’s quiet rise began earlier this year when its initial R1 reasoning model shocked the AI world by outperforming models from Meta and OpenAI on certain logic-heavy benchmarks. The model’s low development cost and quick turnaround rattled investor confidence in American tech firms, including Nvidia, temporarily wiping billions from tech market valuations. While markets have since recovered, the psychological impact lingers—particularly as U.S. tech firms reevaluate their infrastructure spending strategies.

The controversy also re-ignited debates about whether heavy investments by firms like OpenAI, Google, and Anthropic are truly yielding proportional performance benefits—or just ballooning costs. The U.S. government’s export curbs on advanced chips to China have attempted to slow the country’s AI progress, but with models like DeepSeek-R1-0528, the effectiveness of those policies is increasingly under scrutiny.

Jensen Huang, CEO of Nvidia, publicly addressed this during an investor event last week, noting: “The U.S. has based its policy on the assumption that China cannot make AI chips. That assumption was always questionable, and now it’s clearly wrong. The question is not whether China will have AI. It already does.”

Currently, DeepSeek remains a potent symbol of China’s accelerating AI ambitions, and the release of R1-0528 cements the company’s place among global LLM leaders. However, unless it addresses the issue of restrictive filtering, it may find itself boxed into specific markets or limited in its appeal to global researchers and developers looking for transparency and flexibility in AI systems.
