DeepSeek has unveiled its upgraded DeepSeek-R1-0528 model, demonstrating capabilities comparable to those of industry leaders OpenAI's ChatGPT and Google's Gemini 2.5 Pro. The Hangzhou-based company reports 38 million monthly active users as of April, still trailing Gemini's 350 million and ChatGPT's 600 million.
The new model shows a 40% reduction in hallucination rates while improving its logic and programming capabilities. Remarkably, DeepSeek achieved this with a $6 million training budget, a fraction of what major competitors typically spend. Performance benchmarks across six key metrics place it within 5% of top-tier models on reasoning and inference tasks.
This release comes as the US prepares new restrictions on exports of advanced chip design software to China. Industry analysts note that such tools underpin the advanced semiconductors that form the hardware foundation for training cutting-edge AI models like DeepSeek's. Meanwhile, Chinese tech giants Tencent and Alibaba have also launched competing models (T1 and Qwen3) in early 2025.
"Our approach democratizes access to high-performance AI," a DeepSeek spokesperson stated, referencing the company's open-source releases. Since its January debut, DeepSeek's technology has been downloaded 75 million times, establishing China as a serious contender in the global AI race.
Interestingly, while Western models maintain an advantage in user numbers, DeepSeek's cost-efficient training methods could reshape industry economics. The company plans to focus next on multimodal capabilities, potentially closing another gap with market leaders by year's end.