Chinese startup DeepSeek has launched its V4 AI model with significantly reduced costs and ultra-long context capabilities, as competition in the global AI race between China and the US heats up.
The Hangzhou-based firm said the latest model delivers major improvements, including an ultra-long context window capable of handling up to one million words, positioning it as a competitive force in the global AI landscape.
DeepSeek made the announcement via social media platforms, stating that V4 is “world-leading… with drastically reduced compute [and] memory costs,” while highlighting its advancements in reasoning, knowledge processing, and agent capabilities.
The release comes amid growing competition between China and the United States in artificial intelligence development. The White House recently accused Chinese entities of attempting to appropriate American AI technologies, underscoring rising geopolitical tensions in the sector.
DeepSeek first gained global attention in January 2025 with its generative AI chatbot powered by the R1 reasoning model, which challenged assumptions about US dominance by delivering comparable performance at lower cost.
The company said its new model sets a benchmark in both domestic and open-source AI ecosystems, particularly in handling complex tasks requiring extended context.
A preview version of DeepSeek-V4 has already been made available as open source, continuing the company’s strategy of transparency, which contrasts with the proprietary systems developed by Western firms such as OpenAI.
Industry analysts say the launch could mark a turning point in AI development economics. Zhang Yi, founder of tech research firm iiMedia, described the release as a significant breakthrough.
“This addresses the long-standing issues of slower performance and higher costs associated with long context lengths, marking a genuine inflection point for the industry,” Zhang said.
“For end users, this will bring widespread, accessible benefits. For instance, if ultra-long context support becomes a standard feature, long-text processing is expected to move beyond high-end research labs and enter mainstream commercial applications.”
DeepSeek-V4 is available in two variants: V4-Pro and V4-Flash. While the Pro version boasts 1.6 trillion parameters for advanced reasoning, the Flash variant, with 284 billion parameters, is designed to offer a more efficient and cost-effective alternative.
The company added that the model has been optimised for integration with AI agent platforms such as Claude Code, OpenClaw, OpenCode, and CodeBuddy.
In performance benchmarks, DeepSeek claimed that V4-Pro surpasses most open-source competitors and is only marginally behind leading proprietary models like Gemini-Pro-3.1.
The company’s earlier breakthrough, often described as the “DeepSeek shock,” triggered a sharp sell-off in US technology stocks and forced a reassessment of AI business strategies globally, with some analysts likening it to a “Sputnik moment” for the industry.
However, the firm has faced scrutiny over data privacy and censorship concerns, particularly regarding its chatbot’s handling of politically sensitive topics.
Despite these concerns, DeepSeek’s AI systems have seen widespread adoption across China, including in government services, healthcare, and finance, driven in part by its open-source approach.
Meanwhile, developments in the AI sector continue to ripple globally, with companies like Meta and Microsoft reportedly considering workforce reductions while ramping up investment in artificial intelligence.
The launch of DeepSeek-V4 signals a new phase in the global AI race, as companies compete to deliver more powerful systems at lower cost while navigating geopolitical and regulatory challenges.