China’s DeepSeek-V3 AI Model Takes On GPT-4o, Claude 3.5 Sonnet
On December 26, 2024, Chinese start-up DeepSeek launched its V3 model, an AI model rivaling OpenAI’s ChatGPT and Google’s Gemini that was developed with roughly $6 million in computing resources.
DeepSeek’s new model marks a shift in AI innovation, showing how smaller players can challenge Silicon Valley’s resource-heavy tech leaders through ingenuity. DeepSeek-V3 demonstrates effective innovation under constrained circumstances, setting the stage for a deeper discussion on resourcefulness in the AI sector.
With President Trump taking the reins of power from his predecessor, Washington and Beijing are set to push their rivalry into the generative AI field. DeepSeek’s achievement puts China on the same level as the US’ OpenAI and Google, and efficiency, collaboration, and geopolitical rivalry will shape the pace of innovation during Trump’s second term.
DeepSeek’s research on its DeepSeek-V3 LLM produced a model with 671 billion parameters, trained in approximately 55 days at a cost of $5.58 million, far less than its American rivals spent. According to the Financial Times, the same research yielded a model that matches the performance of OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet.
Open Source Takes the Lead
The DeepSeek model’s true value lies in its use of open-source technologies. By open sourcing its code, DeepSeek allows developers and researchers worldwide to build upon it, much as Meta’s release of its Llama model in 2023 catalyzed open-source innovation across the AI space.
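As a rough illustration of what open weights make possible, the sketch below loads the model with the Hugging Face transformers library. The repository name "deepseek-ai/DeepSeek-V3" and the generation settings are assumptions rather than details from the article; check the official release before running anything like this.

```python
# Minimal sketch: loading openly released DeepSeek-V3 weights with Hugging Face
# transformers. The model ID and trust_remote_code flag are assumptions; consult
# the official repository for the authoritative loading instructions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V3"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    device_map="auto",   # shard the 671B-parameter model across available GPUs
    torch_dtype="auto",
)

prompt = "Explain why open-sourcing model weights matters for researchers."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are public, this kind of experimentation requires no permission from DeepSeek, which is exactly the dynamic Meta’s Llama release set in motion.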
“In my view, for the open-source community, the center of gravity is shifting to China,” said Ion Stoica, a professor of computer science at UC Berkeley. Stoica warns that if the US clamps down on open-source innovations, it risks ceding leadership in the development of AI to China.
The DeepSeek chat success also points to a new era in AI development, one in which small teams and start-ups show that innovation is no longer the exclusive domain of companies with billion-dollar budgets. Reuven Cohen, a Toronto-based technology consultant, called DeepSeek-V3 “the kind of technology someone like me would want to use – it’s affordable and actually effective.”
As the AI race heats up, DeepSeek’s achievement shows how resourcefulness and open collaboration can disrupt an industry dominated by tech giants.
DeepSeek Company Against US Tech Giants
The DeepSeek AI model was developed by engineers on a modest budget, far below the hundreds of millions spent by major US tech giants. Unlike Meta’s supercomputers, which use 16,000 specialized chips, DeepSeek trained its model on just 2,000 Nvidia chips.
The achievement comes as US trade restrictions seek to limit China’s access to advanced chips over national security concerns. DeepSeek’s bargain-basement approach to training, however, raises questions about the unintended consequences of such controls.
“The constraints on chips in China forced the DeepSeek engineers to train it more efficiently so it could still be competitive,” said Jeffrey Ding, an assistant professor at George Washington University.
The DeepSeek-V3 chatbot has handled everything from logical problems and complex questions to writing computer code. Its performance is comparable to ChatGPT’s but comes at a fraction of the cost, proof that cutting-edge AI is not the preserve of the biggest tech companies.
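For readers who want to compare the two chatbots themselves, the sketch below queries DeepSeek through an OpenAI-compatible chat API using the standard openai Python client. The base URL, the model name "deepseek-chat", and the DEEPSEEK_API_KEY environment variable are assumptions based on common practice, not details given in the article.

```python
# Minimal sketch of querying DeepSeek's chat model through an OpenAI-compatible
# API. The base_url, model name, and DEEPSEEK_API_KEY variable are assumptions;
# consult DeepSeek's own documentation for the authoritative values.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # assumed environment variable
    base_url="https://api.deepseek.com",     # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",                   # assumed name for the V3 chat model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)
print(response.choices[0].message.content)
```

Because the request format mirrors OpenAI’s, swapping the same prompt between DeepSeek and ChatGPT makes the cost-for-comparable-quality comparison easy to run side by side.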