On Jan. 27, the tech-heavy Nasdaq Composite index took a sudden 3.1 percent dive, and Nvidia, the chipmaker at the heart of the AI revolution, plunged 17 percent, the largest single-day loss of market value in Wall Street history.
Just a week prior to the crash, the global artificial intelligence race appeared to be firmly in America’s grip. President Donald Trump had just announced a $500 billion investment to build new AI infrastructure, dubbed “Project Stargate.” The move seemed like a decisive show of technical prowess, one that would ensure the United States’ AI dominance. What happened?
In the year and a half since its founding in July 2023, Chinese startup DeepSeek had achieved what took many Silicon Valley giants years of relentless research and development. Its cutting-edge model, R1, disrupted the AI landscape and overshadowed big names like ChatGPT, Llama, and Gemini. Investors scrambled to reassess their positions as the stunning sell-off highlighted a new reality: The AI landscape was no longer a Western monopoly.
“I do know that [the news] was mainly talking about the massive cost differences between GPT and DeepSeek,” said computer science teacher Bobby Bryant.
Unlike ChatGPT, which relies on a large number of imported Nvidia graphics processing units (GPUs), DeepSeek optimizes performance with a Mixture-of-Experts (MoE) model. Rather than running an entire neural network at full capacity, this architecture activates only the small subset of “expert” subnetworks relevant to each input, saving the company computing costs.
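In rough terms, MoE routing works like this. The sketch below is a toy illustration, not DeepSeek’s actual design: the “experts” are simple functions, the gate is a softmax over made-up weights, and only the top two experts run on each input.

```python
import math

# Toy Mixture-of-Experts routing: only the top-k scoring "experts"
# run on a given input; the rest stay idle, which is where the
# compute savings come from. All numbers here are illustrative.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_weights, top_k=2):
    """Route input x to the top_k experts chosen by a simple gate."""
    scores = softmax([w * x for w in gate_weights])
    ranked = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)
    active = ranked[:top_k]  # only these experts are evaluated
    total = sum(scores[i] for i in active)
    output = sum(scores[i] / total * experts[i](x) for i in active)
    return output, active

# Four tiny "experts"; a real model would use neural subnetworks.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x, lambda x: -x]
gate_weights = [0.1, 0.9, 0.5, -0.3]

output, active = moe_forward(3.0, experts, gate_weights)
print(f"experts used: {active} of {len(experts)}")  # only 2 of 4 experts ran
```

The key point is that the skipped experts cost nothing: a full model with hundreds of billions of parameters only pays for the fraction it activates per token.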
“[The United States] has to import all of our graphics cards from [different] places,” said Bryant, “whereas [China] has easier access to both manufacture them or source them more locally than we could.”
Together, the lower infrastructure costs and more efficient model training allow DeepSeek to run at a much lower price with comparable quality. For example, DeepSeek’s V3 model can process one million input tokens and one million output tokens for $0.42, compared to roughly $90 with OpenAI’s GPT-3.5-Turbo.
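For a sense of scale, the quoted figures work out to a gap of more than two orders of magnitude. The rates below are implied by the article’s numbers ($0.42 and $90 for the same two-million-token workload), not verified current API pricing.

```python
# Back-of-the-envelope cost comparison at a flat per-million-token rate.
# Prices are the article's quoted figures, not verified current pricing.

def workload_cost(total_tokens, rate_per_million):
    """Dollar cost of processing a workload at a flat per-million-token rate."""
    return total_tokens / 1_000_000 * rate_per_million

TOKENS = 2_000_000  # 1M input + 1M output, as in the example above
deepseek_cost = workload_cost(TOKENS, 0.21)  # implied rate: $0.42 / 2M tokens
gpt_cost = workload_cost(TOKENS, 45.00)      # implied rate: $90 / 2M tokens
print(f"${deepseek_cost:.2f} vs ${gpt_cost:.2f}: "
      f"about {gpt_cost / deepseek_cost:.0f}x cheaper")
```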
This low-cost model quickly attracted individual developers, startups, and enterprises seeking to cut AI expenses without compromising quality. Within hours of its launch, DeepSeek “dethroned” rival ChatGPT as the most downloaded free app on Apple’s App Store.
Tech enthusiasts eagerly tested to see whether its performance matched its claims about efficiency and accuracy.
“I think that’s the question–have they actually perfected a method where you can get a model that’s comparable to ChatGPT 3.5 or maybe even 4.0, but do it for a fraction of the price?” said Innovation Fellows advisor Stephen Addcox.
According to Ars Technica, which ran head-to-head tests of the two leading models, DeepSeek’s R1 was indeed “competitive with the best paid models from OpenAI.”
Despite DeepSeek’s capability and affordability, users have reported issues with reliability.
“A very frustrating part is that you can’t continue a conversation very far until it says its servers are busy, and then you have to restart,” said junior Enpeng Jiang.
Nonetheless, DeepSeek’s rapid rise was not solely about budget-friendly hardware or capable architecture. While companies like OpenAI guard their code and data pipelines behind proprietary licenses, DeepSeek took a different route, publicly releasing large portions of its model and training process for anyone to scrutinize, adapt, and improve.
“All of a sudden someone comes out and it’s like, actually, you can do this for a lot cheaper, and we’re releasing all the tools you need to do it yourself,” said computer science teacher Mitchell Griest. “That was pretty cool for the general populace, including myself.”
Tech experts credit this openness with jumpstarting a new wave of AI innovation. Freed from subscription models and high-end computing demands, local developers began fine-tuning R1 for specialized tasks, from medical text analysis to real-time financial reporting. Some even predict that DeepSeek’s open-source philosophy may challenge not just OpenAI but also industry norms that often favor closed-door research.
“A lot of the AI models were going towards more computing power, more money, more investment…and then DeepSeek kind of reversed the trend,” said Jiang.
However, whether DeepSeek’s techniques can be consistently replicated and sustained without compromising quality or ethical standards remains an open question.
Following the release of DeepSeek, regulators have begun paying attention. In the United States, a bipartisan group of lawmakers recently requested an investigation into DeepSeek’s data sourcing and privacy practices. The European Union is likewise weighing how open-source models fit into its regulatory framework. Meanwhile, China’s own regulatory climate raises concerns about content filtering and government access to user data.
But beyond government investigations, DeepSeek and ChatGPT’s rivalry is not just a battle of cost and performance–it’s a battle of perception.
“ChatGPT is here, and DeepSeek is there,” said Bryant. “And that’s going to be the way that it gets spun, and people aren’t going to look into the fact that ChatGPT, you have to use their servers for it. DeepSeek, you can download and run on your own. [I’ve] had to tell my mom and my sister, I’m like, ‘you’ve been using Siri for years, and you’ve been fine with it, but that takes everything from [your] conversation and sends it to someone else’s servers.’”
The intensifying rivalry between DeepSeek and ChatGPT has already begun to reshape the AI climate. Microsoft continues to integrate OpenAI’s models more deeply into Office 365 and Azure, Google has accelerated the release of Gemini in hopes of outmaneuvering both DeepSeek and ChatGPT in future product rollouts, and OpenAI has begun offering its o3-mini model to free users. AI is no longer a race dominated by a single region or company but rather a global competition where speed, cost, and versatility can instantly shift market leadership.
Looking ahead, this relentless drive to outpace rivals may have unintended consequences. As AI systems become more advanced, prioritizing rapid development over extensive safety checks could produce powerful models that are inadequately vetted for bias, security gaps, and reliability.
“Large language models need vast amounts of text, and [a] way to get that is by feeding it more human-generated content,” said Addcox. “The problem is, the internet itself is becoming increasingly filled with AI-generated text.”
With DeepSeek and ChatGPT locked in a competitive sprint, the unchecked accumulation of AI-generated content, data privacy concerns, and the erosion of human-created material could become significant pitfalls in the future of AI development.
“If we do end up someday creating a superhuman intelligence… we have to get it right every single time,” said Griest. “And the idea is, once we do this once, we’ll probably do it many, many times. And if you get it wrong enough one time, it could have really devastating effects.”
Edited by Ayan Chaganthi