DeepSeek R1: Why AI experts think it’s so special

by Anna Avery


All of a sudden, DeepSeek is everywhere.

Its R1 model is open source, was allegedly trained for a fraction of the cost of other AI models, and is just as good as, if not better than, ChatGPT.

This lethal combination hit Wall Street hard, causing tech stocks to tumble, and making investors question how much money is needed to develop good AI models. DeepSeek engineers claim R1 was trained on 2,788 GPUs which cost around $6 million, compared to OpenAI’s GPT-4 which reportedly cost $100 million to train.

DeepSeek’s cost efficiency also challenges the idea that larger models and more data lead to better performance. Amidst the frenzied conversation about DeepSeek’s capabilities, its threat to AI companies like OpenAI, and spooked investors, it can be hard to make sense of what’s going on. But AI experts with veteran experience have weighed in with valuable perspectives.

DeepSeek proves what AI experts have been saying for years: bigger isn’t better

Hampered by trade restrictions that limited its access to Nvidia GPUs, China-based DeepSeek had to get creative in developing and training R1. That it was able to accomplish this feat for only $6 million (which isn’t a lot of money in AI terms) was a revelation to investors.

But AI experts weren’t surprised. “At Google, I asked why they were fixated on building THE LARGEST model. Why are you going for size? What function are you trying to achieve? Why is the thing you were upset about that you didn’t have THE LARGEST model? They responded by firing me,” posted Timnit Gebru, who was famously terminated from Google for calling out AI bias, on X.

Hugging Face‘s climate and AI lead Sasha Luccioni pointed out how AI investment is precariously built on marketing and hype. “It’s wild that hinting that a single (high-performing) LLM is able to achieve that performance without brute-forcing the shit out of thousands of GPUs is enough to cause this,” said Luccioni.

Clarifying why DeepSeek R1 is such a big deal

DeepSeek R1 performed comparably to OpenAI’s o1 model on key benchmarks. It marginally surpassed, equaled, or fell just below o1 on math, coding, and general knowledge tests. That is to say, there are other models out there, like Anthropic’s Claude, Google’s Gemini, and Meta’s open source model Llama, that are just as capable for the average user.

But R1 is causing such a frenzy because of how little it cost to make. “It’s not smarter than earlier models, just trained more cheaply,” said AI research scientist Gary Marcus.

The fact that DeepSeek was able to build a model that competes with OpenAI’s models is pretty remarkable. Andrej Karpathy, who co-founded OpenAI, posted on X, “Does this mean you don’t need large GPU clusters for frontier LLMs? No, but you have to ensure that you’re not wasteful with what you have, and this looks like a nice demonstration that there’s still a lot to get through with both data and algorithms.”

Wharton AI professor Ethan Mollick said it’s not about R1’s capabilities, but about the models people currently have access to. “DeepSeek is a really good model, but it is not generally a better model than o1 or Claude,” he said. “But since it is both free and getting a ton of attention, I think a lot of people who were using free ‘mini’ models are being exposed to what an early 2025 reasoner AI can do and are surprised.”

Score one for open source AI models

DeepSeek R1’s breakout is a huge win for open source proponents who argue that democratizing access to powerful AI models ensures transparency, innovation, and healthy competition. “To people who think ‘China is surpassing the U.S. in AI,’ the correct thought is ‘open source models are surpassing closed ones,'” said Yann LeCun, chief AI scientist at Meta, which has supported open sourcing with its own Llama models.

Computer scientist and AI expert Andrew Ng didn’t explicitly mention the significance of R1 being an open source model, but highlighted how the DeepSeek disruption is a boon for developers, since it grants access that is otherwise gatekept by Big Tech.

“Today’s ‘DeepSeek selloff’ in the stock market — attributed to DeepSeek V3/R1 disrupting the tech ecosystem — is another sign that the application layer is a great place to be,” said Ng. “The foundation model layer being hyper-competitive is great for people building applications.”
