Nvidia is the king of AI chips... How long will this continue?

The meteoric rise that once made Nvidia the world’s most valuable company has stalled, with investors wary of pouring more money into the chipmaker as it becomes clear that the adoption of AI computing won’t be a straight path or rely solely on Nvidia technology.
For now, Nvidia remains the leading vendor in the AI gold rush, with revenues still rising and an order book swollen for the company’s Hopper line of chips and its successor, Blackwell.
The company’s continued success will depend on whether Microsoft, Google and other tech giants can find enough commercial uses for AI to make a profit from the massive investments they’ve made in Nvidia chips. Even if they do, it’s unclear how many of the company’s most powerful and profitable chips will be needed.
In January, Chinese startup DeepSeek released an AI model that it said performed as well as those produced by major U.S. companies but required far fewer resources to develop.
After DeepSeek published a paper detailing the new model’s capabilities and how it was created, Nvidia’s market cap fell by $589 billion in a single day, the biggest one-day loss of market value in stock market history. The stock rebounded in the weeks that followed, but by the end of February it was still below its level at the start of the year. Here’s a look at what’s driving Nvidia’s phenomenal growth, and the challenges ahead.
WHAT ARE NVIDIA’S MOST POPULAR AI CHIPS? The current top-tier offering is the Hopper H100, named after computer science pioneer Grace Hopper. It’s a more powerful version of a graphics processing unit that first appeared in personal computers used by video gamers. The Hopper is being replaced at the top of the line by the Blackwell series, named after mathematician David Blackwell.
Both Hopper and Blackwell feature technology that turns clusters of computers using Nvidia chips into single units that can process large volumes of data and perform calculations at high speeds, making them perfect for the power-intensive task of training the neural networks that underpin the latest generation of AI products.
Founded in 1993, Nvidia pioneered the market more than a decade ago with investments made on the belief that parallel processing capabilities would one day make its chips valuable in applications beyond gaming. The Santa Clara, California-based company will sell Blackwell in a variety of configurations, including as part of the GB200 superchip, which combines two Blackwell GPUs with a Grace CPU, a general-purpose central processing unit. (The Grace CPU is also named after Grace Hopper.)
WHY ARE NVIDIA'S AI CHIPS SPECIAL?
Generative AI platforms learn tasks like translating text, summarizing reports, and synthesizing images by digesting vast amounts of preexisting material, and the more they see, the better they perform. Such platforms evolve through trial and error, running billions of trials to reach proficiency and consuming vast amounts of computing power in the process. Blackwell offers 2.5 times the performance of Hopper in AI training, according to Nvidia. The new design packs in so many transistors—tiny switches that give semiconductors their ability to process information—that it cannot be manufactured as a single unit using conventional methods. Instead, two dies are joined by a high-speed link that allows them to act seamlessly as a single chip, the company says.
The performance advantage offered by the Hopper and Blackwell chips is critical for customers racing to train their AI platforms to perform new tasks. The components are seen as so crucial to the development of AI that the U.S. government has restricted their sale to China.
HOW DID NVIDIA BECOME A LEADER IN AI? Nvidia was already the king of graphics chips, the components that create the images you see on your computer screen. The most powerful of these were built with thousands of processing cores that performed multiple computational tasks simultaneously, allowing them to produce the complex 3D renderings, such as shadows and reflections, that are a feature of today’s video games.
Nvidia engineers realized in the early 2000s that they could repurpose these graphics accelerators for other applications, while AI researchers discovered that this type of chip could finally make their work practical.
WHAT ARE NVIDIA’S COMPETITORS DOING? Nvidia currently controls about 90 percent of the market for data center GPUs, according to market research firm IDC. Dominant cloud computing providers such as Amazon.com Inc’s AWS, Alphabet Inc’s Google Cloud and Microsoft’s Azure are trying to develop their own chips, as are Nvidia rivals Advanced Micro Devices Inc. and Intel Corp. Those efforts have so far done little to erode Nvidia’s dominance.
AMD, which is seen as having the best chance of slicing into Nvidia’s lead, said in January that sales in the first half of the year would be flat compared with the previous six months. Sales would then rise in the second half of 2025, when the company launches a new chip. AMD declined to give an annual revenue target for this year, leading to speculation that it was struggling to gain momentum.
HOW DOES NVIDIA STAY AHEAD OF ITS COMPETITORS? Nvidia has updated its products, including the software that supports the hardware, at a pace no other company has yet matched. The company has also designed cluster systems that help customers buy H100s in bulk and deploy them quickly.
Chips like Intel’s Xeon processors are capable of complex data processing, but they have fewer cores and are slower at working through the mountains of information typically used to train artificial intelligence software. Intel, once the dominant provider of data center components, has so far struggled to deliver accelerators that customers are willing to choose over Nvidia equipment.
WHERE IS AI CHIP DEMAND HEADED? Nvidia Chief Executive Jensen Huang and his team have repeatedly said the company has more orders than it can fill, even for older models. When the company reports earnings today, investors will expect it to reiterate that assurance. Microsoft, Amazon, Meta, and Google have all announced plans to collectively spend hundreds of billions of dollars on AI and the data centers that will support it. But there has been speculation lately that the AI data center boom is already slowing down.
Microsoft has canceled some leases for U.S. data center capacity, raising wider concerns about whether it has secured more AI computing capacity than it needs in the long term, according to investment bank TD Cowen.
WHY IS CHINESE STARTUP DEEPSEEK CAUSING SO MUCH CONCERN? The release of DeepSeek’s R1 open-source AI model has left rivals wondering how it achieves results on par with those of its U.S. competitors while using a fraction of the resources. DeepSeek fine-tunes its AI model with real-world inputs using an approach that leans heavily on inference, which is less time-consuming and data-intensive than the training methods used by other companies. Nvidia, the company that arguably has the most to lose, has called DeepSeek’s model “a brilliant advancement in AI” and said it was achieved without violating U.S. technology export controls, which ban the sale of Nvidia’s most advanced GPUs to China. Nvidia’s response helped allay skepticism among some industry analysts who doubted that the Chinese startup had made the breakthrough it claims.
Still, Nvidia said its chips will play a key role even as there is a shift in how AI models are built. “Inference requires a significant number of Nvidia GPUs and high-performance networking,” the company said.
HOW DO AMD AND INTEL COMPARE TO NVIDIA IN AI CHIPS? AMD, the second-largest maker of computer graphics chips, introduced a version of its Instinct line in 2023 aimed at a market dominated by Nvidia’s products. A new and improved version called the MI350 will ship to customers in the middle of the year. It will perform 35 times better than its predecessor, AMD Chief Executive Officer Lisa Su said.
The company predicts revenue in the first six months of 2025 will be about the same as the previous six months. While AMD currently generates more than $5 billion annually from accelerator chips that help develop AI models, Nvidia’s sales in that category exceed $100 billion annually.
Last month, Intel management told analysts and investors that the company was “not meaningfully participating in the cloud-based AI data center market.” After failing to win positive feedback from potential customers for a chip codenamed Falcon Shores, the company scrapped plans to release it and will use it only for internal testing. The comments by interim CEO Michelle Johnston Holthaus suggested the company was further behind in the race to catch up with Nvidia than feared, and contrasted with earlier bullish claims made by outgoing CEO Pat Gelsinger.
But none of Nvidia’s rivals have yet matched the leap forward that Nvidia says Blackwell will deliver. And Nvidia’s advantage isn’t just in the performance of its hardware. The company invented CUDA, a programming platform that allows its graphics chips to be programmed for the kind of work that underlies artificial intelligence applications. The widespread use of that software tool has helped keep the industry hooked on Nvidia’s hardware.
Source: Hürriyet