Nvidia continues to ride the AI wave as the company sees unprecedented demand for its next-generation Blackwell GPUs. Supply for the next 12 months is sold out, Nvidia CEO Jensen Huang told Morgan Stanley analysts during an investors’ meeting.
A similar situation occurred with Hopper GPUs several quarters ago, Morgan Stanley analyst Joe Moore pointed out.
Nvidia’s traditional customers, including major tech companies such as AWS, Google, Meta, Microsoft, Oracle, and CoreWeave, are driving the overwhelming demand for Blackwell GPUs. Every Blackwell GPU that Nvidia and its manufacturing partner TSMC can produce over the next four quarters has already been purchased by these companies.
The exceptionally high demand appears to solidify the continued growth of Nvidia’s already formidable footprint in the AI processor market, even with competition from rivals such as AMD, Intel, and various smaller cloud service providers.
“Our view continues to be that Nvidia is likely to gain share of AI processors in 2025, as the biggest users of custom silicon are seeing very steep ramps with Nvidia solutions next year,” Moore said in a client note, according to TechSpot. “Everything that we heard this week reinforced that.”
The news comes months after Gartner predicted that AI chip revenue will skyrocket in 2024.
Nvidia introduced the Blackwell GPU platform in March, hailing its ability to “unlock breakthroughs in data processing, engineering simulation, electronic design automation, computer-aided drug design, quantum computing, and generative AI—all emerging opportunities for Nvidia.”
The Blackwell platform includes the B200 Tensor Core GPU and the GB200 Grace Blackwell superchip. These processors are designed to handle the demanding workloads of large language model (LLM) inference while significantly reducing energy consumption, a growing concern in the industry. At the time of its release, Nvidia said the Blackwell architecture adds capabilities at the chip level to leverage AI-based preventative maintenance to run diagnostics and forecast reliability issues.
“This maximizes system uptime and improves resiliency for massive-scale AI deployments to run uninterrupted for weeks or even months at a time and to reduce operating costs,” the company said in March.
Nvidia resolved the packaging issues it initially faced with the B100 and B200 GPUs, allowing the company and TSMC to ramp up production. Both the B100 and B200 use TSMC’s CoWoS-L packaging, and questions remain about whether the world’s largest contract chipmaker has enough CoWoS-L capacity.
It also remains to be seen whether memory makers can supply enough HBM3E memory for leading-edge GPUs like Blackwell as the demand for AI GPUs is skyrocketing. In particular, Nvidia has not yet qualified Samsung’s HBM3E memory for its Blackwell GPUs, another factor influencing supply.
Nvidia acknowledged in August that its Blackwell-based products were experiencing low yields and would necessitate a re-spin of some layers of the B200 processor to improve production efficiency. Despite these challenges, Nvidia appeared confident in its ability to boost production of Blackwell in the fourth quarter of 2024. It expects to ship several billion dollars’ worth of Blackwell GPUs in the last quarter of this year.
The Blackwell architecture may be the most complex ever built for AI. It is designed to exceed the demands of today’s models and to provide the infrastructure, engineering, and platform that organizations will need as LLM parameter counts and performance requirements continue to grow.
Nvidia is not only working to meet the compute demands of these new models; it is also concentrating on three of the biggest barriers limiting AI today: energy consumption, latency, and precision. The Blackwell architecture is designed to deliver unprecedented performance with better energy efficiency, according to the company.
Nvidia reported that data center revenue in the second quarter was $26.3 billion, up 154 percent from the same quarter one year prior.