Nvidia shares soared near a $1 trillion market cap in after-hours trading on Wednesday after it reported a shockingly strong outlook for the future and CEO Jensen Huang said the company would have a “huge record year.”
Sales are rising due to growing demand for the graphics processing units (GPUs) that Nvidia makes, which power AI applications such as those from Google, Microsoft and OpenAI.
Demand for AI chips in data centers led Nvidia to forecast about $11 billion in sales for the current quarter, far above analysts’ estimates of $7.15 billion.
“The flashpoint was generative AI,” Huang said in an interview with CNBC. “We know CPU scaling has slowed, we know accelerated computing is the way forward, and then the killer app came along.”
Nvidia believes it is driving a sea change in the way computers are built, which could lead to even more growth — data center components could eventually become a $1 trillion market, Huang says.
Historically, the most important part in a computer or server was the central processing unit, or CPU. That market is dominated by Intel, with AMD as its main rival.
With the advent of AI applications that require enormous amounts of computing power, the graphics processing unit (GPU) has taken center stage, and the most advanced systems use as many as eight GPUs for every CPU. Nvidia currently dominates the market for AI GPUs.
“The data center of the past, which was largely a file retrieval processor, will be a data generator in the future,” Huang said. “Instead of mining data, you’re going to mine some data, but you have to generate most of the data using AI.”
“So instead of millions of CPUs, you’re going to have a lot less CPUs, but they’re going to be connected to millions of GPUs,” Huang continued.
For example, Nvidia’s own DGX systems, which are essentially an AI training computer in a box, use eight of Nvidia’s high-end H100 GPUs and just two CPUs.
Google’s A3 supercomputer combines eight H100 GPUs along with one high-end Xeon processor made by Intel.
That’s one reason why Nvidia’s data center business grew 14% in the first calendar quarter, compared with flat growth for AMD’s data center unit and a 39% decline in Intel’s AI and data center business unit.
Nvidia’s GPUs also tend to be far more expensive than most CPUs. Intel’s latest generation of Xeon processors can cost up to $17,000 at list price, while a single Nvidia H100 can sell for $40,000 on the secondary market.
Nvidia will face increasing competition as the AI chip market heats up. AMD has a competitive GPU business, especially in gaming, and Intel has its own line of GPUs as well. Startups are creating new types of chips specifically for AI, and mobile-focused companies such as Qualcomm and Apple continue to push the technology so that one day it could run in your pocket rather than in a giant server farm. Google and Amazon are designing their own AI chips.
But Nvidia’s high-end GPUs remain the chips of choice for companies currently building applications like ChatGPT, which are expensive to train on terabytes of data and expensive to run later in a process called “inference,” which uses the model to generate text, images or predictions.
Analysts say Nvidia remains in the lead for AI chips because of its proprietary software that makes it easy to use all of the GPU’s hardware features for AI applications.
Huang said on Wednesday that the company’s software will not be easy to copy.
“You have to design all the software and all the libraries and all the algorithms, integrate them and optimize the frameworks, and optimize it for the architecture — not just of a single chip but the architecture of an entire data center,” Huang said on a call with analysts.