No company has moved from an equity value of $1tn to $2tn as quickly as Nvidia. The chip designer doubled its market cap in less than nine months. The boom in generative artificial intelligence means demand for Nvidia chips still exceeds supply. But hitting the $3tn milestone is proving more difficult.
To some investors, AI hype has inflated valuations beyond reason. Michael Burry, whose short against the housing market was made famous in The Big Short, bet against semiconductor stocks last year. Cathie Wood’s Ark Invest dropped Nvidia in early 2023, claiming the market was pricing in too much optimism.
Yet compared with its own market history, Nvidia’s valuation does not look stretched. The stock trades at 35 times expected earnings, down from 55 times at the start of 2022. Given the size of its potential market and the pricing power it wields, Nvidia has a shot at becoming the most valuable stock in the US.
Those concerned by Nvidia’s valuation point to the advances made by Google’s TPU, AMD’s MI300X and other would-be rivals. But Nvidia has two advantages: a head start and a closed ecosystem. It originally created its graphics processing units (GPUs) to render computer graphics. They turned out to be well suited to the vast parallel calculations needed to train large language models for generative AI. Rivals are constrained by the scarcity of semiconductor fabrication plants capable of making new high-powered chips. These take years and tens of billions of dollars to build.
Nvidia also offers a one-stop shop for customers via its CUDA platform, the software layer that lets developers write programs that run on Nvidia chips. One comparison might be Apple, which creates software that runs on its own hardware.
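For readers curious what “programs that run on Nvidia chips” look like in practice, here is a minimal, illustrative CUDA sketch (not from the article): a kernel that adds two vectors on the GPU. The array size and all names are arbitrary choices made for the example.

```cuda
// Illustrative CUDA example: add two vectors on an Nvidia GPU.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                      // 1M elements (arbitrary size)
    const size_t bytes = n * sizeof(float);

    // Host buffers with simple test data.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers; copy inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and check one value.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f\n", hc[0]);             // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```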
Combined, these advantages help Nvidia to dominate the AI chip market and set its own prices. The company’s gross margins have jumped from 65 per cent to nearly 73 per cent in the space of two years. Compare that with Intel’s 41 per cent, or AMD’s 51 per cent.
Sales could continue to climb. Training large language models requires large numbers of GPUs. So does the next stage of generative AI use: inference. The market for processing generative AI requests, such as generating videos, images and text, could be far larger than the market for training the models. Nvidia’s latest chip, the Blackwell B200 GPU, handles these requests 30 times faster than its predecessor. That should keep the company ahead of rivals as the focus of generative AI shifts.