New generation of chips will drive the AI wave


Some of you will remember the days when downloading a movie meant a wait of over an hour. Today, the latest chips can move the equivalent of more than 160 full-HD movies' worth of data in under one second.
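
As a rough sanity check on that figure: a full-HD movie is on the order of 5GB, and a single HBM3 stack is commonly quoted at a peak of roughly 819GB per second. Both numbers are illustrative assumptions rather than figures from this article, but the arithmetic, sketched below, lands in the same ballpark.

```python
# Back-of-the-envelope check of the "160 movies per second" claim.
# Assumptions (not from the article): a full-HD movie is roughly 5 GB,
# and a single HBM3 stack peaks at about 819 GB/s.
MOVIE_SIZE_GB = 5.0                 # assumed size of one full-HD movie
HBM3_STACK_BANDWIDTH_GBS = 819.0    # commonly quoted per-stack peak for HBM3

movies_per_second = HBM3_STACK_BANDWIDTH_GBS / MOVIE_SIZE_GB
print(f"~{movies_per_second:.0f} movies' worth of data per second")  # ~164
```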

The artificial intelligence chip industry has developed a newfound appreciation for high bandwidth memory, or HBM, the technology behind such lightning-fast data transfers. When it was first launched in 2013, analysts deemed it unlikely ever to become commercially viable.

But US chip designers Nvidia and AMD are breathing new life into the advanced technology, which has now become a critical component in all AI chips. The most pressing issue for AI chipmakers is the ever-growing demand for processing power and bandwidth as companies rush to expand data centres and develop AI systems such as large language models.

Meanwhile, data-heavy generative AI applications are pushing the performance limits of what conventional memory chips can offer. Faster processing and data transfer rates require a larger number of chips, which take up more physical space and consume more power.

Until now, the traditional set-up has been to place chips side by side on a flat surface and connect them with wiring and fuses. The more chips there are, the slower the communication between them and the higher the power consumption.

HBM upends decades of chip industry convention by stacking multiple layers of chips on top of each other and using cutting-edge components, including a tiny circuit board thinner than a piece of paper, to pack chips much closer together in a three-dimensional shape.

This improvement is critical to AI chipmakers because proximity between chips uses less energy: HBM consumes about three-quarters less than traditional structures. Research has also shown that HBM provides as much as five times higher bandwidth while taking up less than half the space of current offerings.
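
Much of that bandwidth advantage comes from bus width: packing chips close together in a stack makes an extremely wide interface practical, so each pin can run slower while the total transfer rate climbs. Below is a minimal sketch of the standard peak-bandwidth formula (per-pin rate times bus width), using commonly quoted GDDR6 and HBM3 figures that are illustrative assumptions, not numbers from this article.

```python
# Peak bandwidth = per-pin data rate (Gb/s) * bus width (bits) / 8 bits per byte.
# Figures below are commonly quoted spec values, not from the article.

def peak_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak transfer rate in GB/s from per-pin rate and bus width."""
    return pin_rate_gbps * bus_width_bits / 8

gddr6 = peak_bandwidth_gbs(16.0, 32)    # one GDDR6 chip: fast pins, narrow bus
hbm3 = peak_bandwidth_gbs(6.4, 1024)    # one HBM3 stack: slower pins, very wide bus

print(f"GDDR6 chip: {gddr6:.0f} GB/s")  # ~64 GB/s
print(f"HBM3 stack: {hbm3:.0f} GB/s")   # ~819 GB/s
```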

While the technology is advanced, it is not new. AMD and South Korean chipmaker SK Hynix started working on HBM 15 years ago, when high-performance chips were mostly used in the gaming sector.

Critics at the time were sceptical that the performance boost would be worth the added costs. HBM uses more components than traditional chips, many of which are intricate and difficult to manufacture. By 2015, two years after its launch, analysts expected HBM to be relegated to a tiny niche. Costs seemed too high for mass-market use.

HBM chips are still expensive today, costing at least five times more than standard memory chips. The difference now is that the AI chips they go into fetch a steep price too. And the tech giants now have a much larger budget to spend on advanced chips than gamers did a decade ago.

For now, just one company, SK Hynix, is able to mass produce the third generation of HBM, the version used in the latest AI chips. It holds 50 per cent of the global market, with the rest split between two rivals, Samsung and Micron, according to data from consultancy TrendForce. Both produce older-generation HBM and are set to release their latest versions in the coming months. They stand to gain a windfall from the high-margin product as demand rises rapidly: in June, TrendForce forecast global demand for HBM would rise 60 per cent this year.

The AI chip war is about to turbocharge that growth even further next year. AMD has just launched a new product in hopes of taking on Nvidia. A global shortage, coupled with strong demand for a more affordable alternative to Nvidia's offerings, means a lucrative opportunity for rivals able to offer chips with comparable specifications. But shaking Nvidia's dominance is another story: the key to it lies not just in the physical chip itself but in its software ecosystem, which includes popular developer tools and programming models. Replicating that will take years.

That is why hardware, and the number of HBM stacks squeezed into each new chip, will now become yet more important for contenders that want to take on Nvidia. Boosting memory capacity through upgraded HBM is one of the few ways to gain a competitive edge in the short term. For example, AMD's latest MI300 accelerator uses eight HBM stacks, more than the five or six in Nvidia's products.
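
Capacity and peak bandwidth both scale linearly with the number of stacks, which is why that count matters. A small illustrative sketch follows; the per-stack figures are commonly quoted HBM3 values and vary by product, so they are assumptions rather than specifications from this article.

```python
# Total capacity and aggregate peak bandwidth scale with the stack count.
# Per-stack figures are commonly quoted HBM3 values, not from the article,
# and real products vary in capacity and pin speed per stack.
STACK_CAPACITY_GB = 24.0     # assumed capacity of one HBM3 stack
STACK_BANDWIDTH_GBS = 819.0  # assumed peak bandwidth of one HBM3 stack

for name, stacks in [("8-stack design", 8),
                     ("6-stack design", 6),
                     ("5-stack design", 5)]:
    capacity = stacks * STACK_CAPACITY_GB
    bandwidth_tbs = stacks * STACK_BANDWIDTH_GBS / 1000
    print(f"{name}: {capacity:.0f} GB, {bandwidth_tbs:.1f} TB/s peak")
```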

Chipmaking has historically been a cyclical industry prone to dramatic booms and busts. On its present course, a lasting increase in demand for new products should make future downturns less turbulent than in the past.

