
“The world’s most powerful chip” — Nvidia says its new Blackwell is set to power the next generation of AI

The next generation of AI will be powered by Nvidia hardware, the company has declared after unveiling its latest line of GPUs.

Company CEO Jensen Huang took the wraps off the new Blackwell chips at Nvidia GTC 2024 today, promising a major step forward in terms of AI power and efficiency.

The first Blackwell “superchip”, the GB200, is set to ship later this year, with the ability to scale from a single rack all the way up to an entire data center, as Nvidia looks to extend its leadership in the AI race.

Nvidia Blackwell

Huang noted that Blackwell represents a significant step forward from its predecessor, Hopper: it packs 208 billion transistors (up from 80 billion in Hopper) across two GPU dies, connected by a 10TB/second chip-to-chip link into a single, unified GPU.

This makes Blackwell up to 30x faster than Hopper when it comes to AI inference tasks, offering up to 20 petaflops of FP4 power, far ahead of anything else on the market today.

(Image credit: Future / Mike Moore)

During his keynote, Huang highlighted not only the huge jump in power between Blackwell and Hopper – but also the major difference in size.

“Blackwell’s not a chip, it’s the name of a platform,” Huang said. “Hopper is fantastic, but we need bigger GPUs.”

Despite this, Nvidia says Blackwell can reduce cost and energy consumption by up to 25x. The company gives the example of training a 1.8 trillion parameter model, which would previously have required 8,000 Hopper GPUs and 15 megawatts of power, but can now be done with just 2,000 Blackwell GPUs consuming four megawatts.
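
As a rough illustration of those figures (a back-of-the-envelope sketch using only the numbers quoted above, not Nvidia's own methodology behind the "up to 25x" claim), the raw ratios work out to 4x fewer GPUs and roughly 3.75x less power:

# Back-of-the-envelope comparison using the figures quoted above.
# Illustrative only; Nvidia's "up to 25x" figure covers cost and energy
# per unit of work, which these raw ratios don't capture.
hopper_gpus, hopper_megawatts = 8_000, 15
blackwell_gpus, blackwell_megawatts = 2_000, 4

gpu_ratio = hopper_gpus / blackwell_gpus              # 4.0x fewer GPUs
power_ratio = hopper_megawatts / blackwell_megawatts  # 3.75x less power

print(f"GPUs required: {gpu_ratio:.2f}x fewer")
print(f"Power consumed: {power_ratio:.2f}x less")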

The new GB200 brings together two Nvidia B200 Tensor Core GPUs and a Grace CPU to create what the company simply calls “a massive superchip” able to drive forward AI development, providing 7x the performance and 4x the training speed of an H100-powered system.

The company also revealed a next-generation NVLink network switch chip with 50 billion transistors, which will allow up to 576 GPUs to talk to each other with 1.8 terabytes per second of bidirectional bandwidth.
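
Taken at face value (a simple multiplication of the figures quoted above, not an official Nvidia number), that would put the aggregate bidirectional bandwidth across a full 576-GPU domain at roughly a petabyte per second:

# Illustrative arithmetic based on the quoted NVLink figures.
gpus = 576
per_gpu_tbps = 1.8  # terabytes per second, bidirectional, per GPU

aggregate_tbps = gpus * per_gpu_tbps
print(f"Aggregate bandwidth: {aggregate_tbps:,.0f} TB/s (~{aggregate_tbps / 1000:.1f} PB/s)")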

Nvidia has already signed up a host of major partners to build Blackwell-powered systems, with AWS, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure on board alongside many other big industry names.
