Nvidia and generative AI – “During a gold rush, sell shovels”. 

“We are at the iPhone moment for AI,” said Nvidia CEO Jensen Huang during his keynote speech at Nvidia’s GPU Technology Conference (GTC) this morning. Like eager children waiting for their presents on Christmas morning, developers watched with bated breath as Huang unveiled new toys for everyone.

A quick summary of the hardware/software relevant to generative AI introduced in his keynote:

Hardware (four new AI inference chips + a new cloud solution)

H100 NVL – for deployment of Large Language Models (LLMs) at scale; 12x faster GPT-3 inference performance compared to the previous generation
L40 – for image generation; 7x inference performance for Stable Diffusion compared to the previous generation
L4 – for AI video generation; also doubles as a general-purpose GPU for other workloads
Grace Hopper – for recommendation models (e.g. search engines, predicting user reviews/preferences), graph neural networks and vector databases
DGX Cloud – on-demand access to DGX supercomputers starting at US$ 37k per node, meant to make training of AI models more accessible



AI Foundations “The industry needs a foundry, a TSMC for custom large language models.”

Provides a framework for companies to build and develop AI models using their own proprietary data. Integrated with DGX Cloud, it comprises NeMo (specifically for LLMs), NVIDIA Picasso (for image, video and 3D generation) and NVIDIA BioNeMo (for drug research).



Partnered with Microsoft Azure, Google Cloud Platform and Oracle Cloud Infrastructure to power DGX Cloud.

Also partnered with Adobe, Getty Images and Shutterstock to train AI models on fully licensed data, thus avoiding the legal headaches over copyright that other companies are facing right now.

Essentially, there is a specialised GPU for every part of generative AI, from training to deployment. I was not kidding when I said everyone got something to play with.

As we can see, Nvidia takes a platform approach and innovates consistently, bringing cutting-edge technology to the table at a rapid cadence and leaving its competitors in the dust. There were concerns about waning demand for GPUs after Ethereum switched from Proof-of-Work to Proof-of-Stake, and after the broader crypto crash of 2022. However, the AI arms race has revitalised demand for GPUs as tech companies rush to build their own models.


“During a gold rush, sell shovels.” Currently, Nvidia seems to have this nailed down and is untouchable by competitors. A conservative estimate puts Nvidia’s share of the AI processor market at 80%.


These are just some of the insights covered in our new report – “The future, by ChatGPT”. Download your copy here. Momentum Academy will also be holding a Zoom briefing on this report on Thursday, 6 April, 3PM – 4PM SGT. You can register for the briefing here.

If you would like this briefing and other insights for your leadership team, you are welcome to contact Momentum Academy ([email protected]).



Thanks for reading The Low Down (TLD), the blog by the team at Momentum Works. Got a different perspective or have a burning opinion to share? Let us know at [email protected].