Microsoft explains how thousands of Nvidia GPUs created ChatGPT | Digital Trends

ChatGPT rose to viral fame in the last six months, but it didn’t come out of nowhere. According to a blog post published by Microsoft on Monday, OpenAI, the company behind ChatGPT, approached Microsoft to build an AI infrastructure on thousands of Nvidia GPUs more than five years ago.

The OpenAI and Microsoft partnership has gotten a lot of attention recently, especially after Microsoft made a $10 billion investment in the research group behind tools like ChatGPT and DALL-E 2. However, the partnership started a long time ago, according to Microsoft. Since then, Bloomberg reports that Microsoft has spent “several hundred million dollars” developing the infrastructure to support ChatGPT and projects like Bing Chat.

Much of that money went to Nvidia, which is now at the forefront of the computing hardware needed to train AI models. Instead of gaming GPUs like you’d find on a list of the best graphics cards, Microsoft turned to Nvidia’s enterprise-grade GPUs like the A100 and H100.

However, it’s not as simple as putting graphics cards together and training a language model. As Nidhi Chappell, Microsoft’s product manager for Azure, explains: “This isn’t something where you just buy a bunch of GPUs, plug them in, and they start working together. There is a lot of optimization at the system level to get the best performance, and that comes with a lot of experience over many generations.”

With the infrastructure in place, Microsoft is now opening up its hardware to others. The company announced Monday in a separate blog post that it would offer Nvidia H100 systems “on demand in sizes ranging from eight to thousands of Nvidia H100 GPUs,” delivered over Microsoft’s Azure network.

ChatGPT’s skyrocketing popularity has been a boon for Nvidia, which has been investing in AI across hardware and software for several years. AMD, Nvidia’s main competitor in gaming graphics cards, has been trying to push its way into the space with accelerators like the Instinct MI300.

According to Greg Brockman, president and co-founder of OpenAI, training ChatGPT would not have been possible without the computing power provided by Microsoft: “Co-designing supercomputers with Azure has been crucial for scaling our demanding AI training needs, making our research and alignment work on systems like ChatGPT possible.”

Nvidia is expected to reveal more about future AI products during the GPU Technology Conference (GTC), with the main presentation beginning on March 21. Microsoft is expanding its AI roadmap later this week, with a presentation focused on the future of AI in the workplace scheduled for March 16.


James D. Brown