Hewlett Packard Enterprise (HPE) has announced a supercomputing solution for generative AI (GenAI), designed to allow large enterprises, research institutions and government organizations to accelerate the training and tuning of AI models using private data sets. The turnkey supercomputing solution for GenAI is expected to accelerate training speeds by two to three times and will be generally available worldwide in December 2023.
Powered by NVIDIA GH200 Grace Hopper Superchips, the solution integrates HPE Cray supercomputing technology based on the same architecture used in HPE’s Frontier supercomputer. Key components include new software tools to build AI applications, customize pre-built models, and develop and modify code.
With liquid-cooled supercomputers, accelerated compute, networking, storage and services, NVIDIA and HPE aim to offer organizations the scale and performance required for large AI workloads, such as large language model (LLM) and deep learning recommendation model (DLRM) training, helping them unlock value from AI faster.
Justin Hotard, executive vice president and general manager, HPC, AI and Labs, HPE, said: “The world’s leading companies and research centers are training and tuning AI models to drive innovation and unlock breakthroughs in research, but to do so effectively and efficiently, they need purpose-built solutions.
“To support generative AI, organizations need to leverage solutions that are sustainable and deliver the dedicated performance and scale of a supercomputer to support AI model training. We are thrilled to expand our collaboration with NVIDIA to offer a turnkey AI-native solution that will help our customers significantly accelerate AI model training and outcomes.”
Ian Buck, vice president of Hyperscale and HPC at NVIDIA, said: “Generative AI is transforming every industrial and scientific endeavor. NVIDIA’s collaboration with HPE on this turnkey AI training and simulation solution, powered by NVIDIA GH200 Grace Hopper Superchips, will provide customers with the performance needed to achieve breakthroughs in their generative AI initiatives.”