Oracle now runs thousands of Nvidia Blackwell GPUs on its Oracle Cloud

Oracle has deployed thousands of Nvidia GPUs across its Oracle Cloud Infrastructure (OCI) cloud service to develop and run next-generation reasoning models and AI agents. This is the first wave of liquid-cooled Nvidia GB200 NVL72 racks in OCI data centers, involving thousands of Nvidia Grace CPUs and Blackwell GPUs.

The Nvidia GB200 NVL72 is a rack-scale supercomputer made up of 36 Arm-based Nvidia Grace CPUs, each paired with two Blackwell GPUs and connected via NVLink. Each GB200 NVL72 offers more than one exaflop of training performance.

Oracle aims to eventually build a cluster of more than 100,000 Blackwell GPUs, which will form one of its "OCI Superclusters." In addition to the hardware, the two companies provide a full stack of software and database integrations. Oracle has previously built an OCI Supercluster with 65,536 Nvidia H200 GPUs, based on the older Hopper GPU architecture and without paired CPUs, that offers up to 260 exaflops of peak FP8 performance.

According to the blog post announcing the availability, the Blackwell GPUs are available via Oracle's public, government, and sovereign clouds, as well as in customer-owned data centers through its OCI Dedicated Region and Alloy offerings.

Oracle joins a growing list of cloud providers that have made the GB200 NVL72 system available, including Google, CoreWeave, and Lambda. Microsoft also offers GB200 GPUs, though it does not deploy them as an NVL72 machine. The NVL72 is unique because it presents its many CPUs and GPUs to system software as a single image with a shared memory space, rather than as 72 individual GPUs, each with its own memory space.
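The H200 Supercluster figures above can be sanity-checked with quick arithmetic. The per-GPU number below is derived from the article's totals, not stated in it, so treat it as a back-of-the-envelope sketch that assumes peak performance scales linearly with GPU count:

```python
# Derive per-GPU throughput from the H200 OCI Supercluster figures
# (65,536 GPUs, 260 exaflops peak FP8) quoted in the article.
h200_gpus = 65_536
cluster_fp8_exaflops = 260

# 1 exaflop = 1,000 petaflops
per_gpu_pflops = cluster_fp8_exaflops * 1_000 / h200_gpus
print(f"~{per_gpu_pflops:.2f} PFLOPS of peak FP8 per H200 GPU")  # ~3.97 PFLOPS
```

That result lines up with Nvidia's published peak FP8 figure for the H200 (roughly 4 petaflops with sparsity), which suggests the 260-exaflop cluster number is a simple linear sum of per-GPU peaks.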
Clusters often have trouble scaling beyond eight GPUs, but the NVL72 achieves its scale through the fifth generation of Nvidia's NVLink, providing an exceptionally high aggregate GPU-to-GPU interconnect bandwidth of up to 130 TB/s, according to Nvidia. This enables fast data sharing and synchronization across all GPUs, which is needed for training large AI models.
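The 130 TB/s figure is an aggregate across the rack; dividing it by the NVL72's 72 GPUs gives a rough per-GPU NVLink bandwidth. The per-GPU value is inferred here, not stated in the article:

```python
# Split the aggregate NVLink bandwidth across the NVL72's 72 Blackwell GPUs.
aggregate_tb_per_s = 130
gpus = 72

per_gpu_tb_per_s = aggregate_tb_per_s / gpus
print(f"~{per_gpu_tb_per_s:.2f} TB/s of NVLink bandwidth per GPU")  # ~1.81 TB/s
```

For comparison, that is orders of magnitude more than a typical cluster network link, which is what lets the 72 GPUs behave as one shared-memory device rather than as loosely networked nodes.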