Accelerate Training & Inference

NVIDIA HGX H200 GPU Servers

Unleash the Hopper architecture in your data center with NVIDIA H200 Tensor Core GPUs. Perfect for large-scale AI training, high-throughput inference, and advanced HPC workloads.

NVIDIA HGX H200 Server Options

Blackwell Cloud offers customizable server configurations featuring up to 8x NVIDIA HGX™ H200 GPUs, built by a range of OEMs.

NVIDIA HGX H200 - Dell

NVIDIA HGX H200 6U
Dell PowerEdge XE9680

GPU
8x H200 141GB SXM5
CPU
2x Intel Xeon 48C processors
Form Factor
6U / air cooled
Manufacturer
Dell
Starting at
$265,000 USD
NVIDIA HGX H200 - Supermicro

NVIDIA HGX H200 8U 
Supermicro SYS-821GE-TNHR

GPU
8x H200 141GB SXM5
CPU
2x Intel Xeon 48C processors
Form Factor
8U / air cooled
Manufacturer
Supermicro
Starting at
$275,000 USD
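
Both configurations put eight SXM5 GPUs on a single NVLink-connected HGX baseboard, which is typically driven as one data-parallel training node. The sketch below is a minimal, hypothetical PyTorch DistributedDataParallel example for such an 8-GPU node; it assumes PyTorch with CUDA and NCCL support, and the model, batch size, and learning rate are placeholders rather than recommended settings.

# Minimal single-node data-parallel sketch for an 8-GPU HGX H200 system (hypothetical example).
# Launch with: torchrun --nproc_per_node=8 train.py
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")        # NCCL communicates over NVLink between the SXM GPUs
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda()     # placeholder model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    x = torch.randn(32, 4096, device="cuda")       # placeholder batch
    for _ in range(10):
        loss = model(x).pow(2).mean()
        loss.backward()                            # gradients are all-reduced across the 8 GPUs
        opt.step()
        opt.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()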

NVIDIA H200 SXM5 Tensor Core GPU

The NVIDIA H200 was the first GPU to offer 141 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s). That is nearly double the memory capacity of the NVIDIA H100 Tensor Core GPU and 1.4x more memory bandwidth.
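
As a rough illustration of what that capacity means for model hosting, the back-of-the-envelope sketch below estimates the weight footprint of a few example model sizes at different precisions. It counts weights only and ignores activations, optimizer state, and KV cache, all of which also consume GPU memory; the model sizes are illustrative, not benchmarks.

# Back-of-the-envelope: which model weights fit in 141 GB of HBM3e? (weights only, illustrative)
H200_MEMORY_GB = 141

def weight_footprint_gb(params_billions: float, bytes_per_param: int) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9    # = params_billions * bytes_per_param

for params_b in (8, 70, 175):                               # example parameter counts, in billions
    for precision, nbytes in (("FP16/BF16", 2), ("FP8", 1)):
        gb = weight_footprint_gb(params_b, nbytes)
        verdict = "fits on one GPU" if gb <= H200_MEMORY_GB else "needs multiple GPUs"
        print(f"{params_b}B params @ {precision}: {gb:.0f} GB of weights ({verdict})")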

Ideal Workloads for HGX H200 Servers

High Performance Computing

Unprecedented computational power for scientific research and simulations with large datasets and intricate calculations.

Deep Learning Training

Enabling faster and more accurate deep learning tasks for rapid advancements in artificial intelligence.

Language Processing

Empowering applications for tasks like sentiment analysis and language translation with remarkable precision.

Conversational AI

Enhancing the processing speed and efficiency of chatbots and virtual assistants for more engaging user experiences.

NVIDIA HGX H200 Specifications

GPU Architecture: NVIDIA Hopper architecture
FP64: 34 TFLOPS
FP64 Tensor Core: 67 TFLOPS
FP32: 67 TFLOPS
TF32 Tensor Core: 989 TFLOPS
BFLOAT16 Tensor Core: 1,979 TFLOPS
FP16 Tensor Core: 1,979 TFLOPS
FP8 Tensor Core: 3,958 TFLOPS
INT8 Tensor Core: 3,958 TOPS
GPU memory: 141GB
GPU memory bandwidth: 4.8TB/s
Decoders: 7 NVDEC | 7 JPEG
Max thermal design power (TDP): Up to 700W (configurable)
Multi-Instance GPU (MIG): Up to 7 MIGs @ 16.5GB each
Form factor: SXM
Interconnect: NVLink 900GB/s | PCIe Gen5 128GB/s
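
On a provisioned node, the per-GPU figures above can be sanity-checked with a short PyTorch query; this assumes PyTorch with CUDA support is installed on the server.

# List the GPUs visible on the node with their memory (assumes PyTorch with CUDA support).
import torch

assert torch.cuda.is_available(), "No CUDA-capable GPU visible"
for i in range(torch.cuda.device_count()):                  # expect 8 on an HGX H200 system
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB, {props.multi_processor_count} SMs")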