NOW AVAILABLE

NVIDIA HGX B300 NVL16 GPU Servers

Build AI factories that train faster and serve smarter with the next generation of NVIDIA HGX™ systems, powered by Blackwell Ultra accelerators and fifth-generation NVLink technology.

Why Choose NVIDIA HGX B300 NVL16

The HGX B300 NVL16 platform is a major leap in AI infrastructure, engineered for trillion-parameter model training, large-scale inference, and reasoning workloads.

16x
Compute Dies
2.3 TB
Total HBM3e Memory
800G
8x ConnectX-8 NICs
1.8 TB/s
GPU-to-GPU Bandwidth

Key Benefits of HGX B300 NVL16 GPU Servers

Train trillion-parameter models faster

with Blackwell Ultra GPUs and an NVL16 NVLink fabric that delivers 1.8 TB/s of GPU-to-GPU bandwidth within the node.

Serve AI at scale

with an architecture optimized for reasoning, inference, and high-throughput deployment in production clusters.

Future-proof your data center

with NVIDIA’s latest HGX™ reference design, ready for InfiniBand or Ethernet fabrics and scalable to multi-rack AI factories.

Ideal Workloads for HGX B300 NVL16 Servers

LLM Training at Scale

Fifth-generation NVLink fabric removes communication bottlenecks in multi-trillion-parameter pre-training and continual training, delivering faster time-to-accuracy.
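
As a rough illustration of how training frameworks use this intra-node fabric, the sketch below runs single-node data-parallel training with PyTorch's DistributedDataParallel on the NCCL backend, which carries gradient all-reduce traffic over NVLink. The model, batch size, and step count are placeholders of our own, not a tuned recipe for this platform.

```python
# Minimal single-node data-parallel sketch (assumes PyTorch with CUDA).
# Launch with: torchrun --standalone --nproc_per_node=8 train_sketch.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # NCCL routes intra-node GPU-to-GPU collectives over NVLink/NVSwitch.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; a real LLM would also use tensor/pipeline parallelism.
    model = torch.nn.Sequential(
        torch.nn.Linear(4096, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 4096),
    ).cuda()
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 4096, device="cuda")   # dummy batch
        loss = model(x).pow(2).mean()               # dummy objective
        loss.backward()                             # gradients all-reduced over NVLink
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```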

Inference and RAG

Blackwell Ultra GPUs enable low-latency, large-scale inference, powering retrieval-augmented generation and production AI deployments.
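
To make the retrieval-augmented generation pattern concrete, here is a minimal sketch of the retrieval and prompt-assembly step. The word-overlap scorer is a stand-in for a GPU-hosted embedding model, and the documents, query, and final generation call are illustrative assumptions rather than part of any NVIDIA API.

```python
# Schematic RAG retrieval step; scoring and generation are placeholders.
documents = [
    "HGX B300 NVL16 pairs Blackwell Ultra GPUs with fifth-generation NVLink.",
    "The platform offers up to 2.3 TB of HBM3e memory per system.",
    "ConnectX-8 NICs provide 800G networking for scale-out clusters.",
]

def score(query: str, doc: str) -> float:
    # Word-overlap stand-in for embedding similarity.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def retrieve(query: str, k: int = 2) -> list[str]:
    return sorted(documents, key=lambda doc: score(query, doc), reverse=True)[:k]

query = "How much HBM3e memory is available?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# In a real pipeline, prompt would be sent to an inference endpoint on the GPUs.
print(prompt)
```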

Agentic AI and Reasoning

Optimized for reasoning-heavy pipelines, the HGX™ B300 NVL16 supports autonomous agents and advanced decision-making models at scale.
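
As a schematic of what an agentic workload looks like at the software level, the sketch below runs a simple decide-act-observe loop. The stub policy, toy tools, and stopping rule are all assumptions for illustration; in a real deployment a served reasoning model would take the place of model_decide.

```python
# Schematic agent loop: model_decide is a stub standing in for an LLM served
# on the GPUs; the tools and stopping rule are illustrative assumptions only.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    # Toy calculator tool: arithmetic-only eval with builtins disabled.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "search": lambda q: f"(stub search results for: {q})",
}

def model_decide(history: list[str]) -> tuple[str, str]:
    # Placeholder policy; a real agent would query the reasoning model here.
    if not any(step.startswith("calculator:") for step in history):
        return "calculator", "2.3e12 / 8 / 1e9"  # 2.3 TB across 8 SXM packages -> GB each
    return "final", "done"

def run_agent(task: str, max_steps: int = 5) -> list[str]:
    history = [f"task: {task}"]
    for _ in range(max_steps):
        tool, arg = model_decide(history)
        if tool == "final":
            break
        history.append(f"{tool}: {arg} -> {TOOLS[tool](arg)}")
    return history

print(run_agent("Sanity-check the per-package memory figure."))
```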

Explore Configurations

NVIDIA HGX B300 NVL16 Server Options

Blackwell Cloud offers customizable server configurations built on the NVIDIA HGX™ B300 NVL16 baseboard with 8x Blackwell Ultra SXM GPUs, available from a range of OEMs.

NVIDIA HGX B300 NVL16 8U
Supermicro SuperServer SYS-822GS-NB3RT

GPU: 8x B300 288GB SXM
CPU: 2x Intel Xeon processors
Form Factor: 8U, air-cooled
Manufacturer: Supermicro
Starting at: $435,000 USD

NVIDIA HGX B300 NVL16 Specifications

Form Factor: 8x NVIDIA Blackwell Ultra SXM
FP4 Tensor Core: 144 PFLOPS | 105 PFLOPS
FP8/FP6 Tensor Core: 72 PFLOPS
INT8 Tensor Core: 2 POPS
FP16/BF16 Tensor Core: 36 PFLOPS
TF32 Tensor Core: 18 PFLOPS
FP32: 600 TFLOPS
FP64/FP64 Tensor Core: 10 TFLOPS
Total Memory: Up to 2.3 TB
NVLink: Fifth generation
NVIDIA NVSwitch™: NVLink 5 Switch
NVSwitch GPU-to-GPU Bandwidth: 1.8 TB/s
Total NVLink Bandwidth: 14.4 TB/s
Networking Bandwidth: 1.6 TB/s
Attention Performance: 2X
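
To put the 2.3 TB memory figure in context, the back-of-the-envelope sketch below estimates weight-only footprints for a few illustrative model sizes at different precisions. The chosen model sizes, and the assumption that only weights occupy HBM (no KV cache, activations, or optimizer state), are ours rather than part of the specification.

```python
# Back-of-the-envelope sizing against the 2.3 TB of HBM3e listed above.
# Ignores KV cache, activations, optimizer state, and framework overhead,
# so treat it as a rough feasibility check, not a capacity plan.
BYTES_PER_PARAM = {"fp16/bf16": 2.0, "fp8": 1.0, "fp4": 0.5}
TOTAL_HBM_TB = 2.3

def weight_footprint_tb(params_billions: float, precision: str) -> float:
    return params_billions * 1e9 * BYTES_PER_PARAM[precision] / 1e12

for params in (405, 1000, 2000):  # illustrative model sizes in billions of parameters
    for precision in ("fp16/bf16", "fp8", "fp4"):
        tb = weight_footprint_tb(params, precision)
        fits = "fits" if tb < TOTAL_HBM_TB else "needs multi-node sharding"
        print(f"{params}B @ {precision}: {tb:.2f} TB of weights -> {fits}")
```

At FP8, for example, a one-trillion-parameter model's weights alone come to roughly 1 TB, comfortably within a single system's 2.3 TB; larger models or full training state would shard across additional nodes over the 800G fabric.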

Explore Our High-Performance NVIDIA GPU Servers

NVIDIA HGX B300 NVL16 Servers

Build AI factories that train faster and serve smarter with the next generation of NVIDIA HGX™ systems, powered by Blackwell Ultra accelerators and fifth-generation NVLink technology.

NVIDIA HGX B200 Servers

Leverage the power of NVIDIA Blackwell GPUs for accelerated AI and HPC workloads, offering 15x faster inference and 12x lower energy consumption.

NVIDIA RTX PRO 6000 Servers

Unleash Blackwell architecture in your data center with RTX PRO 6000 Server Edition. Perfect for demanding AI visualization, digital twins, and 3D content creation workloads.

NVIDIA HGX H200 Servers

Experience enhanced memory capacity and bandwidth over H100, ideal for large-scale AI model training.

NVIDIA HGX H100 Servers

Optimize your infrastructure with NVIDIA's best-selling enterprise GPU, delivering unparalleled performance and scalability.