FP4 Inference: 144 petaFLOPS, System Memory: up to 4TB, Power Usage: 14.3kW max
The Foundation for your AI Factory. NVIDIA B200 is a next-generation AI server built for training and deploying the most demanding AI workloads. Powered by 8 Blackwell GPUs and fifth-generation NVLink interconnect, it delivers up to 3x the training and 15x the inference performance of its predecessor. Ideal for LLMs, recommender systems, and real-time inference applications, the B200 is a high-performance system designed for teams scaling production AI infrastructure with confidence.
GPU: 8x B200
GPU Memory: 1,440 GB
NVSwitch: 2x NVIDIA® NVSwitch™
FP8 Training: 72 petaFLOPS
FP4 Inference: 144 petaFLOPS
System Memory: up to 4TB
Power Usage: 14.3kW max
Warranty: 24 months
GPU: 8x NVIDIA Blackwell GPUs
GPU Memory: 1,440GB total GPU memory
Performance (FP8 Training): 72 petaFLOPS
Performance (FP4 Inference): 144 petaFLOPS
Power Consumption: ~14.3kW max
CPU: 2x Intel® Xeon® Platinum 8570 Processors
CPU Cores: 112 cores total, 2.1 GHz (base)
CPU Max Boost: 4 GHz
System Memory: Up to 4TB
OS Storage: 2x 1.9TB NVMe M.2
Internal Storage: 8x 3.84TB NVMe U.2
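As a quick sanity check on the headline figures, the system-wide totals above can be broken down per GPU (a back-of-the-envelope sketch; it assumes memory and compute are distributed evenly across the 8 Blackwell GPUs):

```python
# Per-GPU breakdown of the system specs listed above.
# Assumption: totals are split evenly across the 8 GPUs.

NUM_GPUS = 8
TOTAL_GPU_MEMORY_GB = 1440       # 1,440 GB total GPU memory
FP8_TRAINING_PFLOPS = 72         # system-wide FP8 training throughput
FP4_INFERENCE_PFLOPS = 144       # system-wide FP4 inference throughput

per_gpu_memory_gb = TOTAL_GPU_MEMORY_GB / NUM_GPUS     # 180 GB per GPU
per_gpu_fp8_pflops = FP8_TRAINING_PFLOPS / NUM_GPUS    # 9 petaFLOPS per GPU
per_gpu_fp4_pflops = FP4_INFERENCE_PFLOPS / NUM_GPUS   # 18 petaFLOPS per GPU

# FP4 throughput is exactly double FP8, consistent with halving the
# datatype width while keeping the same compute pipeline busy.
fp4_to_fp8_ratio = FP4_INFERENCE_PFLOPS / FP8_TRAINING_PFLOPS  # 2.0

print(per_gpu_memory_gb, per_gpu_fp8_pflops, per_gpu_fp4_pflops)
```

The 180 GB-per-GPU figure is what sizes the largest model shard each GPU can hold before NVLink-attached peers must be used.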