NVIDIA H200 Tensor Core GPU

Contact for price*

In Stock

FP8 Performance: 4 PetaFLOPS, LLM Inference: up to 2x (vs. H100), HPC Performance: up to 110x (vs. CPU-based servers)

Architecture: Hopper

CUDA Cores: 16,896

Tensor Cores: 528

GPU Memory: 141GB of HBM3e

Memory Bandwidth: 4.8TB/s


Overview

The NVIDIA H200 is a cutting-edge GPU designed for AI, machine learning, and high-performance computing (HPC) workloads. Built on the Hopper architecture, it pairs high computational throughput with expanded memory capacity and bandwidth, making it well suited to generative AI, large language models (LLMs), and complex scientific simulations. The H200 is the first GPU to use HBM3e memory, whose higher bandwidth and larger capacity allow bigger and more complex workloads to be processed. It is engineered to meet the growing demands of data-intensive applications across industries.
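To make the headline memory figures concrete, here is a back-of-envelope sketch in Python. It checks whether an illustrative 70B-parameter model's weights fit in 141GB of HBM3e, and what 4.8TB/s of bandwidth implies as a lower bound on per-token latency for memory-bound decoding. The 70B model size and the one-weight-sweep-per-token model are simplifying assumptions for illustration, not benchmarks.

```python
# Back-of-envelope sizing against the H200's headline specs.
# Capacity and bandwidth figures come from this spec sheet;
# the 70B-parameter model is an illustrative assumption.

HBM_CAPACITY_GB = 141        # HBM3e capacity
HBM_BANDWIDTH_GBS = 4800     # 4.8 TB/s expressed in GB/s

def model_weight_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB, ignoring KV cache and activations."""
    return params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes/GB

params = 70  # e.g. a 70B-parameter LLM (illustrative)
for name, bpp in [("FP16", 2.0), ("FP8", 1.0)]:
    gb = model_weight_gb(params, bpp)
    print(f"{name}: {gb:.0f} GB -> fits in {HBM_CAPACITY_GB} GB: "
          f"{gb <= HBM_CAPACITY_GB}")

# Lower bound on decode latency if every weight is read once per token
# (i.e. a purely memory-bound workload at peak bandwidth):
fp8_gb = model_weight_gb(params, 1.0)
ms_per_token = fp8_gb / HBM_BANDWIDTH_GBS * 1000
print(f"FP8 weight sweep: {ms_per_token:.1f} ms/token minimum")
```

Real inference adds KV cache, activations, and framework overhead on top of this floor, but the sketch shows why the combination of large capacity and high bandwidth matters for LLM serving.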

Graphics Processor: GH200

Memory Size: 141GB

Memory Type: HBM3e

Memory Bandwidth: 4.8TB/s

FP8 Performance: 4 PetaFLOPS

LLM Inference: 2x

HPC Performance: 110x

Warranty: 24 Months

Specifications

H200 SXM

FP64: 34 TFLOPS

FP64 Tensor Core: 67 TFLOPS

FP32: 67 TFLOPS

TF32 Tensor Core: 989 TFLOPS

BFLOAT16 Tensor Core: 1,979 TFLOPS

FP16 Tensor Core: 1,979 TFLOPS

FP8 Tensor Core: 3,958 TFLOPS

INT8 Tensor Core: 3,958 TOPS

GPU Memory: 141GB

GPU Memory Bandwidth: 4.8TB/s

Decoders: 7 NVDEC, 7 JPEG

Confidential Computing: Supported

Max Thermal Design Power (TDP): Up to 700W (configurable)

Multi-Instance GPUs: Up to 7 MIGs @ 16.5GB each

Form Factor: SXM

Interconnect: NVIDIA NVLink™: 900GB/s; PCIe Gen5: 128GB/s

Server Options: NVIDIA HGX™ H200 partner and NVIDIA-Certified Systems™ with 4 or 8 GPUs

NVIDIA AI Enterprise: Add-on
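Both variants support partitioning into up to seven Multi-Instance GPU (MIG) slices. As a hedged admin sketch (not run here), setup with `nvidia-smi` looks roughly like the following; the profile name in step 3 is an assumed example, and you should use whatever profiles your driver actually reports in step 2.

```shell
# Sketch of MIG partitioning on an H200; run as root.
# Profile names below are assumptions -- always list what your driver offers.

# 1. Enable MIG mode on GPU 0 (may require draining workloads / a GPU reset).
nvidia-smi -i 0 -mig 1

# 2. List the GPU instance profiles the driver offers (sizes and profile IDs).
nvidia-smi mig -lgip

# 3. Create two GPU instances plus default compute instances (-C).
#    "1g.18gb" is a hypothetical 1/7-slice profile name; substitute a real one
#    from the output of step 2.
nvidia-smi mig -cgi 1g.18gb,1g.18gb -C

# 4. Verify the resulting MIG devices.
nvidia-smi -L
```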

H200 NVL

FP64: 34 TFLOPS

FP64 Tensor Core: 67 TFLOPS

FP32: 67 TFLOPS

TF32 Tensor Core: 989 TFLOPS

BFLOAT16 Tensor Core: 1,979 TFLOPS

FP16 Tensor Core: 1,979 TFLOPS

FP8 Tensor Core: 3,958 TFLOPS

INT8 Tensor Core: 3,958 TOPS

GPU Memory: 141GB

GPU Memory Bandwidth: 4.8TB/s

Decoders: 7 NVDEC, 7 JPEG

Confidential Computing: Supported

Max Thermal Design Power (TDP): Up to 600W (configurable)

Multi-Instance GPUs: Up to 7 MIGs @ 16.5GB each

Form Factor: PCIe

Interconnect: 2- or 4-way NVIDIA NVLink™ bridge: 900GB/s; PCIe Gen5: 128GB/s

Server Options: NVIDIA MGX™ H200 NVL partner and NVIDIA-Certified Systems™ with up to 8 GPUs

NVIDIA AI Enterprise: Included
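The interconnect figures above imply very different data-movement budgets for multi-GPU work. A small illustrative comparison (idealized peak bandwidth, ignoring latency and protocol overhead; the payload size is an assumption chosen for scale):

```python
# Idealized transfer-time comparison using the interconnect figures on this
# page: NVLink at 900 GB/s vs PCIe Gen5 at 128 GB/s. Payload is illustrative.

NVLINK_GBS = 900
PCIE_GEN5_GBS = 128

def transfer_seconds(payload_gb: float, bandwidth_gbs: float) -> float:
    """Best-case transfer time, ignoring latency and protocol overhead."""
    return payload_gb / bandwidth_gbs

payload = 141  # moving one full HBM3e capacity's worth of data, for scale
t_nvlink = transfer_seconds(payload, NVLINK_GBS)
t_pcie = transfer_seconds(payload, PCIE_GEN5_GBS)
print(f"NVLink: {t_nvlink:.3f} s, PCIe Gen5: {t_pcie:.3f} s "
      f"({t_pcie / t_nvlink:.1f}x slower)")
```

The bandwidth ratio (900/128, roughly 7x) is why tightly coupled multi-GPU training favors the NVLink-connected SXM form factor, while the NVL variant's NVLink bridges recover much of that bandwidth between paired PCIe cards.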

Interested in buying this GPU?


*Estimated prices, not final
