RESERVED H200

NVIDIA H200 141GB

The H200 pairs Hopper compute with 141GB of HBM3e memory, which is decisive for KV-cache-heavy inference and memory-bound training. Reserved capacity gives you priority over on-demand traffic during shortage events, which matters given supply tightness through 2026.
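To see why the 141GB figure matters for KV-cache-heavy inference, here is a minimal sketch of the standard per-token KV-cache memory formula. The model configuration (80 layers, 8 grouped-query KV heads, head dim 128, FP16 cache) is an assumed Llama-3-70B-class example for illustration, not a figure from this page.

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    """Estimate KV-cache size: 2 tensors (K and V) per layer, per KV head,
    per token, at dtype_bytes per element (2 for FP16)."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * dtype_bytes

# Assumed Llama-3-70B-class config: 80 layers, 8 KV heads (GQA), head dim 128
per_seq = kv_cache_bytes(80, 8, 128, seq_len=131_072, batch=1)
print(f"{per_seq / 2**30:.1f} GiB per 128k-token sequence")  # prints "40.0 GiB per 128k-token sequence"
```

Under these assumptions a single 128k-token sequence consumes about 40 GiB of cache, so after model weights only a handful of long-context sequences fit concurrently even in 141GB, which is exactly the regime where the extra HBM3e capacity pays off.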

AVAILABLE TERM LENGTHS

RESERVED H200 — Indicative Range (Q1 2026)

1MO

3MO

6MO

12MO

24MO

36MO

All term lengths are available, but supply is tighter than for the H100. Longer commits typically secure capacity faster; providers prioritize 12-month-plus reservations during allocation cycles.

TECHNICAL SPECIFICATIONS

VRAM: 141 GB HBM3e
MEMORY BANDWIDTH: 4.8 TB/s
FP16 TENSOR: 1,979 TFLOPS (sparse)
FP8 TENSOR: 3,958 TFLOPS (sparse)
TDP: 700W
FORM FACTOR: SXM5
INTERCONNECT: NVLink 4.0 / NVSwitch (900 GB/s)
ARCHITECTURE: Hopper

Partner Network

AGGREGATED ACROSS LEADING NEOCLOUDS

Compute Exchange aggregates reserved capacity from a verified network of leading AI-native cloud providers and hyperscalers. All partners undergo identity, capacity, SLA, and operational verification before quotes surface on the network.

You receive a normalized comparison across providers in a single quote response rather than evaluating each neocloud's contract structure, billing model, and SLA terms in isolation. Compute Exchange stays neutral; we do not operate compute capacity ourselves.

WORKLOAD FIT

RESERVED H200 USE CASES

01 Large-context LLM inference
02 Memory-bound training
03 Retrieval-augmented generation at scale
04 Long-context fine-tuning

WHY RESERVE

RESERVED H200 VS ON-DEMAND

H200 supply is allocation-constrained through 2026, and on-demand availability is often spotty. Reserved capacity gives you guaranteed scheduling and locks in provider commitments before on-demand pricing volatility hits during peak inference cycles.

FREQUENTLY ASKED QUESTIONS

RESERVED H200 KEY QUESTIONS

What term lengths are available for H200?

When is H200 reserved worth choosing over H100 SXM5 reserved?

How does H200 reserved availability compare to H100?

Will H200 supply expand through 2026?

READY TO RESERVE?

GET A LIVE RESERVED H200 QUOTE

Compute Exchange returns indicative pricing within 24 hours, anchored to your specific quantity, region, and condition. We do not publish active counterparty listings.

COMPUTE EXCHANGE

The transparent GPU marketplace for AI infrastructure. Built for builders.

ALL SYSTEMS OPERATIONAL

© 2026 COMPUTE EXCHANGE

BUILT FOR THE AI ERA
