GPU Marketplace

Search, filter, and ask the AI to recommend a plan based on your needs.

AWS · GPU Cloud
EC2 GPU Instances · Enterprise GPU compute for training and inference.

g5 (A10G 24GB x1) · varies/hour
GPU: A10G · Count: 1 · VRAM: 24GB
  • Good value for inference + light training
g6 (L4 24GB x1) · varies/hour
GPU: L4 · Count: 1 · VRAM: 24GB
  • Modern inference GPU (excellent perf/$)
p4d (A100 40GB x8) · varies/hour
GPU: A100 · Count: 8 · VRAM: 40GB
  • High-throughput training (multi-GPU)
p5 (H100 80GB x8) · varies/hour
GPU: H100 · Count: 8 · VRAM: 80GB
  • Frontier training (H100 class)
Google Cloud · GPU Cloud
Compute Engine GPUs · A2/A3 GPU families for training and inference.

L4 24GB x1 · varies/hour
GPU: L4 · Count: 1 · VRAM: 24GB
  • Great for inference and small fine-tunes
A100 40GB x1 · varies/hour
GPU: A100 · Count: 1 · VRAM: 40GB
  • Training & larger fine-tunes
A100 80GB x1 · varies/hour
GPU: A100 · Count: 1 · VRAM: 80GB
  • More VRAM for larger batches / longer context
H100 80GB x1 · varies/hour
GPU: H100 · Count: 1 · VRAM: 80GB
  • A3/H100 for top-end training
Microsoft Azure · GPU Cloud
NV/NC/ND GPU VMs · Enterprise GPU instances across inference and training tiers.

NV (Inference tier — varies) · varies/hour
GPU: Inference GPU (varies) · Count: 1 · VRAM: 24GB
  • Inference-oriented GPU VM families
NC (Compute tier — varies) · varies/hour
GPU: Training GPU (varies) · Count: 1 · VRAM: 48GB
  • Training-capable GPU VM families
ND (A100 class — varies) · varies/hour
GPU: A100-class · Count: 1 · VRAM: 80GB
  • A100-class training VMs (ND series)
ND (H100 class — varies) · varies/hour
GPU: H100-class · Count: 1 · VRAM: 80GB
  • H100-class training (region-dependent)
Lambda Labs · GPU Cloud
Lambda GPU Cloud · Popular for cost-effective training and fine-tuning.

A10 24GB x1 · varies/hour
GPU: A10 · Count: 1 · VRAM: 24GB
  • Budget inference / light workloads
L4 24GB x1 · varies/hour
GPU: L4 · Count: 1 · VRAM: 24GB
  • Efficient inference
A100 40GB x1 · varies/hour
GPU: A100 · Count: 1 · VRAM: 40GB
  • Training & fine-tuning
A100 80GB x1 · varies/hour
GPU: A100 · Count: 1 · VRAM: 80GB
  • More VRAM for larger jobs
H100 80GB x1 · varies/hour
GPU: H100 · Count: 1 · VRAM: 80GB
  • Top-end training (availability varies)
RunPod · GPU Cloud
GPU Pods · On-demand and spot GPU pods.

RTX 4090 24GB x1 · varies/hour
GPU: RTX 4090 · Count: 1 · VRAM: 24GB
  • Great perf/$ for many workloads
L40S 48GB x1 · varies/hour
GPU: L40S · Count: 1 · VRAM: 48GB
  • Strong inference + medium training
A100 40GB x1 · varies/hour
GPU: A100 · Count: 1 · VRAM: 40GB
  • Training / fine-tuning
A100 80GB x1 · varies/hour
GPU: A100 · Count: 1 · VRAM: 80GB
  • More VRAM for big batches
H100 80GB x1 · varies/hour
GPU: H100 · Count: 1 · VRAM: 80GB
  • Frontier training (availability varies)
Vast.ai · GPU Cloud
GPU Marketplace · Marketplace pricing across many GPU types.

RTX 3090 24GB x1 · varies/hour
GPU: RTX 3090 · Count: 1 · VRAM: 24GB
  • Marketplace rate varies by host
RTX 4090 24GB x1 · varies/hour
GPU: RTX 4090 · Count: 1 · VRAM: 24GB
  • Often strong value
A100 40GB x1 · varies/hour
GPU: A100 · Count: 1 · VRAM: 40GB
  • Training marketplace nodes
A100 80GB x1 · varies/hour
GPU: A100 · Count: 1 · VRAM: 80GB
  • Higher VRAM nodes (availability varies)
H100 80GB x1 · varies/hour
GPU: H100 · Count: 1 · VRAM: 80GB
  • Rare but possible on marketplace
CoreWeave · GPU Cloud
GPU Cloud · Enterprise GPU cloud for large-scale AI.

L40S 48GB x1 · varies/hour
GPU: L40S · Count: 1 · VRAM: 48GB
  • Strong inference / mid-tier training
A100 80GB x1 · varies/hour
GPU: A100 · Count: 1 · VRAM: 80GB
  • High VRAM training tier
H100 80GB x1 · varies/hour
GPU: H100 · Count: 1 · VRAM: 80GB
  • Frontier training
Paperspace · GPU Cloud
Gradient · Simple GPU machines for developers.

T4 16GB x1 · varies/hour
GPU: T4 · Count: 1 · VRAM: 16GB
  • Budget inference / prototyping
A4000 16GB x1 · varies/hour
GPU: RTX A4000 · Count: 1 · VRAM: 16GB
  • General-purpose GPU option
A5000 24GB x1 · varies/hour
GPU: RTX A5000 · Count: 1 · VRAM: 24GB
  • More VRAM for bigger models
A100 80GB x1 · varies/hour
GPU: A100 · Count: 1 · VRAM: 80GB
  • Training tier (availability varies)
OVHcloud · GPU Cloud
Bare Metal GPU · European GPU hosting options.

RTX 4090 24GB (varies) · varies/hour
GPU: RTX 4090 · Count: 1 · VRAM: 24GB
  • Offering varies by region/stock
L40S 48GB (varies) · varies/hour
GPU: L40S · Count: 1 · VRAM: 48GB
  • Offering varies by region/stock
A100 80GB (varies) · varies/hour
GPU: A100 · Count: 1 · VRAM: 80GB
  • Offering varies by region/stock
H100 80GB (varies) · varies/hour
GPU: H100 · Count: 1 · VRAM: 80GB
  • Offering varies by region/stock
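Each listing above reduces to the same structured record (provider, plan name, GPU model, count, VRAM, price). A minimal sketch in Python of the kind of filtering the search box performs; the field names and the small sample catalog below are illustrative, not an actual CloudMart API:

```python
# Minimal filtering sketch over a GPU plan catalog.
# PLANS is a small sample transcribed from the listing above;
# the record fields (provider, plan, gpu, count, vram_gb) are
# illustrative, not a real marketplace API.
PLANS = [
    {"provider": "AWS", "plan": "g5 (A10G 24GB x1)", "gpu": "A10G", "count": 1, "vram_gb": 24},
    {"provider": "AWS", "plan": "p5 (H100 80GB x8)", "gpu": "H100", "count": 8, "vram_gb": 80},
    {"provider": "Google Cloud", "plan": "A100 80GB x1", "gpu": "A100", "count": 1, "vram_gb": 80},
    {"provider": "RunPod", "plan": "RTX 4090 24GB x1", "gpu": "RTX 4090", "count": 1, "vram_gb": 24},
]

def match_plans(plans, min_vram_gb=0, gpu=None, max_count=None):
    """Return plans with at least min_vram_gb per GPU, optionally
    restricted to one GPU model and a maximum GPU count."""
    out = []
    for p in plans:
        if p["vram_gb"] < min_vram_gb:
            continue
        if gpu is not None and p["gpu"] != gpu:
            continue
        if max_count is not None and p["count"] > max_count:
            continue
        out.append(p)
    return out

# Example: single-GPU plans with 80GB of VRAM for a large fine-tune.
for p in match_plans(PLANS, min_vram_gb=80, max_count=1):
    print(p["provider"], "·", p["plan"])  # prints: Google Cloud · A100 80GB x1
```

Since every listed price is "varies", a price ceiling is omitted here; once a provider publishes an hourly rate, the same pattern extends with one more field and comparison.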
CloudMart AI
What are you building?

Include your workload, GPU type preference, VRAM needs, budget, and expected runtime.

More plans are being added soon! · Bot powered by xAI