Real-time NVIDIA A100 80GB cloud pricing from 15 providers. Cheapest on-demand: $0.99/hr (Shadeform). Updated daily by GridStackHub.
Sorted by cheapest per-GPU hourly rate. Includes on-demand, spot, and reserved pricing where available.
| Provider | GPU / Instance | Price/hr | Type | Region | VRAM | Updated |
|---|---|---|---|---|---|---|
| Vast.ai | A100 80GB (marketplace) | $0.89 | Spot | Various | 80 GB | 2026-04-12 |
| Shadeform | A100 80GB | $0.99 | On-Demand | Various | 80 GB | 2026-04-12 |
| TensorDock | A100 SXM 80GB | $1.15 | On-Demand | US/EU | 80 GB | 2026-04-12 |
| FluidStack | A100 SXM 80GB | $1.21 | On-Demand | US/EU | 80 GB | 2026-04-12 |
| Lambda | 1x A100 SXM 80GB | $1.29 | On-Demand | US | 80 GB | 2026-04-12 |
| Jarvis Labs | A100 80GB | $1.29 | On-Demand | US/EU | 80 GB | 2026-04-12 |
| Oblivus Cloud | A100 80GB | $1.35 | On-Demand | US | 80 GB | 2026-04-12 |
| DataCrunch | A100 SXM4 80GB | $1.40 | On-Demand | EU (Finland) | 80 GB | 2026-04-12 |
| CoreWeave | A100 SXM 80GB | $1.62 | On-Demand | US | 80 GB | 2026-04-12 |
| RunPod | A100 SXM 80GB | $1.64 | On-Demand | US/EU | 80 GB | 2026-04-12 |
| Paperspace | A100 80GB SXM | $2.07 | On-Demand | US | 80 GB | 2026-04-12 |
| Vultr | Cloud GPU A100 80GB | $2.43 | On-Demand | US | 80 GB | 2026-04-12 |
| Google Cloud | a2-ultragpu-8g (8× A100) | $3.67 ($29.39 per 8-GPU node) | On-Demand | us-central1 | 80 GB | 2026-04-12 |
| Azure | ND A100 v4 (8× A100) | $4.10 ($32.77 per 8-GPU node) | On-Demand | East US | 80 GB | 2026-04-12 |
| AWS | p4de.24xlarge (8× A100) | $5.12 ($40.97 per 8-GPU node) | On-Demand | us-east-1 | 80 GB | 2026-04-12 |
Key hardware specifications for the NVIDIA A100 80GB.
| Specification | Value |
|---|---|
| Architecture | Ampere (SXM) |
| VRAM | 80 GB HBM2e |
| Memory Bandwidth | 2.0 TB/s |
| BF16 Throughput | 312 TFLOPS (dense) |
| INT8 Throughput | 624 TOPS (dense) |
| GPU-to-GPU Bandwidth | 600 GB/s (NVLink) |
| TDP | 400W |
| NVLink Generation | 3rd Gen |
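The ratio of compute throughput to memory bandwidth in the spec table determines when an A100 kernel is compute-bound versus memory-bound. A minimal roofline sketch using only the figures above (the break-even threshold is an approximation that ignores caches and overlap):

```python
# Roofline break-even point for the A100 80GB, derived from the
# spec table: a kernel is compute-bound only if it performs more
# FLOPs per byte of HBM traffic than this ratio.

BF16_TFLOPS = 312   # dense BF16 Tensor Core throughput, from spec table
MEM_BW_TBPS = 2.0   # HBM2e memory bandwidth in TB/s, from spec table

# Arithmetic intensity (FLOPs per byte) at which compute and
# memory limits meet
break_even = (BF16_TFLOPS * 1e12) / (MEM_BW_TBPS * 1e12)
print(f"Break-even intensity: {break_even:.0f} FLOPs/byte")  # 156
```

Below roughly 156 FLOPs/byte (typical of small-batch inference), the 2.0 TB/s memory bandwidth is the binding constraint rather than the 312 TFLOPS peak.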
The NVIDIA A100 80GB is the proven GPU for mid-range AI workloads in 2026. With 80 GB of HBM2e memory and 312 TFLOPS of BF16 compute, it handles fine-tuning runs, inference on models up to 70B parameters (with quantization), and smaller training jobs that don't require H100-class throughput.
A100 80GB pricing has declined as H100 supply expanded. Shadeform offers the cheapest on-demand A100 80GB at $0.99/hr, and Lambda's $1.29/hr rate is roughly 65% of its H100 on-demand rate for approximately 40% of H100 throughput. For cost-sensitive inference workloads at low-to-moderate volume, the A100 80GB is often the right choice.
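The price-versus-throughput trade-off above can be checked with back-of-envelope arithmetic. Note the H100 hourly rate here is inferred from the stated 65% ratio, not quoted from any provider:

```python
# Cost-efficiency sketch: $/hr normalized by relative throughput.
# h100_price is INFERRED from the "65% of the H100 rate" claim,
# not a real provider quote.

a100_price = 1.29              # $/hr, Lambda on-demand (from table)
h100_price = a100_price / 0.65 # ~= $1.98/hr, inferred assumption
a100_rel_throughput = 0.40     # A100 ~= 40% of H100 throughput (stated)

# Dollars per H100-equivalent throughput-hour
a100_cost_per_perf = a100_price / a100_rel_throughput  # ~= $3.23
h100_cost_per_perf = h100_price / 1.0                  # ~= $1.98

print(f"A100: ${a100_cost_per_perf:.2f} vs H100: ${h100_cost_per_perf:.2f} "
      "per H100-equivalent hour")
```

Under these numbers the H100 is actually cheaper per unit of throughput; the A100 wins on absolute hourly cost, which matters when a workload can't saturate an H100 anyway.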
The 80GB variant is meaningfully different from the A100 40GB: the extra VRAM accommodates 70B-parameter models in 8-bit quantization (FP16 weights for a 70B model run roughly 140 GB and still require two GPUs) and enables larger batch sizes for training efficiency. For anything running models above roughly 30B parameters, 80 GB is effectively required.
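The weight-footprint arithmetic behind these VRAM thresholds is simple: parameter count times bytes per parameter, plus headroom for KV cache and activations. A rough sketch (figures are illustrative, not a deployment guarantee):

```python
# Rough model weight footprint: params * bytes/param.
# Real deployments need extra headroom for KV cache, activations,
# and framework overhead, so treat these as lower bounds.

def weight_gb(params_billions: float, bytes_per_param: float) -> float:
    """Weight footprint in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for params in (13, 30, 70):
    fp16 = weight_gb(params, 2.0)  # FP16/BF16: 2 bytes per parameter
    int8 = weight_gb(params, 1.0)  # 8-bit quantized: 1 byte per parameter
    note = "fits" if int8 < 80 else "exceeds"
    print(f"{params}B model: {fp16:.0f} GB FP16, {int8:.0f} GB INT8 "
          f"({note} 80 GB at INT8, before KV cache)")
```

This is why a 30B model fits a single A100 80GB in FP16 (~60 GB of weights) while a 70B model only fits after quantizing to 8 bits (~70 GB), with little room left for KV cache at long context lengths.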
Spot/marketplace pricing on Vast.ai ($0.89/hr) makes the A100 80GB one of the most accessible high-VRAM GPUs for price-sensitive teams. FluidStack ($1.21/hr) and CoreWeave ($1.62/hr) round out the on-demand options below $2/hr. AWS, Azure, and Google Cloud run $3–$5/hr per GPU because they sell the A100 only in 8-GPU instances, with the attendant node-level overhead.
Use GridStackHub's GPU cost calculator to get a ranked comparison with hidden-cost breakdown (egress + storage) across all providers.