⚡ Live Pricing

Rent NVIDIA A100 SXM 80GB: All Cloud Providers & Prices (April 2026)

Real-time NVIDIA A100 80GB cloud pricing from 15 providers. Cheapest on-demand: $0.99/hr (Shadeform). Updated daily by GridStackHub.

Last updated: 2026-04-24 — 15 pricing records
- Cheapest On-Demand: $0.99/hr (Shadeform)
- Cheapest Reserved (1yr): N/A (not widely available)
- Cheapest Spot: $0.89/hr (Vast.ai, interruptible)
- Providers Available: 15 (active 2026-04-24)

NVIDIA A100 80GB Cloud Pricing — All Providers

Sorted by cheapest per-GPU hourly rate. Includes on-demand, spot, and reserved pricing where available.

| Provider | SKU | Price/hr (per GPU) | Type | Region | VRAM | Updated |
|---|---|---|---|---|---|---|
| Vast.ai (lowest) | A100 80GB (marketplace) | $0.89 | Spot | Various | 80 GB | 2026-04-12 |
| Shadeform | A100 80GB (best price) | $0.99 | On-Demand | Various | 80 GB | 2026-04-12 |
| TensorDock | A100 SXM 80GB | $1.15 | On-Demand | US/EU | 80 GB | 2026-04-12 |
| FluidStack | A100 SXM 80GB | $1.21 | On-Demand | US/EU | 80 GB | 2026-04-12 |
| Lambda | 1× A100 SXM 80GB | $1.29 | On-Demand | US | 80 GB | 2026-04-12 |
| Jarvis Labs | A100 80GB | $1.29 | On-Demand | US/EU | 80 GB | 2026-04-12 |
| Oblivus Cloud | A100 80GB | $1.35 | On-Demand | US | 80 GB | 2026-04-12 |
| DataCrunch | A100 SXM4 80GB | $1.40 | On-Demand | EU (Finland) | 80 GB | 2026-04-12 |
| CoreWeave | A100 SXM 80GB | $1.62 | On-Demand | US | 80 GB | 2026-04-12 |
| RunPod | A100 SXM 80GB | $1.64 | On-Demand | US/EU | 80 GB | 2026-04-12 |
| Paperspace | A100 80GB SXM | $2.07 | On-Demand | US | 80 GB | 2026-04-12 |
| Vultr | Cloud GPU A100 80GB | $2.43 | On-Demand | US | 80 GB | 2026-04-12 |
| Google Cloud | a2-ultragpu-8g (8× A100) | $3.67 ($29.39 for 8× node) | On-Demand | us-central1 | 80 GB | 2026-04-12 |
| Azure | ND A100 v4 (8× A100) | $4.10 ($32.77 for 8× node) | On-Demand | East US | 80 GB | 2026-04-12 |
| AWS | p4de.24xlarge (8× A100) | $5.12 ($40.97 for 8× node) | On-Demand | us-east-1 | 80 GB | 2026-04-12 |

NVIDIA A100 SXM 80GB Specifications

Key hardware specifications for the NVIDIA A100 80GB.

- Architecture: Ampere (SXM)
- VRAM: 80 GB HBM2e
- Memory Bandwidth: 2.0 TB/s
- BF16 Throughput: 312 TFLOPS
- INT8 Throughput: 624 TOPS (the A100 has no FP8 Tensor Cores; FP8 arrived with Hopper)
- GPU-to-GPU Bandwidth: 600 GB/s (3rd Gen NVLink, via NVSwitch in SXM systems)
- TDP: 400W

About the NVIDIA A100 80GB

The NVIDIA A100 80GB is the proven GPU for mid-range AI workloads in 2026. With 80GB HBM2e memory and 312 TFLOPS BF16 compute, it handles fine-tuning runs, inference on models up to 70B parameters, and smaller training jobs that don't require H100-class throughput.

A100 80GB pricing has declined as H100 supply expanded. Shadeform offers the cheapest on-demand A100 80GB at $0.99/hr — roughly half the H100 on-demand rate for approximately 40% of H100 throughput. For cost-sensitive inference workloads at low-to-moderate volume, A100 80GB is often the correct choice.

The 80GB variant is meaningfully different from the A100 40GB: the extra VRAM fits models up to roughly 30–35B parameters in FP16 (weights alone need about 2 bytes per parameter), accommodates 70B-class models with 8-bit quantization, and enables larger batch sizes for training efficiency. For anything above roughly 20B parameters in FP16, the 40GB card is too small and 80GB is the practical minimum on a single GPU.
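The fit rule above is simple arithmetic: weight memory is parameters × bytes per parameter, with some headroom for KV cache and activations. A minimal sketch — the 1.1× overhead factor is an illustrative assumption, not a measured figure:

```python
def fits_in_vram(params_b: float, bytes_per_param: float,
                 vram_gb: float = 80.0, overhead: float = 1.1) -> bool:
    """Rough single-GPU fit check for model weights plus headroom.

    params_b: model size in billions of parameters
    bytes_per_param: 2.0 for FP16/BF16, 1.0 for INT8, 0.5 for 4-bit
    overhead: assumed multiplier for KV cache / activations (illustrative)
    """
    weight_gb = params_b * bytes_per_param  # 1B params at 1 byte ≈ 1 GB
    return weight_gb * overhead <= vram_gb

print(fits_in_vram(30, 2.0))              # 30B FP16: ~66 GB with headroom
print(fits_in_vram(70, 2.0))              # 70B FP16: ~154 GB, needs multi-GPU
print(fits_in_vram(70, 1.0))              # 70B INT8: ~77 GB, fits (tightly)
print(fits_in_vram(20, 2.0, vram_gb=40))  # 20B FP16 on a 40GB card: too tight
```

Real serving stacks add framework overhead and batch-dependent KV cache, so treat this as a first-pass filter before provisioning.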

Spot/marketplace pricing on Vast.ai ($0.89/hr) makes the A100 80GB one of the most accessible high-VRAM GPUs for price-sensitive teams. Shadeform ($0.99/hr), TensorDock ($1.15/hr), FluidStack ($1.21/hr), and CoreWeave ($1.62/hr) round out on-demand options below $2/hr. AWS, Azure, and Google Cloud run $3–$5/hr per GPU because the A100 is only offered in 8-GPU instances with node-level overhead.

Frequently Asked Questions

What is the cheapest A100 80GB cloud provider?
The cheapest NVIDIA A100 80GB cloud provider as of April 2026 is Vast.ai at $0.89/hr (spot/marketplace). The cheapest on-demand option is Shadeform at $0.99/hr, followed by TensorDock at $1.15/hr, FluidStack at $1.21/hr, and Lambda at $1.29/hr; RunPod on-demand is $1.64/hr. Hyperscalers offer the A100 80GB only in 8-GPU nodes at $3–$5/hr per GPU normalized. Spot availability on Vast.ai and RunPod community varies with marketplace supply.
How much does an A100 80GB cost per month?
At $0.99/hr on Shadeform (cheapest on-demand), running a single A100 80GB 24/7 for 30 days costs approximately $713/month. FluidStack at $1.21/hr runs $871/month, Lambda at $1.29/hr runs $929/month, and CoreWeave at $1.62/hr runs $1,166/month. Vast.ai spot at $0.89/hr (if consistently available) runs $641/month but is interruptible. For teams that tolerate interruption, A100 80GB on spot is the most cost-effective high-VRAM option available.
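The monthly figures above are just the hourly rate × 720 hours (24 × 30 days), rounded to the nearest dollar:

```python
def monthly_cost(hourly_rate: float, hours: int = 720) -> int:
    """Cost of running one GPU 24/7 for 30 days (720 hours)."""
    return round(hourly_rate * hours)

rates = [("Shadeform", 0.99), ("FluidStack", 1.21), ("Lambda", 1.29),
         ("CoreWeave", 1.62), ("Vast.ai spot", 0.89)]
for provider, rate in rates:
    print(f"{provider}: ${monthly_cost(rate):,}/month")
```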
A100 80GB vs H100 SXM5: which should I rent?
H100 SXM5 delivers roughly 2.5–3× better training throughput than A100 for compute-bound jobs, and about 1.5× better inference throughput. H100 starts at $1.49/hr spot vs $0.89/hr for A100 80GB spot. For fine-tuning on models up to 30B parameters, A100 is often adequate and meaningfully cheaper. For large training runs where wall-clock time matters, H100's throughput advantage typically justifies the cost premium. Use GridStackHub's cost calculator to find the break-even for your workload.
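The break-even comparison reduces to cost per unit of work: hourly rate divided by relative throughput. A sketch using the spot rates and speedup ranges quoted above (the speedup multipliers are this page's estimates, not benchmarks):

```python
def cost_per_unit_work(hourly_rate: float, relative_throughput: float) -> float:
    """Effective $/hr normalized by throughput (A100 80GB = 1.0 baseline)."""
    return hourly_rate / relative_throughput

a100 = cost_per_unit_work(0.89, 1.0)         # A100 80GB spot baseline
h100_train = cost_per_unit_work(1.49, 2.75)  # H100 spot, ~2.5-3x training speedup
h100_infer = cost_per_unit_work(1.49, 1.5)   # H100 spot, ~1.5x inference speedup

# For training, H100's speedup outweighs its price premium;
# for inference, the two land close to cost-neutral.
print(f"A100 baseline:    ${a100:.2f}")
print(f"H100 (training):  ${h100_train:.2f}")
print(f"H100 (inference): ${h100_infer:.2f}")
```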
What is the difference between A100 40GB and A100 80GB?
The A100 80GB has double the VRAM (80GB vs 40GB HBM2e) and higher memory bandwidth (2.0 TB/s vs 1.6 TB/s). For models below 20B parameters, the A100 40GB is generally adequate and cheaper. Above that, 80GB is the single-GPU minimum: it handles roughly 20B–35B parameter models in FP16 without quantization, and 70B-class models with 8-bit quantization. The 80GB variant is the dominant cloud choice because it covers the full range of current-generation model sizes.
Is A100 80GB still worth renting in 2026?
Yes, for the right workloads. A100 80GB at $1.29/hr on-demand is about 35% cheaper than H100 at $1.99/hr, and for inference — where H100's advantage is roughly 1.5× — that makes the two close to cost-neutral on throughput-per-dollar. For inference serving at low-to-moderate throughput requirements, A100 80GB is a practical and cost-effective choice.

Compare All Providers for Your Workload

Use GridStackHub's GPU cost calculator to get a ranked comparison with hidden-cost breakdown (egress + storage) across all providers.

📊 Open Calculator · View All GPU Pricing