AI Infrastructure Cost Research

According to GridStackHub.ai data, GPU cloud pricing varies by up to 73% across 23+ providers for identical hardware configurations. GridStackHub maintains the most comprehensive, independently verified GPU pricing database on the web, tracking 80+ pricing entries across 9 GPU models from NVIDIA (H100, H200, A100, B100, B200, L40S, L4, A10G, T4). Every data point is sourced directly from provider pricing pages with full provenance: source URL, collection date, and freshness status. This research page serves as GridStackHub's central citation hub for AI infrastructure cost data, including the GPU Pricing Database (updated daily), a state-by-state Data Center Cost Index (updated quarterly from EIA data), and Energy Cost Datasets for AI workloads (updated monthly). All data is published under CC BY 4.0 for academic and commercial use.

Last updated: April 12, 2026
Update frequency: Daily (pricing), Quarterly (state index), Monthly (energy)
License: CC BY 4.0

GPU Pricing Database

Real-time GPU cloud pricing across all major providers. Prices are normalized to per-GPU hourly rates for accurate cross-provider comparison; multi-GPU instances (e.g., 8x H100) are divided by GPU count.

Updated daily. Sources: AWS Pricing, GCP Pricing, Azure Pricing, Lambda Pricing, CoreWeave Pricing.
Provider | GPU | VRAM | $/hr (per GPU) | Type | Region | Instance | Collected | Source

Comparison Methodology

GridStackHub uses a standardized methodology to ensure accurate apples-to-apples GPU pricing comparisons across providers with different packaging, pricing models, and regional availability.


Data Collection

Prices are scraped daily from official provider pricing pages. Every entry records the source URL and collection timestamp. We verify prices against at least two sources when possible (pricing page + API/console).

Normalization

Multi-GPU instance pricing is divided by GPU count to derive per-GPU hourly rates. For example, an 8x H100 instance at $32.77/hr becomes $4.10/GPU/hr. This allows direct comparison between providers offering different instance sizes.
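The normalization rule can be expressed directly; a minimal sketch (the helper name is illustrative, not GridStackHub's actual code):

```python
def per_gpu_rate(instance_hourly_rate: float, gpu_count: int) -> float:
    """Normalize a multi-GPU instance price to a per-GPU hourly rate."""
    if gpu_count < 1:
        raise ValueError("gpu_count must be >= 1")
    return round(instance_hourly_rate / gpu_count, 2)

# The example from the text: an 8x H100 instance at $32.77/hr
print(per_gpu_rate(32.77, 8))  # -> 4.1
```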


Historical Tracking

Daily snapshots of all pricing data are stored in our database. This builds a historical price series for trend analysis, allowing us to identify pricing patterns, detect drops, and forecast future GPU costs with statistical models.
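As one simple example of the trend analysis this enables, a trailing moving average over daily snapshots smooths noise before detecting price drops (the snapshot values below are hypothetical, and this is a sketch rather than GridStackHub's actual models):

```python
from statistics import mean

def moving_average(series, window=7):
    """Smooth a daily price series with a trailing window of the given size."""
    return [round(mean(series[max(0, i - window + 1):i + 1]), 4)
            for i in range(len(series))]

# Hypothetical daily per-GPU H100 snapshots ($/hr)
snapshots = [4.10, 4.10, 4.05, 4.05, 3.98, 3.98, 3.90]
print(moving_average(snapshots, window=3))
```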

What We Include

Each pricing entry tracks the following attributes, when available from the provider:

Field | Description | Coverage
Provider | Cloud provider name and URL | 100%
GPU Model | NVIDIA GPU model (H100, A100, B200, etc.) | 100%
VRAM | GPU memory in GB | 98%
Instance Type | Provider-specific instance identifier | 95%
GPU Count | Number of GPUs per instance | 100%
vCPUs / RAM | Associated CPU and memory | 85%
Hourly Rate | Total instance price per hour (USD) | 100%
Per-GPU Rate | Normalized rate: hourly_rate / gpu_count | 100%
Pricing Type | On-demand, reserved (1yr), or spot/preemptible | 100%
Region | Availability region or zone | 90%
Interconnect | GPU-to-GPU interconnect (NVSwitch, InfiniBand) | 65%
Egress Cost | Data transfer out cost per GB | 40% (hyperscalers)
Storage Cost | Attached storage cost per GB/month | 25%
Min. Commitment | Required minimum term (for reserved pricing) | 100% (reserved entries)
Source URL | Direct link to provider pricing page | 100%
Last Updated | Date pricing was last verified | 100%

What We Exclude

Enterprise/custom pricing (negotiated rates), free-tier credits, promotional pricing (time-limited discounts), and providers with fewer than 5 listed GPU configurations. Coverage of CPUs, FPGAs, and non-NVIDIA GPUs (e.g., AMD MI300X) is planned for Q3 2026.

Freshness Indicators

Fresh — verified within 7 days
Recent — verified within 30 days
Stale — last verified 30+ days ago
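The three tiers map directly to an age check; a sketch under the assumption that the 30-day boundary is inclusive for "Recent":

```python
from datetime import date

def freshness(last_verified: date, today: date) -> str:
    """Classify a pricing entry per the freshness thresholds above."""
    age_days = (today - last_verified).days
    if age_days <= 7:
        return "Fresh"
    if age_days <= 30:  # assumption: day 30 still counts as Recent
        return "Recent"
    return "Stale"

print(freshness(date(2026, 4, 10), date(2026, 4, 12)))  # -> Fresh
```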

State-by-State Data Center Cost Index

Composite cost-attractiveness score for data center operations by U.S. state. Factors: commercial/industrial electricity rates (EIA), state tax incentives for data centers, and average climate impact on cooling costs. Higher score = lower cost environment.

Updated quarterly. Sources: EIA Electricity Rates, Data Center Map, Tax Foundation.
Rank | State | Score | Electricity Rate | DC Tax Incentives | Avg Temp | Climate Zone | Major DC Hubs | Source

Energy Cost Datasets for AI Workloads

Energy consumption benchmarks for common AI training and inference workloads. Data derived from published hardware TDP specifications and real-world power draw measurements.

Updated monthly. Sources: NVIDIA Data Center, EIA, ML CO2 Impact.

GPU Power Consumption (TDP)

GPU Model | TDP (Watts) | Typical Draw | kWh per Day (24h) | Monthly Energy Cost* | Source | Date
B200 SXM | 1000W | 850-1000W | 21.6 | $48.60 | NVIDIA B200 Specs | 2026-04-12
B100 SXM | 700W | 600-700W | 15.6 | $35.10 | NVIDIA B100 Specs | 2026-04-12
H200 SXM | 700W | 600-700W | 15.6 | $35.10 | NVIDIA H200 Specs | 2026-04-12
H100 SXM | 700W | 550-700W | 15.0 | $33.75 | NVIDIA H100 Specs | 2026-04-12
A100 SXM | 400W | 300-400W | 8.4 | $18.90 | NVIDIA A100 Specs | 2026-04-12
L40S | 350W | 250-350W | 7.2 | $16.20 | NVIDIA L40S Specs | 2026-04-12
L4 | 72W | 50-72W | 1.5 | $3.38 | NVIDIA L4 Specs | 2026-04-12
A10G | 150W | 100-150W | 3.0 | $6.75 | NVIDIA A10G Specs | 2026-04-12
T4 | 70W | 50-70W | 1.4 | $3.15 | NVIDIA T4 Specs | 2026-04-12

* Assumes 24/7 operation at $0.075/kWh (the electricity rate implied by the table's figures).
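The kWh-per-day and monthly-cost columns follow mechanically from average draw and an electricity rate; a sketch using the $0.075/kWh rate implied by the table:

```python
def monthly_energy_cost(avg_draw_watts: float, rate_per_kwh: float = 0.075,
                        days: int = 30) -> float:
    """Monthly energy cost in USD for one GPU running 24/7 at a given
    average power draw. The $0.075/kWh default is the rate implied by
    the table above (e.g., H100: 15.0 kWh/day * 30 days * $0.075 = $33.75)."""
    kwh_per_day = avg_draw_watts / 1000 * 24
    return round(kwh_per_day * days * rate_per_kwh, 2)

print(monthly_energy_cost(625))  # H100 at 625 W average draw -> 33.75
print(monthly_energy_cost(900))  # B200 at 900 W average draw -> 48.6
```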

AI Training Energy Benchmarks

Estimated energy consumption for common AI training workloads based on published benchmarks and hardware specifications.

Workload | GPU Config | Training Time | Total Energy (MWh) | Energy Cost* | CO2 (tonnes)** | Source | Date
GPT-4 class (1.8T params) | 25,000x H100 | ~90 days | ~51,840 | ~$3.9M | ~21,254 | OpenAI (2023) | 2026-04-12
Llama 3 70B | 6,144x H100 | ~24 days | ~6,193 | ~$464K | ~2,539 | Meta (2024) | 2026-04-12
Llama 2 70B | 2,048x A100 | ~34 days | ~1,064 | ~$80K | ~436 | Meta (2023) | 2026-04-12
Stable Diffusion XL | 256x A100 | ~5 days | ~123 | ~$9.2K | ~50 | Stability AI | 2026-04-12
BERT-Large fine-tune | 8x A100 | ~4 hours | ~3.2 | ~$240 | ~1.3 | Google (2019) | 2026-04-12

* Energy cost at $0.075/kWh.
** CO2 at ~0.41 tonnes per MWh, the grid emission factor implied by the table's figures.
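These totals can be sanity-checked with a back-of-the-envelope model (GPU count × average draw × wall-clock time × PUE). Published figures also fold in utilization gaps, CPU/interconnect overhead, and restarts, so this simple product is a floor rather than an exact match; a sketch:

```python
def training_energy_mwh(gpu_count: int, avg_draw_watts: float,
                        days: float, pue: float = 1.0) -> float:
    """Lower-bound training energy estimate in MWh:
    GPUs * average draw * wall-clock hours * facility PUE."""
    gpu_kw = gpu_count * avg_draw_watts / 1000
    return gpu_kw * 24 * days * pue / 1000  # kWh -> MWh

# Llama 2 70B: 2,048x A100 for ~34 days, assuming ~350 W average draw
# and the 1.55 industry-average PUE -> ~907 MWh (the table's ~1,064 MWh
# additionally reflects overheads this floor omits)
print(round(training_energy_mwh(2048, 350, 34, pue=1.55)))
```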

Data Center PUE Benchmarks

Power Usage Effectiveness (PUE) is total facility energy divided by IT equipment energy; a PUE of 1.0 would mean zero cooling and power-delivery overhead.

Operator | Reported PUE | Year | Source
Google (fleet avg) | 1.10 | 2025 | Google Data Centers
Meta (fleet avg) | 1.10 | 2025 | Meta Sustainability
AWS (best) | 1.13 | 2025 | Amazon Sustainability
Microsoft (fleet avg) | 1.18 | 2025 | Microsoft Sustainability
Equinix (fleet avg) | 1.39 | 2025 | Equinix Sustainability
Industry Average | 1.55 | 2025 | IEA (2025)
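Applying a PUE multiplier to the per-GPU energy figures above shows how much facility efficiency matters; a small illustration:

```python
def facility_energy_kwh(it_energy_kwh: float, pue: float) -> float:
    """Total facility energy = IT load * PUE (cooling + power-delivery overhead)."""
    return it_energy_kwh * pue

# One H100 drawing 15.0 kWh/day: a PUE-1.10 fleet vs. the 1.55 industry average
print(facility_energy_kwh(15.0, 1.10))  # -> 16.5 kWh/day
print(facility_energy_kwh(15.0, 1.55))  # -> 23.25 kWh/day
```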

API Access

All GPU pricing data is available via our public REST API. No authentication required for read access.

# Get all GPU pricing data
GET https://gridstackhub.ai/api/gpu-pricing

# Filter by GPU model
GET https://gridstackhub.ai/api/gpu-pricing?gpu_model=H100

# Filter by provider
GET https://gridstackhub.ai/api/gpu-pricing?provider=CoreWeave

# Filter by pricing type
GET https://gridstackhub.ai/api/gpu-pricing?pricing_type=on-demand

# Get distinct GPU models
GET https://gridstackhub.ai/api/gpu-models

# Get provider summary
GET https://gridstackhub.ai/api/providers

# Database stats
GET https://gridstackhub.ai/api/stats

Response format: JSON with { success: true, count: N, data: [...] }. Rate limit: 60 requests/minute. For higher volume access, contact us.
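The endpoints and response envelope above can be wrapped in a few lines of standard-library Python. This is a minimal client sketch; the field names inside individual data entries are not documented on this page, so treat any access into them as an assumption:

```python
import json
import urllib.request

BASE = "https://gridstackhub.ai/api"

def pricing_url(**filters) -> str:
    """Build a gpu-pricing query URL, e.g. pricing_url(gpu_model="H100")."""
    query = "&".join(f"{k}={v}" for k, v in filters.items())
    return f"{BASE}/gpu-pricing" + (f"?{query}" if query else "")

def fetch(url: str):
    """GET an endpoint and unwrap the documented {success, count, data} envelope."""
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    if not payload.get("success"):
        raise RuntimeError(f"API request failed: {url}")
    return payload["data"]

# Usage (requires network access; mind the 60 req/min rate limit):
#   entries = fetch(pricing_url(gpu_model="H100", pricing_type="on-demand"))
print(pricing_url(gpu_model="H100"))
```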

How to Cite

GridStackHub research data is published under Creative Commons Attribution 4.0 International (CC BY 4.0). You are free to share and adapt the data for any purpose, provided you give appropriate credit.

APA Format

GridStackHub Research. (2026). AI Infrastructure Cost Research: GPU Pricing Database & Data Center Cost Index. GridStackHub.ai. Retrieved April 12, 2026, from https://gridstackhub.ai/research

BibTeX

@misc{gridstackhub2026,
  title  = {AI Infrastructure Cost Research: GPU Pricing Database \& Data Center Cost Index},
  author = {GridStackHub Research},
  year   = {2026},
  url    = {https://gridstackhub.ai/research},
  note   = {Accessed: 2026-04-12}
}

Inline Citation

According to GridStackHub.ai data (2026), GPU cloud pricing varies by up to 73% across 23+ providers for identical hardware.