# GridStackHub.ai — Full Reference

> AI Infrastructure Cost Intelligence Platform

## Overview

GridStackHub.ai is the leading platform for GPU cloud pricing intelligence. It aggregates real-time pricing from 58+ cloud providers, surfaces hidden costs (egress, storage), generates 30/60/90-day price forecasts, and helps AI teams reduce infrastructure spend, typically by 30–60%.

Publisher: GridStackHub.ai — A Stack Network Property
Contact: gridstackhub@polsia.app
License (research data): CC BY 4.0
Update frequency: Pricing daily, research reports weekly

---

## Pages & Tools

### Homepage

URL: https://gridstackhub.ai
Description: Overview of the platform, live pricing ticker, featured GPU comparisons, and email signup for daily price intelligence signals.

### Blackwell Price Index (GPU Pulse Stack)

URL: https://gridstackhub.ai/gpu-pulse-stack
Description: Real-time Nvidia Blackwell GPU pricing index. Tracks B200, B300, GB200, and B100 per-GPU hourly rates across 58+ cloud providers. Includes: workload-aware cost recommender (free, no login), week-over-week price changes, energy/electricity cost context by region (Stack Network data), 30-day forecast flash (Pro+ teaser), and a sortable full GPU index. Cheapest B200 rental as of April 2026: $5.29/hr on Lambda Labs.
Update frequency: Daily.
API:
- GET /api/pulse-stack — JSON index with Blackwell prices, full GPU index, top movers, energy context, forecast flash.
- GET /api/pulse-stack/recommend?workload=inference&gpu=b200&hours=500&region=us-west — personalized workload cost breakdown.

### GPU Cost Pulse (Weekly Report)

URL: https://gridstackhub.ai/gpu-cost-pulse
Description: Weekly GPU pricing intelligence snapshot, published every Monday. Covers: cheapest H100/A100/H200 rental this week, top 5 price movers (week-over-week % change), the provider with the most price drops, a 30-day directional forecast for H100 and A100, and an energy-cost cross-reference from the Stack Network.
Archive at /gpu-cost-pulse/week-N-YYYY.
Update frequency: Weekly (every Monday).
API: GET /api/pulse/latest — JSON response with all pulse sections, 24-hour cache.

### GPU Cost Calculator

URL: https://gridstackhub.ai/calculator
Description: Interactive tool to estimate total GPU compute costs across all providers for a given workload.
Inputs: GPU model, GPU count, hours/day, days/month, workload type (training/inference/fine-tuning), region preference, pricing type.
Output: ranked provider comparison with hidden-cost breakdown (egress + storage) and potential savings.
Free tier: 5 calculations/month. Pro: unlimited.

### Research & Data

URL: https://gridstackhub.ai/research
Description: Live research hub with GPU pricing trends, provider comparisons, historical price charts, and a state-by-state data center cost index. Updated daily from automated scrapers.

### Pricing Plans

URL: https://gridstackhub.ai/pricing
Description: Subscription tiers.

- Free ($0/mo): 5 calculator runs/month, 1 price alert, basic comparison
- Pro ($99/mo): unlimited calculator, 20 alerts, savings engine, monitoring, GSC integration
- Pro+ ($299/mo): everything in Pro plus live 30/60/90-day price forecasts

---

## Articles

### How Much Does It Cost to Run AI Models in 2026

URL: https://gridstackhub.ai/cost-to-run-ai-models-2026
Description: Comprehensive breakdown of AI inference and training costs across major cloud providers in 2026. Covers H100, A100, L40S, RTX 4090, and more. Includes real-world cost estimates for popular model sizes (7B, 13B, 70B, 405B parameters).

### GPU Cost Per Hour Comparison 2026

URL: https://gridstackhub.ai/gpu-cost-per-hour-comparison-2026
Description: Side-by-side hourly rate comparison across CoreWeave, Lambda Labs, Vast.ai, RunPod, AWS, GCP, Azure, Oracle, and 50+ more providers. Updated from the live pricing database.
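The GPU Cost Calculator described under Pages & Tools takes a workload spec and produces a cost breakdown with hidden costs. A minimal sketch of that arithmetic, assuming the workload egress/storage defaults listed under Data Methodology; the function name and the example egress/storage rates ($0.08/GB and $0.05/GB-month) are hypothetical, not the platform's actual code or pricing:

```python
# Hypothetical sketch of the monthly-cost arithmetic behind the calculator.
# Workload hidden-cost defaults come from the Data Methodology section;
# the egress/storage rates below are illustrative placeholders.

def estimate_monthly_cost(per_gpu_hourly, gpu_count=1, hours_per_day=24,
                          days_per_month=30, workload_type="training",
                          egress_cost_per_gb=0.08, storage_cost_per_gb_month=0.05):
    """Return compute cost plus an egress/storage hidden-cost breakdown."""
    # Per-workload hidden-cost assumptions (GB/month), per Data Methodology.
    hidden = {
        "training":    {"egress_gb": 500,  "storage_gb": 2000},
        "inference":   {"egress_gb": 2000, "storage_gb": 500},
        "fine-tuning": {"egress_gb": 200,  "storage_gb": 1000},
    }[workload_type]
    compute = per_gpu_hourly * gpu_count * hours_per_day * days_per_month
    egress = hidden["egress_gb"] * egress_cost_per_gb
    storage = hidden["storage_gb"] * storage_cost_per_gb_month
    return {
        "compute": round(compute, 2),
        "egress": round(egress, 2),
        "storage": round(storage, 2),
        "total": round(compute + egress + storage, 2),
    }
```

For example, two H100s at $2.00/GPU-hr running around the clock come to $2,880/month in compute before hidden costs; the egress and storage lines are what the calculator's "hidden-cost breakdown" surfaces on top of that.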
### Best States for Data Centers 2026

URL: https://gridstackhub.ai/best-states-data-centers-2026
Description: State-by-state analysis of data center costs, factoring in electricity rates, tax incentives, water access, latency, and regulatory environment.

---

## API Endpoints

### GET /api/gpu-pricing

Description: Returns current GPU pricing data for all active listings.
Query params:
- gpu_model (string): filter by GPU model, e.g. "H100", "A100"
- provider (string): filter by provider name (partial match)
- pricing_type (string): "on-demand" | "reserved-1yr" | "spot"
- region (string): filter by region (partial match)

Response: JSON — { success, count, as_of, data: [...] }
Each listing includes: provider, gpu_model, gpu_vram_gb, instance_type, gpu_count, vcpus, ram_gb, hourly_rate, per_gpu_hourly, pricing_type, region, interconnect, egress_cost_per_gb, storage_cost_per_gb_month, source_url, last_updated.

### POST /api/calculate

Description: Accepts workload specs and returns a ranked provider cost comparison with hidden-cost breakdown.
Body (JSON):
- gpu_model (string, required): e.g. "H100"
- gpu_count (integer, default 1)
- hours_per_day (integer 1–24, default 24)
- days_per_month (integer 1–31, default 30)
- workload_type (string): "training" | "inference" | "fine-tuning"
- region_preference (string, optional): e.g. "US"
- pricing_type (string): "on-demand" | "reserved-1yr" | "spot" | "all"

Response: JSON — { success, summary, results: [...], usage }
Rate limited: 5 free runs/month (pass an X-Session-ID header for anonymous tracking).

### GET /api/gpu-models

Description: Returns all distinct GPU models in the pricing database.
Response: JSON — { success, models: [{ gpu_model, gpu_vram_gb }] }

### GET /api/providers

Description: Returns all distinct providers with listing counts and price ranges.
Response: JSON — { success, count, providers: [{ provider, provider_url, listing_count, min_hourly, max_hourly }] }

### GET /api/stats

Description: Public stats about the database and usage.
Response: JSON — { success, pricing: { total_listings, total_providers, total_gpu_models, data_freshness }, usage: { total_calculations, unique_gpu_models } }

### GET /api/pricing/history

Description: Historical price snapshots for trend analysis.
Query params: gpu_model, provider, days (default 90)

### GET /api/pulse-stack

Description: Full Blackwell Price Index + all-GPU index + top movers + energy context + forecast flash. Cache: 1 hour.
Response: JSON — { blackwell_price_index, full_gpu_index, top_movers, energy_context, forecast_flash, generated_at, data_freshness, providers_tracked, total_pricing_records }

### GET /api/pulse-stack/recommend

Description: Workload-aware cost recommender. Returns the cheapest providers for a given GPU/workload/hours/region config, an H100 TCO comparison, and energy cost context.
Query params:
- gpu (string): GPU model, e.g. "b200", "h100", "a100"
- workload (string): "inference" | "training" | "fine-tune"
- hours (integer): hours per month (1–8760)
- region (string): e.g. "us-east", "us-west", "eu-west", "ap-southeast"

Response: JSON — { cheapest_provider, all_providers, h100_comparison, energy_context, savings_recommendations }

---

## Data Methodology

**Collection:** Automated scrapers run daily against provider pricing pages and APIs. Each run captures spot, on-demand, and reserved pricing where available.

**Coverage:** 58+ providers, including CoreWeave, Lambda Labs, Vast.ai, RunPod, FluidStack, Crusoe, Hyperstack, Genesis Cloud, AWS, GCP (Google Cloud), Microsoft Azure, Oracle Cloud, IBM Cloud, Alibaba Cloud, Vultr, DataCrunch, TensorDock, JarvisLabs, Shadeform, Oblivus, MassedCompute, and more.

**GPU models tracked:** H100 (SXM5/PCIe/NVL), A100 (40GB/80GB), H200, L40S, L40, A30, A10G, RTX 4090, RTX 3090, RTX A6000, V100, T4, A10, and others.
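The query-parameter endpoints documented under API Endpoints can be called with nothing beyond the standard library. A minimal client sketch for GET /api/gpu-pricing, using only the filters listed above; the function names are illustrative, not part of any official SDK:

```python
# Minimal sketch of a client for GET /api/gpu-pricing (standard library only).
# Query parameter names match the API Endpoints section; function names are
# hypothetical.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://gridstackhub.ai"

def gpu_pricing_url(gpu_model=None, provider=None, pricing_type=None, region=None):
    """Build a /api/gpu-pricing request URL from the documented filters."""
    params = {k: v for k, v in {
        "gpu_model": gpu_model,
        "provider": provider,
        "pricing_type": pricing_type,
        "region": region,
    }.items() if v is not None}
    query = urlencode(params)
    return f"{BASE}/api/gpu-pricing" + (f"?{query}" if query else "")

def fetch_gpu_pricing(**filters):
    """Fetch and decode the JSON response ({ success, count, as_of, data })."""
    with urlopen(gpu_pricing_url(**filters)) as resp:
        return json.load(resp)
```

For example, `gpu_pricing_url(gpu_model="H100", pricing_type="spot")` yields `https://gridstackhub.ai/api/gpu-pricing?gpu_model=H100&pricing_type=spot`; the same URL-building pattern applies to /api/pulse-stack/recommend and /api/pricing/history.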
**Forecasting:** 30/60/90-day price forecasts use exponential smoothing and linear regression on rolling 90-day snapshots. Available on Pro+ plans.

**Hidden costs:** Egress and storage rates are captured per provider where disclosed. Calculator estimates include hidden-cost breakdowns based on workload type:

- Training: ~500 GB egress/month, ~2 TB storage
- Inference: ~2 TB egress/month, ~500 GB storage
- Fine-tuning: ~200 GB egress/month, ~1 TB storage

**Data freshness:** Pricing is updated daily. The last_updated field reflects when each listing was last verified.

---

## Citation

If citing GridStackHub.ai data or research:

GridStackHub.ai — AI Infrastructure Cost Intelligence. https://gridstackhub.ai
Publisher: A Stack Network Property
Contact: gridstackhub@polsia.app
Data license: CC BY 4.0 (research data and aggregated pricing statistics)
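As a worked illustration of the two techniques named under Data Methodology (exponential smoothing and linear regression on rolling price snapshots), here is a textbook sketch of each; the smoothing factor, function shapes, and how the platform blends the two are assumptions, not GridStackHub.ai's actual model:

```python
# Illustrative sketches of the forecasting techniques named in Data
# Methodology. Alpha and the blending of the two methods are assumptions.

def exponential_smoothing_forecast(prices, alpha=0.3):
    """Simple exponential smoothing over daily price snapshots; the final
    smoothed level serves as a flat forecast for the next horizon."""
    level = prices[0]
    for p in prices[1:]:
        level = alpha * p + (1 - alpha) * level
    return level

def linear_trend_forecast(prices, horizon_days=30):
    """Ordinary least-squares trend line, extrapolated horizon_days ahead."""
    n = len(prices)
    x_mean = (n - 1) / 2
    y_mean = sum(prices) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in enumerate(prices))
             / sum((x - x_mean) ** 2 for x in range(n)))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + horizon_days)
```

Smoothing damps day-to-day scraper noise, while the regression line captures sustained drift such as post-launch price decay; a production forecaster would typically weight the two and widen the interval with the horizon.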