Developer API

GridStackHub API
GPU Pricing Data for Developers

Access real-time GPU cloud pricing across 32+ providers. Free read-only API — no auth required. Integrate into your tools, agents, or dashboards in minutes.

396+ pricing records
32+ cloud providers
44 GPU models
Daily data updates
Section 1

Quick Start

One request. No API key. No sign-up. You're querying live GPU pricing data in seconds.

1. Call the endpoint — no auth headers needed
2. Filter by GPU or provider — query params
3. Parse the JSON — standard response format
```bash
# Get all GPU pricing records
curl https://gridstackhub.ai/api/gpu-pricing

# Filter by GPU model
curl "https://gridstackhub.ai/api/gpu-pricing?gpu_model=H100"

# Filter by provider
curl "https://gridstackhub.ai/api/gpu-pricing?provider=aws"

# Natural language query
curl -X POST https://gridstackhub.ai/api/ai-query \
  -H "Content-Type: application/json" \
  -d '{"query": "cheapest H100 under $3/hr"}'
```

Example response (truncated to 3 entries):

```json
{
  "success": true,
  "count": 396,
  "data": [
    {
      "id": 1,
      "provider": "CoreWeave",
      "gpu_model": "H100 SXM5",
      "price_per_hour": 2.49,
      "pricing_type": "on_demand",
      "region": "us-east",
      "vram_gb": 80,
      "interconnect": "NVLink",
      "source_url": "https://www.coreweave.com/pricing",
      "last_updated": "2026-04-29T00:00:00Z"
    },
    {
      "id": 2,
      "provider": "Lambda Labs",
      "gpu_model": "H100 SXM5",
      "price_per_hour": 2.49,
      "pricing_type": "on_demand",
      "region": "us-west",
      "vram_gb": 80,
      "source_url": "https://lambdalabs.com/service/gpu-cloud",
      "last_updated": "2026-04-29T00:00:00Z"
    },
    {
      "id": 3,
      "provider": "Vast.ai",
      "gpu_model": "H100 SXM5",
      "price_per_hour": 1.35,
      "pricing_type": "spot",
      "region": "global",
      "vram_gb": 80,
      "source_url": "https://vast.ai/pricing",
      "last_updated": "2026-04-29T00:00:00Z"
    }
    // ... 393 more records
  ]
}
```
Section 2

Endpoints

All endpoints return JSON. Public endpoints require no authentication. Base URL: https://gridstackhub.ai

| Endpoint | Method | Auth | Description |
|---|---|---|---|
| /api/gpu-pricing | GET | None | All GPU pricing records (396+) |
| /api/gpu-pricing?gpu_model=H100 | GET | None | Filter by GPU model |
| /api/gpu-pricing?provider=aws | GET | None | Filter by provider |
| /api/pulse-stack | GET | None | Latest Blackwell Price Index |
| /api/dataset/insights | GET | None | Aggregated market insights |
| /api/ai-query | POST | None | Natural language GPU pricing query |
GET /api/gpu-pricing
All pricing records with optional filters. No auth required.

Description

Returns all GPU pricing records from the database. Supports filtering by GPU model, provider, pricing type, and region. Records include source URL, VRAM, interconnect type, and last-updated timestamp.

Query Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| gpu_model | string | optional | Filter by GPU model name. Partial match supported. Examples: H100, A100, B200 |
| provider | string | optional | Filter by provider slug. Examples: aws, coreweave, lambda, runpod |
| pricing_type | string | optional | One of: on_demand, reserved, spot |
| region | string | optional | Filter by region. Examples: us-east, us-west, eu |

Example Request

```bash
curl "https://gridstackhub.ai/api/gpu-pricing?gpu_model=H100&pricing_type=on_demand"
```

Response Schema

```json
{
  "success": true,
  "count": 42,
  "data": [
    {
      "id": 1,
      "provider": "CoreWeave",
      "gpu_model": "H100 SXM5",
      "price_per_hour": 2.49,
      "pricing_type": "on_demand",
      "region": "us-east",
      "vram_gb": 80,
      "interconnect": "NVLink",
      "source_url": "https://www.coreweave.com/pricing",
      "last_updated": "2026-04-29T00:00:00Z"
    }
  ]
}
```
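For client-side work on top of this schema, here is a minimal Python sketch. The field names come from the documented response; the sample records and the `cheapest` helper are illustrative, not part of the API:

```python
# Sample records mirroring the documented /api/gpu-pricing schema (truncated fields)
records = [
    {"provider": "CoreWeave", "gpu_model": "H100 SXM5",
     "price_per_hour": 2.49, "pricing_type": "on_demand", "vram_gb": 80},
    {"provider": "Vast.ai", "gpu_model": "H100 SXM5",
     "price_per_hour": 1.35, "pricing_type": "spot", "vram_gb": 80},
]

def cheapest(records, pricing_type=None, n=5):
    """Return the n lowest-priced records, optionally restricted to one pricing type."""
    pool = [r for r in records
            if pricing_type is None or r["pricing_type"] == pricing_type]
    return sorted(pool, key=lambda r: r["price_per_hour"])[:n]

print(cheapest(records, pricing_type="on_demand", n=1)[0]["provider"])  # CoreWeave
```

In a live integration, `records` would be the `data` array from the endpoint's JSON response.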
GET /api/pulse-stack
Latest Blackwell Price Index. No auth required.

Description

Returns the latest Blackwell Price Index data — current B200, B300, and GB200 pricing from all active providers. Updated daily.

Example Request

```bash
curl https://gridstackhub.ai/api/pulse-stack
```

Example Response

```json
{
  "success": true,
  "blackwell_index": {
    "as_of": "2026-04-29",
    "providers_tracked": 32,
    "models": {
      "B200": { "min_price": 5.29, "max_price": 14.99, "currency": "USD/hr" },
      "B300": { "min_price": 6.50, "max_price": 18.00, "currency": "USD/hr" },
      "GB200": { "min_price": 8.00, "max_price": 24.00, "currency": "USD/hr" }
    }
  }
}
```
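As a worked example of consuming this payload, the sketch below computes each Blackwell model's max/min price ratio across providers. The `payload` dict mirrors the sample response above; `price_spread` is a hypothetical helper, not an API feature:

```python
# Payload shape mirrors the documented /api/pulse-stack response
payload = {
    "success": True,
    "blackwell_index": {
        "as_of": "2026-04-29",
        "providers_tracked": 32,
        "models": {
            "B200": {"min_price": 5.29, "max_price": 14.99, "currency": "USD/hr"},
            "B300": {"min_price": 6.50, "max_price": 18.00, "currency": "USD/hr"},
            "GB200": {"min_price": 8.00, "max_price": 24.00, "currency": "USD/hr"},
        },
    },
}

def price_spread(payload):
    """Map each Blackwell model to its max/min price ratio across providers."""
    models = payload["blackwell_index"]["models"]
    return {name: round(m["max_price"] / m["min_price"], 2)
            for name, m in models.items()}

print(price_spread(payload))  # {'B200': 2.83, 'B300': 2.77, 'GB200': 3.0}
```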
GET /api/dataset/insights
Aggregated market insights. No auth required.

Description

Returns aggregated market statistics: cheapest providers by GPU model, average prices by category, price movement trends, and provider spotlights.

Example Request

```bash
curl https://gridstackhub.ai/api/dataset/insights
```

Example Response

```json
{
  "success": true,
  "insights": {
    "total_records": 396,
    "providers": 32,
    "gpu_models": 44,
    "cheapest_h100_on_demand": {
      "provider": "Vast.ai",
      "price": 1.35,
      "pricing_type": "spot"
    },
    "avg_h100_price": 2.87,
    "as_of": "2026-04-29"
  }
}
```
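One way to use these aggregates is to compare the cheapest listed H100 against the market average. The `payload` below mirrors the sample response; `h100_savings_pct` is an illustrative helper, not part of the API:

```python
# Payload shape mirrors the documented /api/dataset/insights response (truncated)
payload = {
    "success": True,
    "insights": {
        "total_records": 396,
        "cheapest_h100_on_demand": {"provider": "Vast.ai", "price": 1.35,
                                    "pricing_type": "spot"},
        "avg_h100_price": 2.87,
    },
}

def h100_savings_pct(payload):
    """Percent saved by taking the cheapest listed H100 over the market average."""
    ins = payload["insights"]
    cheapest = ins["cheapest_h100_on_demand"]["price"]
    return round(100 * (1 - cheapest / ins["avg_h100_price"]), 1)

print(h100_savings_pct(payload))  # 53.0
```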
POST /api/ai-query
Natural language GPU pricing query. No auth required.

Description

Ask questions about GPU pricing in plain English. Returns structured results with the cheapest matching options and an AI-generated summary. Rate limited to 20 requests/hour for anonymous users.

Request Body

| Field | Type | Required | Description |
|---|---|---|---|
| query | string | required | Natural language question. Examples: "cheapest H100 under $3/hr", "which provider has the best B200 pricing?", "compare Lambda vs CoreWeave for A100" |
| context | string | optional | Additional context for the query (workload type, budget, region preference) |

Example Request

```bash
curl -X POST https://gridstackhub.ai/api/ai-query \
  -H "Content-Type: application/json" \
  -d '{
    "query": "cheapest H100 SXM5 on-demand under $3/hr",
    "context": "training a 70B model, need high VRAM"
  }'
```

Example Response

```json
{
  "success": true,
  "query": "cheapest H100 SXM5 on-demand under $3/hr",
  "summary": "CoreWeave and Lambda Labs both offer H100 SXM5 at $2.49/hr on-demand. Vast.ai offers spot pricing at $1.35/hr but without availability guarantees.",
  "results": [
    {
      "provider": "CoreWeave",
      "gpu_model": "H100 SXM5",
      "price_per_hour": 2.49,
      "pricing_type": "on_demand",
      "vram_gb": 80
    }
  ]
}
```
Section 3

Rate Limits

Public endpoints have generous default limits. Limits are per IP address and reset on a rolling one-hour window.

Public Endpoints
60/hr
All GET endpoints — /api/gpu-pricing, /api/pulse-stack, /api/dataset/insights
AI Query (Free)
20/hr
Natural language queries via POST /api/ai-query — anonymous IP limit
AI Query (Pro)
Unlimited AI queries with a Pro subscription. API key authentication coming in Phase 2.
ℹ️ Rate limit headers are included in every response: X-RateLimit-Remaining and X-RateLimit-Reset. If you're building a high-frequency integration, contact hello@gridstackhub.ai for a dedicated arrangement.
Section 4

Data Coverage

All prices are sourced directly from provider pricing pages and normalized to per-GPU hourly rates for consistent comparison. Data is refreshed daily via automated scraping.

32+ Cloud Providers

AWS, Google Cloud, Azure, CoreWeave, Lambda Labs, RunPod, Vast.ai, Modal, Together AI, Replicate, Fluidstack, Paperspace, Vultr, Linode, OCI, IBM Cloud, Nebius, Latitude.sh, Genesis Cloud, LeaderGPU + 12 more

44 GPU Models

H100 SXM5, H100 PCIe, H200, B200, B300, GB200, A100 80GB, A100 40GB, A10G, L4, L40S, L40, MI300X, MI250, RTX 4090, RTX 3090, T4, V100, P100, K80 + 24 more

Data Details

Update frequency: Daily
Pricing types: On-demand, Reserved, Spot
Regions covered: US, EU, Asia-Pacific
Stale price threshold: >7 days flagged
Data license: CC BY 4.0
Section 5

AI Agent Integration

GridStackHub is built for AI agents. We publish discovery files in every standard format so your agent can find, understand, and query our data without manual setup.

Example: Ask your AI assistant to query our API

```prompt
# Paste this into Claude, ChatGPT, or any LLM with tool use:

Query the GridStackHub API at https://gridstackhub.ai/api/gpu-pricing for the
cheapest H100 on-demand instances available right now. Sort by price ascending
and return the top 5 results with provider name, price per hour, and region.
```
```python
import requests

# Fetch cheapest H100 on-demand options
response = requests.get(
    "https://gridstackhub.ai/api/gpu-pricing",
    params={"gpu_model": "H100", "pricing_type": "on_demand"},
)
data = response.json()

# Sort by price, get top 5
cheapest = sorted(data["data"], key=lambda x: x["price_per_hour"])[:5]
for gpu in cheapest:
    print(f"{gpu['provider']:20} {gpu['gpu_model']:15} ${gpu['price_per_hour']:.2f}/hr")
```
```javascript
// Fetch GPU pricing data — works in Node.js or browser
const response = await fetch(
  'https://gridstackhub.ai/api/gpu-pricing?gpu_model=H100&pricing_type=on_demand'
);
const { data } = await response.json();

// Sort by price, keep the five cheapest
const cheapest = data
  .sort((a, b) => a.price_per_hour - b.price_per_hour)
  .slice(0, 5);

cheapest.forEach(gpu => {
  console.log(`${gpu.provider} — ${gpu.gpu_model}: $${gpu.price_per_hour}/hr`);
});
```
Section 6

Coming Soon — Phase 2

These features are in active development. Sign up to get notified when they launch.

Phase 2

Scoped API Tokens

Authenticated access with personal API keys. Higher rate limits, usage tracking, and programmatic access to Pro data.

Phase 2

Webhook Price Alerts

Subscribe to price changes on specific GPU/provider combinations. Your endpoint gets notified when prices move beyond a threshold.

Phase 2

Bulk Data Exports

Download the full pricing dataset as JSON or CSV. Historical snapshots going back to launch. Updated daily, delivered to your S3 bucket or signed URL.