Access real-time GPU cloud pricing across 32+ providers. Free read-only API — no auth required. Integrate into your tools, agents, or dashboards in minutes.
One request. No API key. No sign-up. You're querying live GPU pricing data in seconds.
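Under the hood, that one request is just an unauthenticated GET. A minimal sketch with Python's standard library (the base URL and endpoint come from the reference below; the response is assumed here to be parseable JSON, though the exact envelope may differ):

```python
import json
import urllib.request

# Base URL from the docs; no API key or sign-up required.
BASE_URL = "https://gridstackhub.ai"

def fetch_gpu_pricing():
    """GET /api/gpu-pricing and parse the JSON body."""
    url = f"{BASE_URL}/api/gpu-pricing"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    records = fetch_gpu_pricing()
    print(f"Fetched {len(records)} pricing records")
```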
Example response (truncated to 3 entries):
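The example body isn't reproduced here, but a single record can be sketched from the fields documented below (gpu_model, provider, pricing_type, region, source URL, VRAM, interconnect, last-updated). The field names and values in this sketch are illustrative placeholders, not actual API output:

```python
# Hypothetical record shape: field names and values are placeholders
# inferred from the documented filters and record fields, not real data.
sample_record = {
    "gpu_model": "H100",           # documented filter: gpu_model
    "provider": "coreweave",       # documented filter: provider (slug)
    "pricing_type": "on_demand",   # one of: on_demand, reserved, spot
    "region": "us-east",           # documented filter: region
    "vram_gb": 80,                 # records include VRAM
    "interconnect": "SXM",         # records include interconnect type
    "source_url": "https://example.com/pricing",  # placeholder source URL
    "last_updated": "2025-01-01T00:00:00Z",       # placeholder timestamp
}
```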
All endpoints return JSON. Public endpoints require no authentication. Base URL: https://gridstackhub.ai
| Endpoint | Method | Auth | Description |
|---|---|---|---|
| /api/gpu-pricing | GET | None | All GPU pricing records (396+) |
| /api/gpu-pricing?gpu_model=H100 | GET | None | Filter by GPU model |
| /api/gpu-pricing?provider=aws | GET | None | Filter by provider |
| /api/pulse-stack | GET | None | Latest Blackwell Price Index |
| /api/dataset/insights | GET | None | Aggregated market insights |
| /api/ai-query | POST | None | Natural language GPU pricing query |
### GET /api/gpu-pricing

Returns all GPU pricing records from the database. Supports filtering by GPU model, provider, pricing type, and region. Each record includes the source URL, VRAM, interconnect type, and a last-updated timestamp.
| Parameter | Type | Required | Description |
|---|---|---|---|
| gpu_model | string | optional | Filter by GPU model name. Partial match supported. Examples: H100, A100, B200 |
| provider | string | optional | Filter by provider slug. Examples: aws, coreweave, lambda, runpod |
| pricing_type | string | optional | One of: on_demand, reserved, spot |
| region | string | optional | Filter by region. Examples: us-east, us-west, eu |
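The filters compose as ordinary query parameters. A small URL-builder sketch (the parameter names come from the table above; the helper itself is hypothetical, not part of any SDK):

```python
from urllib.parse import urlencode

BASE_URL = "https://gridstackhub.ai"

def pricing_url(**filters):
    """Build a /api/gpu-pricing URL from the documented query
    parameters (gpu_model, provider, pricing_type, region)."""
    query = urlencode(filters)
    return f"{BASE_URL}/api/gpu-pricing" + (f"?{query}" if query else "")

# Spot-priced H100s on CoreWeave in us-east:
url = pricing_url(gpu_model="H100", provider="coreweave",
                  pricing_type="spot", region="us-east")
```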
### GET /api/pulse-stack

Returns the latest Blackwell Price Index data: current B200, B300, and GB200 pricing from all active providers. Updated daily.
### GET /api/dataset/insights

Returns aggregated market statistics: cheapest providers by GPU model, average prices by category, price-movement trends, and provider spotlights.
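Both read-only endpoints above follow the same pattern as /api/gpu-pricing: a plain GET returning JSON. A sketch of a shared helper (the response shapes are not specified in this document, so the parsed JSON is returned as-is):

```python
import json
import urllib.request

def get_json(path):
    """GET a public GridStackHub endpoint and parse the JSON body."""
    url = f"https://gridstackhub.ai{path}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# index = get_json("/api/pulse-stack")          # Blackwell Price Index
# insights = get_json("/api/dataset/insights")  # aggregated market stats
```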
### POST /api/ai-query

Ask questions about GPU pricing in plain English. Returns structured results with the cheapest matching options and an AI-generated summary. Rate limited to 20 requests/hour for anonymous users.
| Field | Type | Required | Description |
|---|---|---|---|
| query | string | required | Natural language question. Examples: "cheapest H100 under $3/hr", "which provider has the best B200 pricing?", "compare Lambda vs CoreWeave for A100" |
| context | string | optional | Additional context for the query (workload type, budget, region preference) |
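A request-construction sketch for the endpoint above. The body fields (query, context) come from the table; the JSON content type and the helper itself are assumptions, not a documented client:

```python
import json
import urllib.request

def build_ai_query(query, context=None):
    """Build a POST /api/ai-query request with a JSON body.
    Only the query/context fields are documented; the JSON
    content type is an assumption."""
    body = {"query": query}
    if context:
        body["context"] = context
    return urllib.request.Request(
        "https://gridstackhub.ai/api/ai-query",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_ai_query("cheapest H100 under $3/hr",
                     context="training workload, us-east preferred")
# urllib.request.urlopen(req) would send it; anonymous callers get
# 20 requests/hour, so cache results where you can.
```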
Rate limits on public endpoints are generous by default. Limits are applied per IP address and reset over a rolling one-hour window.
Rate-limited endpoints:

- GET /api/gpu-pricing, /api/pulse-stack, /api/dataset/insights
- POST /api/ai-query (anonymous IP limit)

Responses include X-RateLimit-Remaining and X-RateLimit-Reset headers. If you're building a high-frequency integration, contact hello@gridstackhub.ai for a dedicated arrangement.

All prices are sourced directly from provider pricing pages and normalized to per-GPU hourly rates for consistent comparison. Data is refreshed daily via automated scraping.
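Honoring the limits is straightforward: read the two documented headers off each response before deciding whether to retry. A sketch (the header names are from the docs; the dict below stands in for real response headers):

```python
def parse_rate_limit(headers):
    """Extract the documented rate-limit headers from a response."""
    remaining = int(headers.get("X-RateLimit-Remaining", 0))
    reset = headers.get("X-RateLimit-Reset")
    return remaining, reset

# Placeholder values standing in for a real response's headers:
headers = {"X-RateLimit-Remaining": "19", "X-RateLimit-Reset": "1735689600"}
remaining, reset = parse_rate_limit(headers)
if remaining == 0:
    print(f"Back off until {reset}")
```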
GridStackHub is built for AI agents. We publish discovery files in every standard format so your agent can find, understand, and query our data without manual setup.
These features are in active development. Sign up to get notified when they launch.
Authenticated access with personal API keys. Higher rate limits, usage tracking, and programmatic access to Pro data.
Subscribe to price changes on specific GPU/provider combinations. Your endpoint gets notified when prices move beyond a threshold.
Download the full pricing dataset as JSON or CSV. Historical snapshots going back to launch. Updated daily, delivered to your S3 bucket or signed URL.