Provides access to comprehensive model catalogs and pricing intelligence for AI inference services from 40+ vendors worldwide.
What Is This?
ATOM MCP Server lets any MCP-compatible AI agent (Claude, GPT, Cursor, Windsurf, VS Code Copilot) query real-time AI inference pricing data programmatically. Think of it as the Bloomberg Terminal for AI pricing, accessible via the Model Context Protocol.
Ask your AI assistant a question like "What's the cheapest way to run GPT-4o?" and it calls ATOM's tools behind the scenes, returning a data-backed answer from 1,600+ pricing SKUs across 40+ vendors globally.
Built by ATOM (A7OM) — the world's first methodological inference pricing index.
AIPI Indexes
14 benchmark indexes across four categories:
| Category | Indexes | What It Answers |
|---|---|---|
| Modality | TXT, MML, IMG, AUD, VID, VOC | What does this type of inference cost? |
| Channel | DEV, CLD, PLT, NCL | Where should you buy: direct, marketplace, platform, or neocloud? |
| Tier | FTR, BDG, RSN | What's the premium for flagship vs. budget vs. reasoning? |
| Special | OSS | How much cheaper is open-source inference across all channels? |
All indexes are global (GLB), calculated weekly using chained matched-model methodology to eliminate composition bias.
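The core idea of a chained matched-model index is to compare each week's prices only against models that were also priced the previous week, so models entering or leaving the catalog cannot skew the series. The sketch below is a generic illustration of that technique, not ATOM's actual formula; the geometric-mean link and the data shapes are assumptions.

```typescript
// Hypothetical sketch of a chained matched-model price index.
// Data shapes and the geometric-mean link are illustrative assumptions.
type WeekPrices = Record<string, number>; // model id -> $ per million tokens

function chainLinkRatio(prev: WeekPrices, curr: WeekPrices): number {
  // Compare only models present in BOTH weeks, so catalog churn
  // (composition bias) cannot move the index.
  const matched = Object.keys(curr).filter((m) => m in prev);
  if (matched.length === 0) return 1;
  const logSum = matched.reduce((s, m) => s + Math.log(curr[m] / prev[m]), 0);
  return Math.exp(logSum / matched.length); // geometric mean of price ratios
}

function chainedIndex(weeks: WeekPrices[], base = 100): number[] {
  // Chain the week-over-week links into a continuous index series.
  const out = [base];
  for (let i = 1; i < weeks.length; i++) {
    out.push(out[i - 1] * chainLinkRatio(weeks[i - 1], weeks[i]));
  }
  return out;
}
```

For example, if two matched models both halve in price while a new expensive model appears, the new model is ignored and the index falls from 100 to 50, reflecting only like-for-like price changes.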
Tools
| Tool | Tier | Description |
|---|---|---|
| | Free | All tracked vendors with country, region, channel type, and pricing page URLs |
| | Free | 6 market KPIs: output premium, caching savings, open-source advantage, context cost curve, caching availability, size spread |
| | Free | AIPI price benchmarks across 14 indexes — modality, channel, tier, and licensing |
| | Tiered | Aggregate market intelligence: medians, quartiles, distributions, modality breakdown |
| | Tiered | Multi-filter search: modality, vendor, creator, open-source, price range, context window, parameters |
| | Tiered | Full specs + pricing across all vendors for a single model |
| | Tiered | Cross-vendor price comparison for a model or model family |
| | Tiered | Complete catalog for a specific vendor: all models, modalities, and pricing |
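Under the Model Context Protocol, an agent invokes each of these tools with a JSON-RPC `tools/call` request. The request below shows the general shape only; the tool name (`search_models`) and argument fields are placeholders, since the server's actual tool names and schemas are what it advertises via `tools/list`.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_models",
    "arguments": { "modality": "text", "open_source": true, "max_price": 0.5 }
  }
}
```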
Pricing Tiers
| | ATOM MCP (Free) | ATOM MCP Pro ($49/mo) |
|---|---|---|
| Vendors, KPIs, AIPI indexes | ✅ Full data | ✅ Full data |
| Market stats | Aggregates only | + Vendor-level breakdown |
| Model search & comparison | Counts + price ranges | Full granular SKU data |
| Model detail | Specs only | + Per-vendor pricing |
| Vendor catalog | Summary only | Full SKU listing |
Free tier (no API key): Enough to understand the market — counts, ranges, distributions, benchmarks.
ATOM MCP Pro ($49/mo): Full granular data — every vendor, model, price, and spec. → a7om.com/mcp
Quick Start
Option 1: Remote URL — Claude.ai / Claude Desktop (recommended)
No install required. Connect directly to ATOM's hosted server:
Claude.ai (web): Settings → Connectors → Add custom connector
Name: ATOM Pricing Intelligence
URL: https://atom-mcp-server-production.up.railway.app/mcp

Claude Desktop: Settings → Developer → Edit Config
```json
{
  "mcpServers": {
    "atom-pricing": {
      "url": "https://atom-mcp-server-production.up.railway.app/mcp"
    }
  }
}
```

Note: Remote URL support requires a recent Claude Desktop version. If it doesn't work, use the npx method below.
Claude Desktop (via npx proxy):
```json
{
  "mcpServers": {
    "atom-pricing": {
      "command": "npx",
      "args": ["mcp-remote", "https://atom-mcp-server-production.up.railway.app/mcp"]
    }
  }
}
```

Option 2: Local (stdio) — for Cursor, Windsurf, etc.
```shell
git clone https://github.com/A7OM-AI/atom-mcp-server.git
cd atom-mcp-server
npm install && npm run build
```

Add to your MCP client config:
```json
{
  "mcpServers": {
    "atom-pricing": {
      "command": "node",
      "args": ["/path/to/atom-mcp-server/dist/index.js"],
      "env": {
        "SUPABASE_URL": "https://jonncmzxvxzwyaznokba.supabase.co",
        "SUPABASE_ANON_KEY": "your-anon-key"
      }
    }
  }
}
```

Option 3: Deploy your own (Railway)
Set environment variables in Railway dashboard:
- `SUPABASE_URL`
- `SUPABASE_ANON_KEY`
- `ATOM_API_KEYS` (comma-separated, for paid tier validation)
- `TRANSPORT=http`
Example Queries
Once connected, just ask your AI assistant naturally:
"What's the cheapest way to run GPT-4o?"
"Compare Claude Sonnet 4.5 pricing across all vendors"
"Find open-source text models under $0.50 per million tokens"
"Show me Google's full model catalog"
"What are the AIPI benchmark prices for text inference?"
"How do neocloud prices compare to cloud marketplaces?"
"How much cheaper is open-source inference?"
"Give me a market overview of AI inference pricing"
"What are the key market KPIs for AI inference?"
Environment Variables
| Variable | Required | Description |
|---|---|---|
| `SUPABASE_URL` | Yes | Supabase project URL |
| `SUPABASE_ANON_KEY` | Yes | Supabase anonymous/public key |
| `ATOM_API_KEYS` | No | Comma-separated valid API keys for paid tier |
| `TRANSPORT` | No | Set to `http` for HTTP transport |
| `PORT` | No | HTTP port (default 3000) |
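As a rough illustration of how the comma-separated `ATOM_API_KEYS` variable could gate the paid tier, here is a minimal sketch. This is an assumption about the mechanism, not a description of the server's actual implementation; the function name and tier labels are hypothetical.

```typescript
// Hypothetical tier resolution from a comma-separated ATOM_API_KEYS value.
// The function name and "free"/"pro" labels are illustrative.
function resolveTier(
  apiKeysEnv: string | undefined,
  requestKey: string | undefined,
): "free" | "pro" {
  const validKeys = (apiKeysEnv ?? "")
    .split(",")
    .map((k) => k.trim()) // tolerate "key1, key2" spacing
    .filter(Boolean);
  return requestKey && validKeys.includes(requestKey) ? "pro" : "free";
}
```

Requests without a key, or with a key not in the list, would simply fall back to the free tier rather than being rejected.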
Tech Stack
TypeScript / Node.js
MCP SDK (@modelcontextprotocol/sdk)
Supabase (PostgreSQL) via REST API
Express (HTTP transport)
Zod (schema validation)
About ATOM
ATOM tracks 1,600+ AI inference pricing SKUs from 40+ vendors globally through the AIPI (ATOM Inference Price Index) system — the first methodological price benchmark for AI inference. 14 indexes span modality, channel, tier, and licensing dimensions, updated weekly using chained matched-model methodology to eliminate composition bias.
Vendors are classified across four distribution channels: Model Developers (direct API), Cloud Marketplaces (AWS Bedrock, Google Vertex, Azure), Inference Platforms (DeepInfra, Fireworks, Together AI), and Neoclouds (Groq, Cerebras).
Products: ATOM MCP · ATOM Terminal · ATOM Feed
License
MIT