
atom-mcp-server

by A7OM-AI

Server Configuration

Describes the environment variables required to run the server.

Name               Required  Description                                     Default
PORT               No        HTTP port                                       3000
TRANSPORT          No        Transport protocol: 'stdio' or 'http'           stdio
SUPABASE_URL       Yes       Supabase project URL
ATOM_API_KEYS      No        Comma-separated valid API keys for paid tier
SUPABASE_ANON_KEY  Yes       Supabase anonymous/public key
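At startup, these variables can be validated in one place. A minimal sketch in Python (the `resolve_config` helper and its error handling are illustrative, not part of the server):

```python
import os

REQUIRED = ("SUPABASE_URL", "SUPABASE_ANON_KEY")
OPTIONAL_DEFAULTS = {"PORT": "3000", "TRANSPORT": "stdio"}

def resolve_config(env=None):
    """Collect the server's environment variables, applying defaults."""
    env = dict(os.environ if env is None else env)
    missing = [name for name in REQUIRED if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing required env vars: {', '.join(missing)}")
    config = {name: env[name] for name in REQUIRED}
    for name, default in OPTIONAL_DEFAULTS.items():
        config[name] = env.get(name, default)
    # ATOM_API_KEYS is comma-separated; an empty value means free tier only.
    config["ATOM_API_KEYS"] = [
        k.strip() for k in env.get("ATOM_API_KEYS", "").split(",") if k.strip()
    ]
    return config
```

Note that `ATOM_API_KEYS` is optional: without it the server still runs, but only free-tier responses are available.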

Capabilities

Features and capabilities supported by this server

Capability  Details
tools       { "listChanged": true }
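`listChanged: true` means the server may notify clients when its tool list changes, after which a client re-fetches the list with a standard MCP `tools/list` request. A minimal request body, per MCP's JSON-RPC framing (generic, not specific to this server):

```python
import json

def tools_list_request(request_id=1):
    """Build an MCP tools/list JSON-RPC request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
    })
```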

Tools

Functions exposed to the LLM to take actions

search_models

Search and filter AI inference models across 40+ vendors and 1,600+ SKUs.

Query by modality (Text, Image, Audio, Video, Multimodal), vendor, creator, model family, open-source status, price range, context window, and parameter count.

Returns matching models with pricing. Free tier shows count + price range; paid tier shows full details.

Examples:

  • "Find open-source text models under $1/M tokens" → open_source=true, modality="Text", max_price=0.001

  • "What multimodal models does Google offer?" → vendor="Google", modality="Multimodal"

  • "Models with 128K+ context window" → min_context_window=128000
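The examples above map directly onto MCP `tools/call` requests. A sketch of the first one (the argument names are taken from the examples above; the exact schema is defined by the server):

```python
import json

def call_tool_request(name, arguments, request_id=2):
    """Build an MCP tools/call JSON-RPC request for a named tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# "Find open-source text models under $1/M tokens"
request = call_tool_request("search_models", {
    "open_source": True,
    "modality": "Text",
    "max_price": 0.001,
})
```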

get_model_detail

Deep dive on a single AI model: technical specs + pricing across all vendors.

Returns model_registry data (context window, parameters, open-source status, training cutoff, model family) plus all SKU pricing across every vendor that offers this model.

Examples:

  • "Tell me everything about GPT-4o" → model_name="GPT-4o"

  • "Claude Sonnet 4.5 specs and pricing" → model_name="Claude Sonnet 4.5"

compare_prices

Cross-vendor price comparison for a specific model or model family.

Shows the same model (or family) priced across different vendors, sorted cheapest first. Essential for cost optimization and vendor selection.

Examples:

  • "Compare Llama 3.1 70B pricing across vendors" → model_name="Llama 3.1 70B"

  • "Cheapest GPT-4 family output pricing" → model_family="GPT-4", direction="Output"

  • "Claude pricing comparison" → model_family="Claude"
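The cheapest-first ordering compare_prices describes can also be reproduced client-side over returned SKU rows. A sketch with hypothetical field names and prices (not the server's actual response schema):

```python
def sort_cheapest_first(skus, direction="Output"):
    """Filter SKU rows to one price direction and sort cheapest first."""
    rows = [s for s in skus if s.get("direction") == direction]
    return sorted(rows, key=lambda s: s["price_per_m_tokens"])

skus = [
    {"vendor": "Vendor A", "direction": "Output", "price_per_m_tokens": 0.90},
    {"vendor": "Vendor B", "direction": "Output", "price_per_m_tokens": 0.60},
    {"vendor": "Vendor B", "direction": "Input", "price_per_m_tokens": 0.30},
]
cheapest = sort_cheapest_first(skus)[0]["vendor"]  # "Vendor B"
```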

get_vendor_catalog

Full catalog for a specific vendor: all models, modalities, and pricing.

Returns vendor metadata (country, region, pricing page URL) plus every model and SKU they offer.

Examples:

  • "What does Together AI sell?" → vendor="Together AI"

  • "OpenAI's text model pricing" → vendor="OpenAI", modality="Text"

  • "Amazon Bedrock catalog" → vendor="Amazon Bedrock"

get_market_stats

Aggregate AI inference market intelligence.

Returns total vendor/model/SKU counts, price distribution (median, mean, quartiles, min/max), and modality breakdown. Optionally filter by modality.

Examples:

  • "AI inference market overview" → (no params)

  • "Text model pricing statistics" → modality="Text"

  • "Image generation market stats" → modality="Image"
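The distribution figures get_market_stats returns (median, mean, quartiles, min/max) are standard summary statistics. Computed over a sample price list with Python's stdlib (illustrative only; ATOM's aggregation may differ):

```python
import statistics

def price_distribution(prices):
    """Summarize a price list: min, quartiles, median, mean, max."""
    q1, q2, q3 = statistics.quantiles(prices, n=4)  # quartile cut points
    return {
        "min": min(prices),
        "q1": q1,
        "median": q2,
        "mean": statistics.mean(prices),
        "q3": q3,
        "max": max(prices),
    }

stats = price_distribution([0.2, 0.5, 1.0, 2.0, 10.0])
```

The long right tail in the sample (mean well above median) is typical of inference pricing, where a few frontier SKUs cost far more than the bulk of the market.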

get_index_benchmarks

AIPI (ATOM Inference Price Index) — chained matched-model price benchmarks for AI inference.

Returns 14 benchmark indexes across four categories:

  • Modality (6): Text, Multimodal, Image, Audio, Video, Voice — what does this type of inference cost?

  • Channel (4): Model Developers, Cloud Marketplaces, Inference Platforms, Neoclouds — where should you buy?

  • Tier (3): Frontier, Budget, Reasoning — what's the premium for capability?

  • Special (1): Open-Source — how much cheaper is open-weight inference?

Each index includes input, cached input, and output pricing per period.

These are market-wide benchmarks, not individual vendor prices. Use them to understand where the market is and how it's moving.

Fully public — available to all tiers.

Examples:

  • "What's the current benchmark for text inference?" → index_category="Modality"

  • "Show me all AIPI indexes" → (no params)

  • "Neocloud pricing benchmark" → index_code="AIPI NCL GLB"

  • "Channel pricing comparison" → index_category="Channel"

  • "Open-source vs market pricing" → index_code="AIPI OSS GLB"
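"Chained matched-model" indexing is a standard price-index technique: each period links to the previous one via the average price ratio of models present in both periods, so models entering or leaving the catalog do not distort the series. A simplified sketch with a geometric-mean link (ATOM's actual AIPI methodology may differ):

```python
import statistics

def chained_index(period_prices, base=100.0):
    """Chain-link an index from a sequence of {model: price} dicts.

    Each link uses only models matched across consecutive periods.
    """
    index = [base]
    for prev, curr in zip(period_prices, period_prices[1:]):
        matched = set(prev) & set(curr)
        if not matched:
            index.append(index[-1])  # no overlap: carry the index forward
            continue
        ratio = statistics.geometric_mean(curr[m] / prev[m] for m in matched)
        index.append(index[-1] * ratio)
    return index

periods = [
    {"model-a": 2.0, "model-b": 4.0},
    {"model-a": 1.0, "model-b": 4.0, "model-c": 9.0},  # model-c is new: excluded from this link
    {"model-b": 2.0, "model-c": 9.0},                  # model-a dropped: matched set shrinks
]
series = chained_index(periods)
```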

get_kpis

ATOM Inference Price Index (AIPI) market-level KPIs.

Returns 6 key performance indicators derived from live pricing data:

  • Output Premium: how much more output tokens cost vs input

  • Caching Savings: average discount for cached input pricing

  • Open Source Advantage: price difference between open-source and proprietary

  • Context Cost Curve: price multiplier for larger context windows

  • Caching Availability: % of models offering cached pricing

  • Size Spread: price ratio between largest and smallest models

These KPIs are available to all tiers — they demonstrate ATOM's market intelligence.
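Two of these KPIs are simple ratios over SKU prices. A sketch with hypothetical numbers (the exact aggregation behind get_kpis is not specified here):

```python
def output_premium(input_price, output_price):
    """How much more output tokens cost vs input, as a multiplier."""
    return output_price / input_price

def caching_savings(input_price, cached_input_price):
    """Discount for cached input pricing, as a fraction of the input price."""
    return 1.0 - cached_input_price / input_price

premium = output_premium(2.50, 10.00)  # 4.0x output premium
savings = caching_savings(2.50, 1.25)  # 0.5, i.e. a 50% caching discount
```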

list_vendors

List all AI inference vendors tracked by ATOM.

Returns vendor name, country, region, and pricing page URL. Vendors span four channel types: Model Developers, Cloud Marketplaces, Inference Platforms, and Neoclouds. Optionally filter by region or country.

Examples:

  • "List all vendors" → (no params)

  • "European AI vendors" → region="Europe"

  • "Chinese AI vendors" → country="China"

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/A7OM-AI/atom-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.