
atom-mcp-server

by A7OM-AI

Search AI Models

search_models
Read-only · Idempotent

Find AI inference models across 40+ vendors and 1,600+ SKUs. Filter by price, modality, context window, and vendor to compare specifications and pricing between open-source and commercial options.

Instructions

Search and filter AI inference models across 40+ vendors and 1,600+ SKUs.

Query by modality (Text, Image, Audio, Video, Multimodal), vendor, creator, model family, open-source status, price range, context window, and parameter count.

Returns matching models with pricing. Free tier shows count + price range; paid tier shows full details.

Examples:

  • "Find open-source text models under $1/M tokens" → open_source=true, modality="Text", max_price=0.001

  • "What multimodal models does Google offer?" → vendor="Google", modality="Multimodal"

  • "Models with 128K+ context window" → min_context_window=128000
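The example mappings above can be sketched as a small lookup helper. This is a hypothetical illustration, not part of the server: the `build_args` function and the query strings are invented here, while the argument names (`open_source`, `modality`, `max_price`, `vendor`, `min_context_window`) come from the tool's input schema. Note that the schema declares `open_source` as the string 'true'/'false', so the sketch uses strings even though the prose example writes `open_source=true`.

```python
# Hypothetical sketch: mapping the example queries above to search_models
# argument dicts. Only the parameter names are taken from the schema;
# everything else (function name, query keys) is illustrative.

def build_args(query: str) -> dict:
    """Return the search_models arguments for a few known example queries."""
    mappings = {
        "open-source text models under $1/M tokens": {
            "open_source": "true",   # schema expects the string 'true'/'false'
            "modality": "Text",
            "max_price": 0.001,      # normalized price, as in the example above
        },
        "multimodal models from Google": {
            "vendor": "Google",
            "modality": "Multimodal",
        },
        "models with 128K+ context": {
            "min_context_window": 128000,
        },
    }
    return mappings[query]

args = build_args("models with 128K+ context")
# args == {"min_context_window": 128000}
```

In a real MCP client these dicts would be passed as the `arguments` payload of a `tools/call` request for `search_models`.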

Input Schema

Name                 Required  Description                                                                   Default
modality             No        Filter by modality: Text, Image, Audio, Video, Voice, Multimodal, Embedding   —
vendor               No        Filter by vendor name, e.g. 'OpenAI', 'Anthropic'                             —
creator              No        Filter by model creator/developer                                             —
model_family         No        Filter by model family, e.g. 'GPT-4o', 'Claude 3.5'                           —
open_source          No        Filter by open-source status: 'true' or 'false'                               —
direction            No        Filter by pricing direction                                                   —
max_price            No        Maximum normalized price (USD per unit)                                       —
min_context_window   No        Minimum context window in tokens                                              —
min_parameter_count  No        Minimum parameter count, e.g. '7B', '70B'                                     —
limit                No        Maximum results to return                                                     20
offset               No        Offset for pagination                                                         —
_atom_api_key        No        Your ATOM API key for full access. Omit for free tier (redacted data).        —
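The `limit` and `offset` parameters combine in the usual way for pagination. A minimal sketch, assuming a caller that stops once a page comes back short (the `paged_requests` generator and filter dict are invented here; only `limit` and `offset` are from the schema):

```python
# Sketch: generating successive search_models argument dicts for pagination.
# The actual transport (an MCP client call) is out of scope here; this only
# shows how limit/offset advance per page.

def paged_requests(filters: dict, limit: int = 20):
    """Yield argument dicts for page 1, page 2, ... of a search."""
    offset = 0
    while True:
        yield {**filters, "limit": limit, "offset": offset}
        offset += limit

pages = paged_requests({"modality": "Text"}, limit=20)
first = next(pages)   # {"modality": "Text", "limit": 20, "offset": 0}
second = next(pages)  # {"modality": "Text", "limit": 20, "offset": 20}
```

The caller is responsible for stopping iteration when a response contains fewer than `limit` results.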
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context not in annotations: it discloses the tiered return behavior ('Free tier shows count + price range; paid tier shows full details') and clarifies that pricing data is included in results. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Structure is optimally front-loaded: scope (vendors/SKUs), capabilities (query dimensions), return value explanation, tier limitations, then concrete examples. Every sentence serves a distinct purpose. No redundant text. The arrow notation in examples ('→') efficiently maps natural language to parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description compensates adequately by explaining what gets returned ('matching models with pricing') and detailing the tier-based response differences (free vs. paid). Given the 12-parameter complexity and 100% schema coverage, this is sufficient for an agent to invoke the tool, though slightly more detail on 'full details' content would warrant a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description elevates this by providing practical examples demonstrating parameter combinations (e.g., open_source=true with modality='Text'), which adds real-world usage context beyond the raw schema definitions. It also summarizes the filterable dimensions in natural language before listing examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Search and filter') + specific resource ('AI inference models') + clear scope ('40+ vendors and 1,600+ SKUs'). It effectively distinguishes from siblings like 'get_model_detail' (retrieve specific record) and 'list_vendors' (list vendors not models) by emphasizing the search/filter capability across a broad catalog.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides three concrete, mapped examples showing query patterns ('Find open-source text models under $1/M tokens' → parameter mapping). This gives clear implicit guidance on when to use the tool. However, it lacks explicit 'when not to use' guidance; for example, it does not point agents to 'get_model_detail' when looking up a specific model by ID rather than searching.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
