tokencost-dev
by atriumn

Server Configuration

Describes the environment variables required to run the server.

No arguments

Capabilities

Features and capabilities supported by this server

Capability | Details
tools | {}

Tools

Functions exposed to the LLM to take actions

get_model_details

Look up pricing, context window, and capabilities for an LLM model. Uses fuzzy matching so you don't need the exact model key.
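
The fuzzy matching could work along these lines. This is an illustrative sketch using Python's difflib with an invented pricing table; the field names and prices are assumptions, and the server's real data comes from the LiteLLM registry.

```python
import difflib

# Hypothetical pricing table; the real server loads this from the LiteLLM registry.
MODELS = {
    "gpt-4o": {"input_cost_per_token": 2.5e-06, "max_input_tokens": 128000},
    "gpt-4o-mini": {"input_cost_per_token": 1.5e-07, "max_input_tokens": 128000},
    "claude-3-5-sonnet-20241022": {"input_cost_per_token": 3e-06, "max_input_tokens": 200000},
}

def get_model_details(query: str) -> dict:
    # Closest key wins, so a query like "gpt4o" resolves without the exact key.
    match = difflib.get_close_matches(query, list(MODELS), n=1, cutoff=0.4)
    if not match:
        raise KeyError(f"no model resembling {query!r}")
    return {"model": match[0], **MODELS[match[0]]}
```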

calculate_estimate

Estimate the cost for a given number of input and output tokens on a specific model. Supports optional cached_tokens for prompt caching discounts.
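
The arithmetic behind such an estimate is simple; this sketch uses invented per-token rates and an assumed 50% caching discount (real discounts vary by provider), so only the shape of the calculation reflects the tool.

```python
def calculate_estimate(input_tokens, output_tokens, *, input_cost, output_cost,
                       cached_tokens=0, cached_cost=None):
    """Estimate USD cost; cached_tokens are billed at a discounted rate."""
    if cached_cost is None:
        cached_cost = input_cost / 2  # assumed discount, not a universal rule
    uncached = input_tokens - cached_tokens
    return uncached * input_cost + cached_tokens * cached_cost + output_tokens * output_cost

# Hypothetical rates: $2.50 / $10.00 per million input / output tokens.
cost = calculate_estimate(10_000, 2_000, input_cost=2.5e-06,
                          output_cost=1e-05, cached_tokens=4_000)
```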

compare_models

Filter and compare models by provider, minimum context window, or mode. Returns top 5 most cost-effective matches.
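
A filter-then-rank pass like the following would produce that result; the catalogue rows, field names, and "cost-effective = cheapest input rate" ranking are all assumptions for illustration.

```python
# Invented catalogue rows; the real data comes from the LiteLLM registry.
CATALOGUE = [
    {"model": "gpt-4o", "provider": "openai", "context": 128000, "mode": "chat", "input_cost": 2.5e-06},
    {"model": "gpt-4o-mini", "provider": "openai", "context": 128000, "mode": "chat", "input_cost": 1.5e-07},
    {"model": "claude-3-5-haiku", "provider": "anthropic", "context": 200000, "mode": "chat", "input_cost": 8e-07},
    {"model": "text-embedding-3-small", "provider": "openai", "context": 8191, "mode": "embedding", "input_cost": 2e-08},
]

def compare_models(provider=None, min_context=0, mode=None, top_n=5):
    # Keep rows passing every given filter, then rank cheapest-first.
    hits = [m for m in CATALOGUE
            if (provider is None or m["provider"] == provider)
            and m["context"] >= min_context
            and (mode is None or m["mode"] == mode)]
    return sorted(hits, key=lambda m: m["input_cost"])[:top_n]
```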

refresh_prices

Force a re-fetch of pricing data from the LiteLLM registry. Use this if you suspect the cached data is stale.
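
The stale-cache behaviour this implies can be sketched as a TTL cache with a force flag; the class name, default TTL, and fetch hook are assumptions, and in the real server the fetch callable would download the LiteLLM registry JSON.

```python
import time

class PriceCache:
    """Cache pricing data for ttl_seconds; get(force=True) re-fetches unconditionally."""
    def __init__(self, fetch, ttl_seconds=3600):
        self._fetch = fetch        # callable returning the pricing dict
        self._ttl = ttl_seconds
        self._data = None
        self._fetched_at = 0.0

    def get(self, force=False):
        stale = time.monotonic() - self._fetched_at > self._ttl
        if force or self._data is None or stale:
            self._data = self._fetch()
            self._fetched_at = time.monotonic()
        return self._data
```

Calling `cache.get(force=True)` mirrors what refresh_prices does: it bypasses the cached copy regardless of age.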

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/atriumn/tokencost-dev'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.