## Server Configuration
Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
## Capabilities

Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | `{"listChanged": true}` |
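The `listChanged: true` capability means the server will notify clients when its tool list changes (the notification method `notifications/tools/list_changed` is defined by the MCP specification). A minimal sketch of how a client might check for this capability; the helper name is illustrative, not part of this server's API:

```python
# Capabilities object as advertised by this server (from the table above).
capabilities = {"tools": {"listChanged": True}}

def supports_tool_list_notifications(caps: dict) -> bool:
    """Return True if the server emits notifications/tools/list_changed,
    signalling that clients should re-fetch the tool list when notified."""
    return bool(caps.get("tools", {}).get("listChanged"))

print(supports_tool_list_notifications(capabilities))  # True
```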
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| tokencost_get_model_pricing | Get pricing details for a specific LLM model. Returns model pricing details including input/output costs per 1M tokens, context window, and max output; if the model is not found, returns an error message with suggestions for similar models. |
| tokencost_compare_models | Compare pricing across multiple LLM models side by side. Returns a side-by-side comparison table with input/output costs, context windows, and relative cost differences. |
| tokencost_estimate_cost | Calculate the cost of a given number of input and output tokens with a specific model. Returns a cost breakdown with input cost, output cost, and total cost in USD. |
| tokencost_find_cheapest | Find the cheapest LLM models, optionally filtered by provider or minimum context window. Returns a ranked list of the cheapest models with pricing details. |
| tokencost_list_models | List all available LLM models with pricing data, optionally filtered by provider. Returns a list of all models with IDs, names, and providers; use the model IDs with the other tools. |
| tokencost_list_providers | List all LLM providers with model counts and pricing ranges. Returns all providers with the number of models and the pricing range for each. |
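The arithmetic behind `tokencost_estimate_cost` can be sketched as follows. The prices below are illustrative placeholders, not real model pricing; actual per-1M-token rates come from `tokencost_get_model_pricing`:

```python
# Assumed per-1M-token prices for illustration only.
INPUT_PRICE_PER_1M = 3.00    # USD per 1M input tokens (hypothetical)
OUTPUT_PRICE_PER_1M = 15.00  # USD per 1M output tokens (hypothetical)

def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_1m: float,
                  output_price_per_1m: float) -> dict:
    """Mirror the breakdown tokencost_estimate_cost returns:
    input cost, output cost, and total cost in USD."""
    input_cost = input_tokens / 1_000_000 * input_price_per_1m
    output_cost = output_tokens / 1_000_000 * output_price_per_1m
    return {
        "input_cost_usd": round(input_cost, 6),
        "output_cost_usd": round(output_cost, 6),
        "total_cost_usd": round(input_cost + output_cost, 6),
    }

breakdown = estimate_cost(10_000, 2_000,
                          INPUT_PRICE_PER_1M, OUTPUT_PRICE_PER_1M)
print(breakdown)
# {'input_cost_usd': 0.03, 'output_cost_usd': 0.03, 'total_cost_usd': 0.06}
```

Costs scale linearly with token counts, so a prompt ten times larger under the same assumed rates simply costs ten times more.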
## Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| No prompts | |
## Resources

Contextual data attached to and managed by the client.
| Name | Description |
|---|---|
| No resources | |