Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Artificial Analysis MCP Server List the top 5 models with the highest intelligence index"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Artificial Analysis MCP Server
An MCP (Model Context Protocol) server that provides LLM model pricing, speed metrics, and benchmark scores from Artificial Analysis.
Features
Get real-time pricing for 300+ LLM models (input/output/blended rates)
Compare speed metrics (tokens/sec, time to first token)
Access benchmark scores (Intelligence Index, Coding Index, MMLU-Pro, GPQA, and more)
Filter by provider (OpenAI, Anthropic, Google, etc.)
Sort by any metric
Installation
Claude Code
Or install from GitHub:
Manual Configuration
Add to your Claude settings (~/.claude/settings.json):
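The original configuration snippet is not shown here, so the following is a sketch of what a typical MCP server entry in a Claude settings file looks like. The server name, command, package name, and environment-variable name are all assumptions; substitute the actual values from this repository.

```json
{
  "mcpServers": {
    "artificial-analysis": {
      "command": "npx",
      "args": ["-y", "artificial-analysis-mcp-server"],
      "env": {
        "ARTIFICIAL_ANALYSIS_API_KEY": "<your-api-key>"
      }
    }
  }
}
```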
Configuration
| Environment Variable | Required | Description |
| --- | --- | --- |
|  | Yes | Your Artificial Analysis API key |
Get your API key at artificialanalysis.ai.
Tools
list_models
List all available LLM models with optional filtering and sorting.
Parameters:
| Name | Type | Required | Description |
| --- | --- | --- | --- |
|  | string | No | Filter by model creator (e.g., "OpenAI", "Anthropic") |
|  | string | No | Sort field (see below) |
|  | string | No | "asc" or "desc" (default: "desc") |
|  | number | No | Maximum results to return |
Sort fields: price_input, price_output, price_blended, speed, ttft, intelligence_index, coding_index, math_index, mmlu_pro, gpqa, release_date
Example usage:
"List the top 5 fastest models"
"Show me Anthropic models sorted by price"
"What are the cheapest models with high intelligence scores?"
get_model
Get detailed information about a specific model.
Parameters:
| Name | Type | Required | Description |
| --- | --- | --- | --- |
|  | string | Yes | Model name or slug (e.g., "gpt-4o", "claude-4-5-sonnet") |
Returns: Complete model details including pricing, speed metrics, and all benchmark scores.
Example usage:
"Get pricing for GPT-4o"
"What are Claude 4.5 Sonnet's benchmark scores?"
Model Data
Each model includes:
Pricing: Input/output/blended rates per 1M tokens (USD)
Speed: Output tokens per second, time to first token
Benchmarks: Intelligence Index, Coding Index, Math Index, MMLU-Pro, GPQA, LiveCodeBench, and more
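The blended rate above is a weighted average of the input and output prices; Artificial Analysis reports blended prices at a 3:1 input:output token ratio (treat the exact weighting as an assumption, and the prices below as placeholders). A minimal sketch:

```python
def blended_price(price_input, price_output, input_ratio=3, output_ratio=1):
    """Weighted average price per 1M tokens at the given input:output token mix."""
    total = input_ratio + output_ratio
    return (price_input * input_ratio + price_output * output_ratio) / total

# Placeholder prices: $2/1M input tokens, $10/1M output tokens.
print(blended_price(2.0, 10.0))  # → 4.0
```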
Development
License
MIT