Honeycomb MCP
A Model Context Protocol server for interacting with Honeycomb observability data. This server enables LLMs like Claude to directly analyze and query your Honeycomb datasets across multiple environments.
Requirements
Node.js 18+
Honeycomb API key with full permissions:
Query access for analytics
Read access for SLOs and Triggers
Environment-level access for dataset operations
Honeycomb MCP is effectively a complete alternative interface to Honeycomb, and thus you need broad permissions for the API.
Honeycomb Enterprise Only
Currently, this is only available for Honeycomb Enterprise customers.
How it works
Today, this is a single server process that you must run on your own computer. It is not authenticated. All communication between your client and the server happens over STDIO.
Installation
The build artifact goes into the /build folder.
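The exact steps depend on your checkout, but a typical install-and-build flow looks like the following sketch (the repository URL and the use of pnpm are assumptions; substitute your actual clone path and package manager):

```shell
# Clone the repository and enter it (URL is illustrative)
git clone https://github.com/honeycombio/honeycomb-mcp.git
cd honeycomb-mcp

# Install dependencies and build; output lands in /build
pnpm install
pnpm run build
```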
Configuration
To use this MCP server, you need to provide Honeycomb API keys via environment variables in your MCP config.
For multiple environments:
Important: These environment variables must be set in the env block of your MCP config.
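As a sketch, a Claude Desktop-style MCP config entry might look like this (the server path and the exact environment-variable names are assumptions; check the project's configuration documentation for the authoritative names):

```json
{
  "mcpServers": {
    "honeycomb": {
      "command": "node",
      "args": ["/path/to/honeycomb-mcp/build/index.mjs"],
      "env": {
        "HONEYCOMB_API_KEY": "your-api-key"
      }
    }
  }
}
```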
EU Configuration
EU customers must also set a HONEYCOMB_API_ENDPOINT configuration, since the MCP defaults to the non-EU instance.
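For example, the env block would gain one extra entry (verify the exact EU endpoint URL against Honeycomb's documentation):

```json
{
  "env": {
    "HONEYCOMB_API_KEY": "your-api-key",
    "HONEYCOMB_API_ENDPOINT": "https://api.eu1.honeycomb.io"
  }
}
```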
Caching Configuration
The MCP server implements caching for all non-query Honeycomb API calls to improve performance and reduce API usage. Caching can be configured using these environment variables:
Client compatibility
Honeycomb MCP has been tested with the following clients:
It will likely work with other clients.
Features
Query Honeycomb datasets across multiple environments
Run analytics queries with support for:
Multiple calculation types (COUNT, AVG, P95, etc.)
Breakdowns and filters
Time-based analysis
Monitor SLOs and their status (Enterprise only)
Analyze columns and data patterns
View and analyze Triggers
Access dataset metadata and schema information
Optimized performance with TTL-based caching for all non-query API calls
Resources
Access Honeycomb datasets using URIs in the format:
honeycomb://{environment}/{dataset}
For example:
honeycomb://production/api-requests
honeycomb://staging/backend-services
The resource response includes:
Dataset name
Column information (name, type, description)
Schema details
Tools
list_datasets: List all datasets in an environment

```json
{ "environment": "production" }
```

get_columns: Get column information for a dataset

```json
{ "environment": "production", "dataset": "api-requests" }
```

run_query: Run analytics queries with rich options

```json
{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    { "op": "COUNT" },
    { "op": "P95", "column": "duration_ms" }
  ],
  "breakdowns": ["service.name"],
  "time_range": 3600
}
```

analyze_columns: Analyzes specific columns in a dataset by running statistical queries and returning computed metrics.

list_slos: List all SLOs for a dataset

```json
{ "environment": "production", "dataset": "api-requests" }
```

get_slo: Get detailed SLO information

```json
{ "environment": "production", "dataset": "api-requests", "sloId": "abc123" }
```

list_triggers: List all triggers for a dataset

```json
{ "environment": "production", "dataset": "api-requests" }
```

get_trigger: Get detailed trigger information

```json
{ "environment": "production", "dataset": "api-requests", "triggerId": "xyz789" }
```

get_trace_link: Generate a deep link to a specific trace in the Honeycomb UI

get_instrumentation_help: Provides OpenTelemetry instrumentation guidance

```json
{ "language": "python", "filepath": "app/services/payment_processor.py" }
```
Example Queries with Claude
Ask Claude things like:
"What datasets are available in the production environment?"
"Show me the P95 latency for the API service over the last hour"
"What's the error rate broken down by service name?"
"Are there any SLOs close to breaching their budget?"
"Show me all active triggers in the staging environment"
"What columns are available in the production API dataset?"
Optimized Tool Responses
All tool responses are optimized to reduce context window usage while maintaining essential information:
List datasets: Returns only name, slug, and description
Get columns: Returns streamlined column information focusing on name, type, and description
Run query:
Includes actual results and necessary metadata
Adds automatically calculated summary statistics
Only includes series data for heatmap queries
Omits verbose metadata, links and execution details
Analyze column:
Returns top values, counts, and key statistics
Automatically calculates numeric metrics when appropriate
SLO information: Streamlined to key status indicators and performance metrics
Trigger information: Focused on trigger status, conditions, and notification targets
This optimization ensures that responses are concise but complete, allowing LLMs to process more data within context limitations.
Query Specification for run_query
The run_query tool supports a comprehensive query specification:
calculations: Array of operations to perform
Supported operations: COUNT, CONCURRENCY, COUNT_DISTINCT, HEATMAP, SUM, AVG, MAX, MIN, P001, P01, P05, P10, P25, P50, P75, P90, P95, P99, P999, RATE_AVG, RATE_SUM, RATE_MAX
Some operations like COUNT and CONCURRENCY don't require a column
Example:
{"op": "HEATMAP", "column": "duration_ms"}
filters: Array of filter conditions
Supported operators: =, !=, >, >=, <, <=, starts-with, does-not-start-with, exists, does-not-exist, contains, does-not-contain, in, not-in
Example:
{"column": "error", "op": "=", "value": true}
filter_combination: "AND" or "OR" (default is "AND")
breakdowns: Array of columns to group results by
Example:
["service.name", "http.status_code"]
orders: Array specifying how to sort results
Must reference columns from breakdowns or calculations
HEATMAP operation cannot be used in orders
Example:
{"op": "COUNT", "order": "descending"}
time_range: Relative time range in seconds (e.g., 3600 for last hour)
Can be combined with either start_time or end_time but not both
start_time and end_time: UNIX timestamps for absolute time ranges
having: Filter results based on calculation values
Example:
{"calculate_op": "COUNT", "op": ">", "value": 100}
Example Queries
Here are some real-world example queries:
Find Slow API Calls
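The original query body is not included here, but using the run_query specification above, a query along these lines would surface slow calls (the dataset name and the duration_ms and http.route columns are illustrative assumptions):

```json
{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [{ "op": "P95", "column": "duration_ms" }],
  "filters": [{ "column": "duration_ms", "op": ">", "value": 1000 }],
  "breakdowns": ["http.route"],
  "orders": [{ "op": "P95", "column": "duration_ms", "order": "descending" }],
  "time_range": 3600
}
```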
Distribution of DB Calls (Last Week)
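A sketch of this query using the spec above: a HEATMAP over a duration column, filtered to spans that have a database statement, over the last week (604800 seconds). The column names are illustrative assumptions:

```json
{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [{ "op": "HEATMAP", "column": "duration_ms" }],
  "filters": [{ "column": "db.statement", "op": "exists" }],
  "time_range": 604800
}
```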
Exception Count by Exception and Caller
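A sketch of this query using the spec above: COUNT broken down by exception message and calling span, sorted by frequency. The exception.message and parent_name columns are illustrative assumptions:

```json
{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [{ "op": "COUNT" }],
  "filters": [{ "column": "exception.message", "op": "exists" }],
  "breakdowns": ["exception.message", "parent_name"],
  "orders": [{ "op": "COUNT", "order": "descending" }],
  "time_range": 86400
}
```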
Development
License
MIT