Honeycomb MCP Server
A Model Context Protocol server for interacting with Honeycomb observability data. This server enables LLMs like Claude to directly analyze and query your Honeycomb datasets across multiple environments.
Honeycomb Enterprise Only
Currently, this is only available for Honeycomb Enterprise customers.
Installation
The build artifact goes into the `/build` folder.
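To produce it, install dependencies and run the build. A minimal sketch, assuming pnpm as the package manager and a standard `build` script (substitute npm or yarn as appropriate):

```sh
# Install dependencies and compile the server into /build.
# The package-manager choice and script name are assumptions,
# not confirmed by this README.
pnpm install
pnpm run build
```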
Honeycomb Configuration
Create a configuration file at `.mcp-honeycomb.json` in one of these locations:
- Current directory (recommended)
- Home directory (`~/` on macOS, for example)
- Custom location specified by the `MCP_HONEYCOMB_CONFIG` environment variable
Example configuration:
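The shape below is a sketch based on the environment-and-API-key model this server uses; the exact field names are assumptions, so check the project's own documentation:

```json
{
  "environments": [
    { "name": "production", "apiKey": "your_production_api_key" },
    { "name": "staging", "apiKey": "your_staging_api_key" }
  ]
}
```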
This configuration is required to use the Honeycomb MCP Server.
MCP Configuration
You'll need to run `node` against the build artifact and specify the location of your Honeycomb MCP config.
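A sketch of what this might look like in a client's MCP configuration; the server key, build filename, and paths are illustrative assumptions:

```json
{
  "mcpServers": {
    "honeycomb": {
      "command": "node",
      "args": ["/absolute/path/to/hny-mcp/build/index.mjs"],
      "env": {
        "MCP_HONEYCOMB_CONFIG": "/absolute/path/to/.mcp-honeycomb.json"
      }
    }
  }
}
```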
You can omit the `env` section if you have a more central installation, but it's recommended to have a config file per codebase.
The above configuration has been tested with several MCP clients and will likely work with others.
Features
- Query Honeycomb datasets across multiple environments
- Run analytics queries with support for:
  - Multiple calculation types (COUNT, AVG, P95, etc.)
  - Breakdowns and filters
  - Time-based analysis
- Monitor SLOs and their status (Enterprise only)
- Analyze columns and data patterns
- View and analyze Triggers
- Access dataset metadata and schema information
Resources
Access Honeycomb datasets using URIs in the format `honeycomb://{environment}/{dataset}`. For example:
- `honeycomb://production/api-requests`
- `honeycomb://staging/backend-services`
The resource response includes:
- Dataset name
- Column information (name, type, description)
- Schema details
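A resource read might return a payload shaped like this sketch; the field names are assumptions derived from the list above, not a confirmed schema:

```json
{
  "name": "api-requests",
  "description": "Spans for public API requests",
  "columns": [
    {
      "name": "duration_ms",
      "type": "float",
      "description": "Request duration in milliseconds"
    }
  ]
}
```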
Tools
- `list_datasets`: List all datasets in an environment
- `get_columns`: Get column information for a dataset
- `run_query`: Run analytics queries with rich options
- `analyze_column`: Get statistical analysis of a column
- `list_slos`: List all SLOs for a dataset
- `get_slo`: Get detailed SLO information
- `list_triggers`: List all triggers for a dataset
- `get_trigger`: Get detailed trigger information
Example Queries with Claude
Ask Claude things like:
- "What datasets are available in the production environment?"
- "Show me the P95 latency for the API service over the last hour"
- "What's the error rate broken down by service name?"
- "Are there any SLOs close to breaching their budget?"
- "Show me all active triggers in the staging environment"
- "What columns are available in the production API dataset?"
Optimized Tool Responses
All tool responses are optimized to reduce context window usage while maintaining essential information:
- List datasets: Returns only name, slug, and description
- Get columns: Returns streamlined column information focusing on name, type, and description
- Run query:
  - Includes actual results and necessary metadata
  - Adds automatically calculated summary statistics
  - Only includes series data for heatmap queries
  - Omits verbose metadata, links, and execution details
- Analyze column:
  - Returns top values, counts, and key statistics
  - Automatically calculates numeric metrics when appropriate
- SLO information: Streamlined to key status indicators and performance metrics
- Trigger information: Focused on trigger status, conditions, and notification targets
This optimization ensures that responses are concise but complete, allowing LLMs to process more data within context limitations.
Query Specification for run_query
The `run_query` tool supports a comprehensive query specification:
- calculations: Array of operations to perform
  - Supported operations: COUNT, CONCURRENCY, COUNT_DISTINCT, HEATMAP, SUM, AVG, MAX, MIN, P001, P01, P05, P10, P25, P50, P75, P90, P95, P99, P999, RATE_AVG, RATE_SUM, RATE_MAX
  - Some operations, like COUNT and CONCURRENCY, don't require a column
  - Example: `{"op": "HEATMAP", "column": "duration_ms"}`
- filters: Array of filter conditions
  - Supported operators: =, !=, >, >=, <, <=, starts-with, does-not-start-with, exists, does-not-exist, contains, does-not-contain, in, not-in
  - Example: `{"column": "error", "op": "=", "value": true}`
- filter_combination: "AND" or "OR" (default is "AND")
- breakdowns: Array of columns to group results by
  - Example: `["service.name", "http.status_code"]`
- orders: Array specifying how to sort results
  - Must reference columns from breakdowns or calculations
  - The HEATMAP operation cannot be used in orders
  - Example: `{"op": "COUNT", "order": "descending"}`
- time_range: Relative time range in seconds (e.g., 3600 for the last hour)
  - Can be combined with either start_time or end_time, but not both
- start_time and end_time: UNIX timestamps for absolute time ranges
- having: Filter results based on calculation values
  - Example: `{"calculate_op": "COUNT", "op": ">", "value": 100}`
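Putting these fields together, a complete query payload might look like the sketch below. The top-level `environment` and `dataset` arguments, along with the column names, are illustrative assumptions:

```json
{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    { "op": "COUNT" },
    { "op": "P95", "column": "duration_ms" }
  ],
  "filters": [
    { "column": "error", "op": "=", "value": true }
  ],
  "filter_combination": "AND",
  "breakdowns": ["service.name"],
  "orders": [
    { "op": "COUNT", "order": "descending" }
  ],
  "time_range": 3600
}
```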
Example Queries
Here are some real-world example queries:
Find Slow API Calls
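A sketch of one way to express this query (the dataset, column, and threshold are illustrative):

```json
{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    { "op": "P95", "column": "duration_ms" }
  ],
  "filters": [
    { "column": "duration_ms", "op": ">", "value": 1000 }
  ],
  "breakdowns": ["http.route"],
  "orders": [
    { "op": "P95", "column": "duration_ms", "order": "descending" }
  ],
  "time_range": 3600
}
```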
Distribution of DB Calls (Last Week)
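Another sketch: a HEATMAP calculation over a week-long relative window (604800 seconds), filtered to spans that have a database statement. The dataset and column names are assumptions:

```json
{
  "environment": "production",
  "dataset": "backend-services",
  "calculations": [
    { "op": "HEATMAP", "column": "duration_ms" }
  ],
  "filters": [
    { "column": "db.statement", "op": "exists" }
  ],
  "time_range": 604800
}
```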
Exception Count by Exception and Caller
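A sketch using COUNT broken down by the exception and its caller; the `exception.message` and `parent_name` columns are assumptions about how the dataset is instrumented:

```json
{
  "environment": "production",
  "dataset": "api-requests",
  "calculations": [
    { "op": "COUNT" }
  ],
  "filters": [
    { "column": "exception.message", "op": "exists" }
  ],
  "breakdowns": ["exception.message", "parent_name"],
  "orders": [
    { "op": "COUNT", "order": "descending" }
  ],
  "time_range": 86400
}
```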
Development
Requirements
- Node.js 16+
- Honeycomb API keys with appropriate permissions:
  - Query access for analytics
  - Read access for SLOs and Triggers
  - Environment-level access for dataset operations
License
MIT