VictoriaMetrics MCP Server
The implementation of Model Context Protocol (MCP) server for VictoriaMetrics.
This provides access to your VictoriaMetrics instance and seamless integration with VictoriaMetrics APIs and documentation. It gives you a comprehensive interface for monitoring, observability, and debugging tasks related to your VictoriaMetrics instances, and enables advanced automation and interaction capabilities for engineers and tools.
Features
This MCP server allows you to use almost all read-only APIs of VictoriaMetrics, i.e. all functions available in VMUI:
- Querying metrics and exploring data (even drawing graphs if your client supports it)
- Listing and exporting available metrics, labels, label values and entire series
- Analyzing your alerting and recording rules and alerts
- Showing parameters of your VictoriaMetrics instance
- Exploring cardinality of your data and metrics usage statistics
- Analyzing your queries
- Debugging your relabeling rules, downsampling and retention policy configurations
In addition, the MCP server contains embedded up-to-date documentation and is able to search it without online access.
More details about the exact available tools and prompts can be found in the Usage section.
You can combine the functionality of tools and docs search in your prompts and invent great usage scenarios for your VictoriaMetrics instance. Just check the Dialog example section to see how it can work. Please note that the quality of the MCP server and its responses depends very much on the capabilities of your client and the quality of the model you are using.
You can also combine the MCP server with other observability or doc search related MCP Servers and get even more powerful results.
Requirements
- VictoriaMetrics instance (single-node or cluster)
- Go 1.24 or higher (if you want to build from source)
Installation
Go
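A typical installation via go install, assuming the module lives at github.com/VictoriaMetrics-Community/mcp-victoriametrics (adjust the path if the repository location differs):

```bash
# Install the MCP server binary into $GOBIN (module path assumed)
go install github.com/VictoriaMetrics-Community/mcp-victoriametrics/cmd/mcp-victoriametrics@latest
```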
Source Code
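A build from source could look like this (repository URL and layout assumed; see the repository for the authoritative build steps):

```bash
# Clone the repository and build the binary from the cmd/ entrypoint (layout assumed)
git clone https://github.com/VictoriaMetrics-Community/mcp-victoriametrics.git
cd mcp-victoriametrics
go build -o bin/mcp-victoriametrics ./cmd/mcp-victoriametrics
```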
Binaries
Just download the latest release from the Releases page and put it somewhere in your PATH.
Docker
Coming soon...
Configuration
MCP Server for VictoriaMetrics is configured via environment variables:
Variable | Description | Required | Default | Allowed values |
---|---|---|---|---|
VM_INSTANCE_ENTRYPOINT | URL to VictoriaMetrics instance | Yes | - | - |
VM_INSTANCE_TYPE | Type of VictoriaMetrics instance | Yes | - | single, cluster |
VM_INSTANCE_BEARER_TOKEN | Authentication token for VictoriaMetrics API | No | - | - |
MCP_SERVER_MODE | Server operation mode | No | stdio | stdio, sse |
MCP_SSE_ADDR | Address for SSE server to listen on | No | localhost:8080 | - |
Configuration examples
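For example, a minimal stdio setup against a local single-node instance, and an SSE setup against a cluster behind authentication, could look like this (endpoints and token values are placeholders):

```bash
# Single-node instance, default stdio mode
export VM_INSTANCE_ENTRYPOINT="http://localhost:8428"
export VM_INSTANCE_TYPE="single"

# Cluster instance behind auth, exposed over SSE
export VM_INSTANCE_ENTRYPOINT="https://play.victoriametrics.com"
export VM_INSTANCE_TYPE="cluster"
export VM_INSTANCE_BEARER_TOKEN="<YOUR_TOKEN>"
export MCP_SERVER_MODE="sse"
export MCP_SSE_ADDR="0.0.0.0:8080"
```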
Setup in clients
Cursor
Go to: Settings -> Cursor Settings -> MCP -> Add new global MCP server and paste the following configuration into your Cursor ~/.cursor/mcp.json file:
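A minimal entry, assuming the binary is installed as mcp-victoriametrics and using placeholder values for your instance:

```json
{
  "mcpServers": {
    "victoriametrics": {
      "command": "/path/to/mcp-victoriametrics",
      "env": {
        "VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
        "VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
        "VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
      }
    }
  }
}
```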
See Cursor MCP docs for more info.
Claude Desktop
Add this to your Claude Desktop claude_desktop_config.json file (you can find it by opening Settings -> Developer -> Edit config):
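The entry uses the same mcpServers format (paths and values below are placeholders):

```json
{
  "mcpServers": {
    "victoriametrics": {
      "command": "/path/to/mcp-victoriametrics",
      "env": {
        "VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
        "VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
        "VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
      }
    }
  }
}
```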
See Claude Desktop MCP docs for more info.
Claude Code
Run the command:
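For example (server name, binary path, and values are placeholders; check the Claude Code docs below for the exact flag syntax):

```sh
claude mcp add victoriametrics \
  -e VM_INSTANCE_ENTRYPOINT=<YOUR_VM_INSTANCE> \
  -e VM_INSTANCE_TYPE=<YOUR_VM_INSTANCE_TYPE> \
  -e VM_INSTANCE_BEARER_TOKEN=<YOUR_VM_BEARER_TOKEN> \
  -- /path/to/mcp-victoriametrics
```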
See Claude Code MCP docs for more info.
Visual Studio Code
Add this to your VS Code MCP config file:
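For example, in .vscode/mcp.json or the mcp section of your user settings (keys may differ between VS Code versions, so verify against the docs below):

```json
{
  "servers": {
    "victoriametrics": {
      "type": "stdio",
      "command": "/path/to/mcp-victoriametrics",
      "env": {
        "VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
        "VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
        "VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
      }
    }
  }
}
```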
See VS Code MCP docs for more info.
Zed
Add the following to your Zed config file:
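For example (the context_servers schema changes between Zed releases, so treat this as a sketch and verify against the docs below):

```json
{
  "context_servers": {
    "victoriametrics": {
      "command": {
        "path": "/path/to/mcp-victoriametrics",
        "args": [],
        "env": {
          "VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
          "VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
          "VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
        }
      }
    }
  }
}
```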
See Zed MCP docs for more info.
JetBrains IDEs
- Open Settings -> Tools -> AI Assistant -> Model Context Protocol (MCP).
- Click Add (+).
- Select As JSON.
- Paste the following into the input field:
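For example (placeholder values; the JSON shape follows the common mcpServers convention):

```json
{
  "mcpServers": {
    "victoriametrics": {
      "command": "/path/to/mcp-victoriametrics",
      "env": {
        "VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
        "VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
        "VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
      }
    }
  }
}
```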
Windsurf
Add the following to your Windsurf MCP config file:
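Typically ~/.codeium/windsurf/mcp_config.json, using the mcpServers format (placeholders below):

```json
{
  "mcpServers": {
    "victoriametrics": {
      "command": "/path/to/mcp-victoriametrics",
      "env": {
        "VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
        "VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
        "VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
      }
    }
  }
}
```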
See Windsurf MCP docs for more info.
Amazon Bedrock
Coming soon...
Using Docker instead of binary
Coming soon...
Usage
After installing and configuring the MCP server, you can start using it with your favorite MCP client.
You can start the dialog with the AI assistant by explicitly asking it to use the VictoriaMetrics MCP server, but it's not required: you can just start asking questions and the assistant will automatically use the tools and documentation to provide you with the best answers. Take a look at the Dialog example section to better understand what you can do with it.
Toolset
MCP VictoriaMetrics provides numerous tools for interacting with your VictoriaMetrics instance.
Here's a list of available tools:
Tool | Description |
---|---|
query | Execute instant PromQL/MetricsQL queries |
query_range | Execute range PromQL/MetricsQL queries over a time period |
metrics | List available metrics |
labels | List available label names |
label_values | List values for a specific label |
series | List available time series |
export | Export raw time series data to JSON or CSV |
rules | View alerting and recording rules |
alerts | View current alerts (firing and pending) |
flags | View non-default flags of the VictoriaMetrics instance |
metric_statistics | Get metrics usage (in queries) statistics |
active_queries | View currently executing queries |
top_queries | View most frequent or slowest queries |
tsdb_status | View TSDB cardinality statistics |
tenants | List available tenants in multi-tenant cluster setup |
documentation | Search in embedded VictoriaMetrics documentation |
metric_relabel_debug | Debug Prometheus-compatible relabeling rules |
downsampling_filters_debug | Debug downsampling configuration |
retention_filters_debug | Debug retention filters configuration |
Prompts
The server includes pre-defined prompts for common tasks.
These are just examples at the moment; the prompt library will be expanded in the future:
Prompt | Description |
---|---|
unused_metrics | Find metrics that are never queried and create relabel configs to stop ingesting them |
documentation | Search VictoriaMetrics documentation for specific topics |
rarely_used_metrics_with_high_cardinality | Identify high-cardinality metrics that are rarely queried |
Note that downsampling is an Enterprise feature in VictoriaMetrics and requires an appropriate license.
Dialog example
This is just an example of a dialog with an assistant using the MCP server, to help you understand how it can work (it's a real dialog using the VictoriaMetrics MCP server with Claude Desktop, configured to use the public playground instance):
You: What tenants are available in my VM?
Assistant: In your VictoriaMetrics instance, the following tenants are available: 0:0 (the default tenant), 1:0, 10:0, 100:0.
You: From now on, please use only default tenant
Assistant: I understand. From now on, I'll use only the default tenant (0:0) for all operations with your VictoriaMetrics instance.
You: Which metrics do we have?
Assistant: There are many metrics available in your VictoriaMetrics instance. The list includes metrics from various systems and components, such as:

- System metrics (node_, machine_, etc.)
- Kubernetes metrics (kube_, container_, apiserver_, etc.)
- VictoriaMetrics-specific metrics (vm_, vmagent_, vmalert_, etc.)
- Go runtime metrics (go_)
- Prometheus-related metrics (scrape_, promhttp_)
- Various application metrics (argocd_, alertmanager_)

This is a comprehensive list of over 2000 metrics that are available for querying. Is there a specific type of metric you're interested in exploring further?
You: Ok, let's work with kubernetes metrics from now on. Please show me which of these metrics never queried?
Assistant: Based on the results, I can see that there are two Kubernetes metrics that have never been queried (query count is 0):

- kube_persistentvolumeclaim_deletion_timestamp - This metric provides information about when PersistentVolumeClaims were marked for deletion.
- kube_pod_init_container_status_last_terminated_reason - This metric shows the reason why init containers in pods were last terminated.
These metrics might be valuable for monitoring and troubleshooting purposes:
- The PVC deletion timestamp metric could help track deletion patterns or identify stuck PVCs.
- The init container termination reason metric could help identify common failure patterns in your initialization containers.
Would you like me to explain more about what these metrics track or suggest some useful queries to start monitoring them?
You: Yes, please. And also, can you create relabeling rules to stop ingesting these metrics?
Assistant: I'll create a relabel configuration to exclude these never-queried metrics from collection. This will help reduce storage and processing overhead. Here's the relabel configuration you can add to your VictoriaMetrics scrape configuration:
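A relabel configuration along these lines (a sketch; the exact rules are assumed from the description that follows):

```yaml
metric_relabel_configs:
  # Match each metric's name and drop the two never-queried metrics
  - source_labels: [__name__]
    regex: "kube_persistentvolumeclaim_deletion_timestamp|kube_pod_init_container_status_last_terminated_reason"
    action: drop
```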
This configuration will:

- Look at each metric's name (the __name__ label)
- Check if it matches either of the specified metrics
- Drop the metrics that match, preventing them from being collected and stored

You can add this to your VictoriaMetrics configuration file or to the relevant scrape job configurations in your Prometheus configuration if you're using Prometheus as a scraper for VictoriaMetrics. If you want to test this relabel configuration before implementing it, you can use the metric relabel debug tool to see how it would affect your metrics.
You: Yes, please debug this relabel config on real time series to see how it would affect my metrics
Assistant: Perfect! The relabel config works as expected:
- The first two test cases show that both kube_persistentvolumeclaim_deletion_timestamp and kube_pod_init_container_status_last_terminated_reason metrics are correctly dropped (the resulting labels are empty {}).
- The third test case confirms that other metrics like kube_pod_status_phase are preserved and not affected by our drop rule.

You can confidently add this relabel configuration to your VictoriaMetrics or Prometheus configuration to stop collecting these unused metrics. This will save storage space and reduce processing overhead without affecting your current monitoring capabilities.
You: But i'm using vmagent resource of VictoriaMetrics operator, could you please give me part of specification for my agent with this relabel config?
Assistant: You're right, since you're using the VictoriaMetrics Operator and the VMAgent custom resource, the configuration needs to be in the appropriate format for that CR. Here's how you can add the relabel configuration to your VMAgent specification:
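A sketch of the relevant VMAgent spec fragment (resource name and surrounding fields are placeholders):

```yaml
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMAgent
metadata:
  name: example-vmagent
spec:
  # ... the rest of your VMAgent spec ...
  inlineRelabelConfig:
    # Global relabeling applied to all collected metrics before they are stored
    - sourceLabels: ["__name__"]
      regex: "kube_persistentvolumeclaim_deletion_timestamp|kube_pod_init_container_status_last_terminated_reason"
      action: "drop"
```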
Key points:

- In the VMAgent spec, you use inlineRelabelConfig for inline relabeling rules
- The field is called sourceLabels (not source_labels as in the Prometheus config)
- This is a global relabeling that will be applied to all metrics before they're stored
You: I decided not to remove those metrics, but just lower the sample rate for them using Downsampling. What downsampling parameters i have now?
Assistant: I can see that you currently have the following downsampling configuration:
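Expressed as a VictoriaMetrics downsampling flag, such a configuration would look roughly like this (a sketch matching the description that follows):

```
-downsampling.period=7d:1m,30d:5m,180d:30m
```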
This means:
- Data newer than 7 days is kept at its original resolution
- Data between 7-30 days old is downsampled to 1-minute resolution
- Data between 30-180 days old is downsampled to 5-minute resolution
- Data older than 180 days is downsampled to 30-minute resolution
Now, if you want to add specific downsampling rules for those rarely-queried metrics, you can use the downsampling filters debug tool to test how new rules would work. Let's test a configuration where we apply more aggressive downsampling to those specific metrics:
...getting series and using downsampling filter debugger...
Great! Now our test is working correctly. Based on the current configuration and our testing, here's a suggested downsampling configuration for your VMAgent CR that would apply more aggressive downsampling to those rarely-queried metrics:
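A sketch of such a configuration using filter-based downsampling rules (an Enterprise feature; the exact flag syntax should be verified against the VictoriaMetrics downsampling documentation):

```
# More aggressive downsampling (10m resolution after 1 day) for the two rarely-queried metrics,
# listed before the generic rules so it takes precedence:
-downsampling.period={__name__=~"kube_persistentvolumeclaim_deletion_timestamp|kube_pod_init_container_status_last_terminated_reason"}:1d:10m
# Existing policy for everything else:
-downsampling.period=7d:1m,30d:5m,180d:30m
```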
This configuration:

- Applies 10-minute resolution downsampling for the rarely-queried metrics after just 1 day
- Maintains your existing downsampling policy for all other metrics
The order of the downsampling rules is important - more specific rules should be listed first.
During this dialog, the assistant was using the following tools:
- tenants - to get the list of available tenants
- documentation - to get information about functionality and data formats
- metrics - to get the list of available metrics
- metric_statistics - to get information about metrics usage
- series - to get the time series for debugging
- metric_relabel_debug - to debug relabeling rules
- flags - to get information about instance parameters
- downsampling_filters_debug - to debug downsampling configuration
But you can use any other tools and combine them in your own way.
Disclaimer
AI services and agents, along with MCP servers like this one, cannot guarantee the accuracy, completeness, or reliability of results. You should double-check any results obtained with AI. The quality of the MCP server and its responses depends very much on the capabilities of your client and the quality of the model you are using.
Contributing
Contributions to the MCP VictoriaMetrics project are welcome! Please feel free to submit issues, feature requests, or pull requests.