# DataDog MCP Server
A Model Context Protocol (MCP) server that provides AI assistants with direct access to DataDog's observability platform through a standardized interface.
## 🎯 Purpose
This server bridges the gap between Large Language Models (LLMs) and DataDog's comprehensive observability platform, enabling AI assistants to:
- Monitor Infrastructure: Query dashboards, metrics, and host status
- Manage Events: Create and retrieve events for incident tracking
- Analyze Data: Access logs, traces, and performance metrics
- Automate Operations: Interact with monitors, downtimes, and alerts
## 🔧 What is MCP?
The Model Context Protocol (MCP) is a standardized way for AI assistants to interact with external tools and data sources. Instead of each AI system building custom integrations, MCP provides a common interface that allows LLMs to:
- Execute tools with structured inputs and outputs
- Access real-time data from external systems
- Maintain context across multiple tool calls
- Provide consistent, reliable integrations
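Concretely, an MCP client drives a server like this one with JSON-RPC 2.0 messages over stdio. The method names (`tools/list`, `tools/call`) come from the MCP specification; the `datadog-mcp` binary name here is an assumption. A real client keeps a persistent session and sends an `initialize` request first — the one-shot pipes below only illustrate the message shape:

```shell
# Ask the server which tools it exposes (MCP method: tools/list)
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list"}' | ./datadog-mcp

# Invoke one with structured arguments (MCP method: tools/call)
echo '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"v1_test_connection","arguments":{}}}' \
  | ./datadog-mcp
```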
## 📊 DataDog Platform
DataDog is a leading observability platform that provides:
- Infrastructure Monitoring: Track server performance, resource usage, and health
- Application Performance Monitoring (APM): Monitor application performance and user experience
- Log Management: Centralized logging with powerful search and analysis
- Real User Monitoring (RUM): Track user interactions and frontend performance
- Security Monitoring: Detect threats and vulnerabilities across your infrastructure
## 🚀 Quick Start
1. Build the server:
2. Configure DataDog API:
3. Generate MCP configuration:
4. Run the server:
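The steps above might look like the following, assuming a standard Go project layout and DataDog's usual client environment variables. The package path and the `generate-config` subcommand are assumptions — check the repository's docs for the exact invocations:

```shell
# 1. Build the server (package path is an assumption)
go build -o datadog-mcp ./cmd/datadog-mcp

# 2. Configure DataDog API credentials (standard DataDog client variables)
export DD_API_KEY="<your-api-key>"
export DD_APP_KEY="<your-application-key>"
export DD_SITE="datadoghq.com"   # or datadoghq.eu, us3.datadoghq.com, ...

# 3. Generate an MCP client configuration (subcommand name is an assumption)
./datadog-mcp generate-config > mcp.json

# 4. Run the server (MCP clients typically launch it over stdio)
./datadog-mcp
```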
## 📚 Documentation
- Available Tools - Complete list of available DataDog tools
- Test Documentation - Test coverage and implementation details
- OpenAPI Splitting - How to split large OpenAPI specifications
- Spectral Linting - OpenAPI specification validation and linting
## 🛠️ Available Tools
Currently implemented tools include:
- Dashboard Management (v1): `v1_list_dashboards`, `v1_get_dashboard`
- Event Management (v1): `v1_list_events`, `v1_create_event`
- Connection Testing (v1): `v1_test_connection`
- Monitor Management (v1): (Coming soon)
- Metrics & Logs (v1): (Coming soon)
All tools are prefixed with their API version (e.g., `v1_`, `v2_`) for clear segregation and future v2 API support.
See `docs/tools.md` for the complete list and implementation status.
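As an illustration, here is a `tools/call` request against `v1_list_events` for the last hour. The argument names mirror the `start`/`end` parameters of DataDog's `/api/v1/events` endpoint (Unix seconds), but the server's actual input schema may differ, and the `datadog-mcp` binary name is an assumption:

```shell
NOW=$(date +%s)
# Build a JSON-RPC tools/call request with a one-hour time window
echo '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"v1_list_events","arguments":{"start":'"$((NOW - 3600))"',"end":'"$NOW"'}}}' \
  | ./datadog-mcp
```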
## 🔧 Development

### OpenAPI Management
The project includes comprehensive tools for managing OpenAPI specifications:
- Split Specifications: Break down large OpenAPI files into smaller, manageable pieces
- Spectral Linting: Validate OpenAPI specifications with custom rules and best practices
- Code Generation: Generate Go client code from OpenAPI specifications
- Version Support: Separate handling for DataDog API v1 and v2
See OpenAPI Splitting Guide and Spectral Linting Guide for detailed usage.
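Under assumed tool choices — Spectral's CLI for linting and `oapi-codegen` for Go client generation, with placeholder file paths — that workflow sketches as:

```shell
# Lint the v1 specification against a custom ruleset (path is a placeholder)
npx @stoplight/spectral-cli lint api/v1/openapi.yaml --ruleset .spectral.yaml

# Generate a Go client from the (split) v1 specification
go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen \
  -generate types,client -package datadogv1 \
  -o internal/datadogv1/client.gen.go api/v1/openapi.yaml
```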