# DataDog MCP Server
A Model Context Protocol (MCP) server that provides AI assistants with direct access to DataDog's observability platform through a standardized interface.
## 🎯 Purpose
This server bridges the gap between Large Language Models (LLMs) and DataDog's comprehensive observability platform, enabling AI assistants to:
- **Monitor Infrastructure**: Query dashboards, metrics, and host status
- **Manage Events**: Create and retrieve events for incident tracking
- **Analyze Data**: Access logs, traces, and performance metrics
- **Automate Operations**: Interact with monitors, downtimes, and alerts
## 🔧 What is MCP?
The **Model Context Protocol (MCP)** is a standardized way for AI assistants to interact with external tools and data sources. Instead of each AI system building custom integrations, MCP provides a common interface that allows LLMs to:
- Execute tools with structured inputs and outputs
- Access real-time data from external systems
- Maintain context across multiple tool calls
- Provide consistent, reliable integrations
## 📊 DataDog Platform
DataDog is a leading observability platform that provides:
- **Infrastructure Monitoring**: Track server performance, resource usage, and health
- **Application Performance Monitoring (APM)**: Monitor application performance and user experience
- **Log Management**: Centralized logging with powerful search and analysis
- **Real User Monitoring (RUM)**: Track user interactions and frontend performance
- **Security Monitoring**: Detect threats and vulnerabilities across your infrastructure
## 🚀 Quick Start
1. **Build the server**:
```bash
make build
```
2. **Configure DataDog API**:
```bash
export DD_API_KEY="your-datadog-api-key"
export DATADOG_APP_KEY="your-datadog-app-key" # Optional
export DATADOG_SITE="datadoghq.eu" # or datadoghq.com
```
3. **Generate MCP configuration**:
```bash
make create-mcp-config
```
4. **Run the server**:
```bash
./build/datadog-mcp-server
```
## 📚 Documentation
- **[Available Tools](docs/tools.md)** - Complete list of implemented and planned DataDog tools
- **[Test Documentation](docs/tests.md)** - Test coverage and implementation details
- **[OpenAPI Splitting](docs/openapi-splitting.md)** - How to split large OpenAPI specifications
- **[Spectral Linting](docs/spectral-linting.md)** - OpenAPI specification validation and linting
## 🛠️ Available Tools
Currently implemented tools include:
- **Dashboard Management (v1)**: `v1_list_dashboards`, `v1_get_dashboard`
- **Event Management (v1)**: `v1_list_events`, `v1_create_event`
- **Connection Testing (v1)**: `v1_test_connection`
- **Monitor Management (v1)**: (Coming soon)
- **Metrics & Logs (v1)**: (Coming soon)
All tools are prefixed with their API version (e.g., `v1_`, `v2_`) to keep the versions clearly separated and to leave room for future v2 API support.
See [docs/tools.md](docs/tools.md) for the complete list and implementation status.
## 🔧 Development
```bash
# Install development tools
make install-dev-tools

# Run tests
make test

# Generate API client
make generate

# Split OpenAPI specifications
make split

# Lint OpenAPI specifications
make lint-openapi

# Build and test
make build
```
### OpenAPI Management
The project includes comprehensive tools for managing OpenAPI specifications:
- **Split Specifications**: Break down large OpenAPI files into smaller, manageable pieces
- **Spectral Linting**: Validate OpenAPI specifications with custom rules and best practices
- **Code Generation**: Generate Go client code from OpenAPI specifications
- **Version Support**: Separate handling for DataDog API v1 and v2
See [OpenAPI Splitting Guide](docs/openapi-splitting.md) and [Spectral Linting Guide](docs/spectral-linting.md) for detailed usage.
## 📚 Resources
- [Model Context Protocol Introduction (Stytch Blog)](https://stytch.com/blog/model-context-protocol-introduction/)