Dynatrace MCP Server
This local MCP server allows interaction with the Dynatrace observability platform. Bring real-time observability data directly into your development workflow.
Use cases
- Real-time observability - Fetch production-level data for early detection and proactive monitoring
- Contextual debugging - Fix issues with full context from monitored exceptions, logs, and anomalies
- Security insights - Get detailed vulnerability analysis and security problem tracking
- Natural language queries - Use AI-powered DQL generation and explanation
- Multi-phase incident investigation - Systematic 4-phase approach with automated impact assessment
- Advanced transaction analysis - Precise root cause identification with file/line-level accuracy
- Cross-data source correlation - Connect problems → spans → logs with trace ID correlation
- DevOps automation - Deployment health gates with automated promotion/rollback logic
- Security compliance monitoring - Multi-cloud compliance assessment with evidence-based investigation
Capabilities
- List and get problem details from your services (for example Kubernetes)
- List and get security problems / vulnerability details
- Execute DQL (Dynatrace Query Language) and retrieve logs, events, spans and metrics
- Send Slack messages (via Slack Connector)
- Set up notification Workflow (via Dynatrace AutomationEngine)
- Get more information about a monitored entity
- Get Ownership of an entity
Costs
Important: While this local MCP server is provided for free, using certain capabilities to access data in Dynatrace Grail may incur additional costs based on your Dynatrace consumption model. This affects the `execute_dql` tool and other capabilities that query Dynatrace Grail storage; costs depend on the volume of data scanned (GB).
Before using this MCP server extensively, please:
- Review your current Dynatrace consumption model and pricing
- Understand the cost implications of the specific data you plan to query (logs, events, metrics) - see Dynatrace Pricing and Rate Card
- Start with smaller timeframes (e.g., 12h-24h) and make use of buckets to reduce the cost impact
To understand the costs that have occurred, execute the following DQL statement in a notebook to see how many bytes have been queried from Grail (logs, events, etc.):
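Such a query might look like this (a sketch only; the event kind and field names are assumptions based on Grail system events, so verify them against your environment before relying on the numbers):

```
// Sum the bytes scanned by query executions recorded in Grail system events
// (event kind and field names are assumptions - adjust to your environment)
fetch dt.system.events
| filter event.kind == "QUERY_EXECUTION_EVENT"
| summarize total_scanned_bytes = sum(scanned_bytes)
```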
AI-Powered Assistance (Preview)
- Natural Language to DQL - Convert plain English queries to Dynatrace Query Language
- DQL Explanation - Get plain English explanations of complex DQL queries
- AI Chat Assistant - Get contextual help and guidance for Dynatrace questions
- Feedback System - Provide feedback to improve AI responses over time
Note: While Davis CoPilot AI is generally available (GA), the Davis CoPilot APIs are currently in preview. For more information, visit the Davis CoPilot Preview Community.
🎯 AI-Powered Observability Workshop Rules
Enhance your AI assistant with comprehensive Dynatrace observability analysis capabilities through our streamlined workshop rules. These rules provide hierarchical workflows for security, compliance, incident response, and distributed systems investigation.
🚀 Quick Setup for AI Assistants
Copy the comprehensive rule files from the `rules/` directory to your AI assistant's rules directory:

IDE-Specific Locations:

- Amazon Q: `.amazonq/rules/` (project) or `~/.aws/amazonq/rules/` (global)
- Cursor: `.cursor/rules/` (project) or via Settings → Rules (global)
- Windsurf: `.windsurfrules/` (project) or via Customizations → Rules (global)
- Cline: `.clinerules/` (project) or `~/Documents/Cline/Rules/` (global)
- GitHub Copilot: `.github/copilot-instructions.md` (project only)
Then initialize the agent in your AI chat:
🏗️ Enhanced Analysis Capabilities
The workshop rules unlock advanced observability analysis modes:
🚨 Incident Response & Problem Investigation
- 4-phase structured investigation workflow (Detection → Impact → Root Cause → Resolution)
- Cross-data source correlation (problems → logs → spans → metrics)
- Kubernetes-aware incident analysis with namespace and pod context
- User impact assessment with Davis AI integration
📊 Comprehensive Data Investigation
- Unified log-service-process analysis in single workflow
- Business logic error detection patterns
- Deployment correlation analysis with ArgoCD/GitOps integration
- Golden signals monitoring (Rate, Errors, Duration, Saturation)
🔗 Advanced Transaction Analysis
- Precise root cause identification with file/line numbers
- Exception stack trace analysis with business context
- Multi-service cascade failure analysis
- Performance impact correlation across distributed systems
🛡️ Enhanced Security & Compliance
- Latest-scan analysis prevents outdated data aggregation
- Multi-cloud compliance (AWS, Azure, GCP, Kubernetes)
- Evidence-based investigation with detailed remediation paths
- Risk-based scoring with team-specific guidance
⚡ DevOps Automation & SRE
- Deployment health gates with automated promotion/rollback
- SLO/SLI automation with error budget calculations
- Infrastructure as Code remediation with auto-generated templates
- Alert optimization workflows with pattern recognition
📁 Hierarchical Rule Architecture
The rules are organized in a context-window optimized structure:
Key Architectural Benefits:
- All files under 6,500 tokens - Compatible with most LLM context limits
- Hierarchical organization - Clear entry points and specialized guides
- Eliminated circular references - No more confusing cross-referencing webs
- DQL-first approach - Prefer flexible queries over rigid MCP calls
For detailed information about the workshop rules, see the Rules README.
Quickstart
You can add this MCP server (using STDIO) to your MCP client, such as VS Code, Claude, Cursor, Amazon Q Developer CLI, Windsurf, or GitHub Copilot, via the package `@dynatrace-oss/dynatrace-mcp-server`.
We recommend always setting it up for your current workspace instead of using it globally.
VS Code
Please note: In this config, the `${workspaceFolder}` variable is used. This only works if the config is stored in the current workspace, e.g., `<your-repo>/.vscode/mcp.json`. Alternatively, the config can also be stored in user settings, where you can define `env` as follows:
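For example, a user-settings variant that sets `env` directly might look like this (a sketch; the environment URL and token values are placeholders):

```json
{
  "mcp": {
    "servers": {
      "dynatrace-mcp-server": {
        "command": "npx",
        "args": ["-y", "@dynatrace-oss/dynatrace-mcp-server@latest"],
        "env": {
          "DT_ENVIRONMENT": "https://abc12345.apps.dynatrace.com",
          "DT_PLATFORM_TOKEN": "dt0s16.SAMPLE.abcd1234"
        }
      }
    }
  }
}
```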
Claude Desktop
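A typical entry in `claude_desktop_config.json` might look like this (a sketch; the environment URL and token values are placeholders):

```json
{
  "mcpServers": {
    "dynatrace-mcp-server": {
      "command": "npx",
      "args": ["-y", "@dynatrace-oss/dynatrace-mcp-server@latest"],
      "env": {
        "DT_ENVIRONMENT": "https://abc12345.apps.dynatrace.com",
        "DT_PLATFORM_TOKEN": "dt0s16.SAMPLE.abcd1234"
      }
    }
  }
}
```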
Amazon Q Developer CLI
The Amazon Q Developer CLI provides an interactive chat experience directly in your terminal. You can ask questions, get help with AWS services, troubleshoot issues, and generate code snippets without leaving your command line environment.
This configuration should be stored in `<your-repo>/.amazonq/mcp.json`.
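A sketch of such a configuration, analogous to the other clients (environment URL and token are placeholders):

```json
{
  "mcpServers": {
    "dynatrace-mcp-server": {
      "command": "npx",
      "args": ["-y", "@dynatrace-oss/dynatrace-mcp-server@latest"],
      "env": {
        "DT_ENVIRONMENT": "https://abc12345.apps.dynatrace.com",
        "DT_PLATFORM_TOKEN": "dt0s16.SAMPLE.abcd1234"
      }
    }
  }
}
```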
HTTP Server Mode (Alternative)
For scenarios where you need to run the MCP server as an HTTP service instead of using stdio (e.g., for stateful sessions, load balancing, or integration with web clients), you can use the HTTP server mode:
Running as HTTP server:
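For instance (the flag names here are assumptions, not confirmed options; run the package with `--help` to see the actual CLI flags):

```shell
# Start the MCP server in HTTP mode on port 3000
# (flag names are assumptions - verify with --help)
npx -y @dynatrace-oss/dynatrace-mcp-server --http --port 3000
```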
Configuration for MCP clients that support HTTP transport:
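A sketch of such a client configuration (the `/mcp` URL path and the port are assumptions based on common MCP HTTP defaults):

```json
{
  "servers": {
    "dynatrace-mcp-server": {
      "type": "http",
      "url": "http://localhost:3000/mcp"
    }
  }
}
```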
Rule File
For efficient result retrieval from Dynatrace, please consider creating a rule file (e.g., `.github/copilot-instructions.md`, `.amazonq/rules/`) that instructs coding agents on how to get more details for your component/app/service. Here is an example for easytrade; please adapt the names and filters to fit your use cases and components:
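A hypothetical rule file for easytrade might look like this (the namespace, service names, and filters below are illustrative assumptions, not part of the actual easytrade deployment):

```markdown
# Observability instructions for easytrade

- Our services run in the Kubernetes namespace `easytrade` (assumption - adapt to your deployment).
- To fetch recent error logs for a service, use the `execute_dql` tool, e.g.:
  `fetch logs | filter k8s.namespace.name == "easytrade" and loglevel == "ERROR" | limit 50`
- When investigating an issue, list open problems first, then correlate them with logs and spans
  via the trace ID before proposing a fix.
```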
Environment Variables
You can set up authentication via Platform Tokens (recommended) or OAuth Client via the following environment variables:
- `DT_ENVIRONMENT` (string, e.g., `https://abc12345.apps.dynatrace.com`) - URL to your Dynatrace Platform (do not use Dynatrace classic URLs like `abc12345.live.dynatrace.com`)
- `DT_PLATFORM_TOKEN` (string, e.g., `dt0s16.SAMPLE.abcd1234`) - Recommended: Dynatrace Platform Token
- `OAUTH_CLIENT_ID` (string, e.g., `dt0s02.SAMPLE`) - Alternative: Dynatrace OAuth Client ID (for advanced use cases)
- `OAUTH_CLIENT_SECRET` (string, e.g., `dt0s02.SAMPLE.abcd1234`) - Alternative: Dynatrace OAuth Client Secret (for advanced use cases)
Platform Tokens are recommended for most use cases as they provide a simpler authentication flow. OAuth Clients should only be used when specific OAuth features are required.
For more information, please have a look at the documentation about creating a Platform Token in Dynatrace, as well as creating an OAuth Client in Dynatrace for advanced scenarios.
In addition, depending on the features you use, the following variables can be configured:
- `SLACK_CONNECTION_ID` (string) - connection ID of a Slack Connection
Scopes for Authentication
Depending on the features you are using, the following scopes are needed:
Available for both Platform Tokens and OAuth Clients:

- `app-engine:apps:run` - needed for almost all tools
- `app-engine:functions:run` - needed for almost all tools
- `environment-api:entities:read` - for retrieving ownership details from monitored entities (currently not available for Platform Tokens)
- `automation:workflows:read` - read Workflows
- `automation:workflows:write` - create and update Workflows
- `automation:workflows:run` - run Workflows
- `storage:buckets:read` - needed for the `execute_dql` tool to read all system data stored on Grail
- `storage:logs:read` - needed for the `execute_dql` tool to read logs for reliability guardian validations
- `storage:metrics:read` - needed for the `execute_dql` tool to read metrics for reliability guardian validations
- `storage:bizevents:read` - needed for the `execute_dql` tool to read bizevents for reliability guardian validations
- `storage:spans:read` - needed for the `execute_dql` tool to read spans from Grail
- `storage:entities:read` - needed for the `execute_dql` tool to read entities from Grail
- `storage:events:read` - needed for the `execute_dql` tool to read events from Grail
- `storage:security.events:read` - needed for the `execute_dql` tool to read security events from Grail
- `storage:system:read` - needed for the `execute_dql` tool to read system data from Grail
- `storage:user.events:read` - needed for the `execute_dql` tool to read user events from Grail
- `storage:user.sessions:read` - needed for the `execute_dql` tool to read user sessions from Grail
- `davis-copilot:conversations:execute` - execute conversational skill (chat with Copilot)
- `davis-copilot:nl2dql:execute` - execute Davis Copilot Natural Language (NL) to DQL skill
- `davis-copilot:dql2nl:execute` - execute DQL to Natural Language (NL) skill
- `settings:objects:read` - needed for reading ownership information and Guardians (SRG) from settings

Note: Please ensure that `settings:objects:read` is used, and not the similarly named scope `app-settings:objects:read`.
Important: Some features requiring `environment-api:entities:read` will only work with OAuth Clients. For most use cases, Platform Tokens provide all necessary functionality.
✨ Example prompts ✨
Use these example prompts as a starting point. Just copy them into your IDE or agent setup, adapt them to your services/stack/architecture, and extend them as needed. They're here to help you imagine how real-time observability and automation work together in the MCP context in your IDE.
Basic Queries & AI Assistance
Write a DQL query from natural language:
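For instance (an illustrative prompt; adapt the service name to your environment):

```
Generate a DQL query that shows the error rate of my service "checkout"
over the last 24 hours, grouped by endpoint.
```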
Explain a DQL query:
Chat with Davis CoPilot:
Advanced Incident Investigation
Multi-phase incident response:
Cross-service failure analysis:
Security & Compliance Analysis
Latest-scan vulnerability assessment:
Multi-cloud compliance monitoring:
DevOps & SRE Automation
Deployment health gate analysis:
Infrastructure as Code remediation:
Deep Transaction Analysis
Business logic error investigation:
Performance correlation analysis:
Traditional Use Cases (Enhanced)
Find open vulnerabilities on production, setup alert:
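For instance (an illustrative prompt; the Slack channel name is a placeholder):

```
List all open, critical vulnerabilities in my production environment.
If you find any, set up a notification workflow that alerts the
#security-alerts Slack channel whenever a new one appears.
```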
Debug intermittent 503 errors:
Correlate memory issue with logs:
Trace request flow analysis:
Analyze Kubernetes cluster events:
Troubleshooting
Authentication Issues
In most cases, authentication issues are related to missing scopes or invalid tokens. Please ensure that you have added all required scopes as listed above.
For Platform Tokens:
- Verify your Platform Token has all the necessary scopes listed in the "Scopes for Authentication" section
- Ensure your token is valid and not expired
- Check that your user has the required permissions in your Dynatrace Environment
For OAuth Clients: In case of OAuth-related problems, you can troubleshoot SSO/OAuth issues based on our Dynatrace Developer Documentation.
It is recommended to test access with the following API calls (which require only the minimal scopes `app-engine:apps:run` and `app-engine:functions:run`):
- Use OAuth Client ID and Secret to retrieve a Bearer Token (only valid for a couple of minutes):
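A sketch of this call against the Dynatrace SSO token endpoint (the client ID and secret values are placeholders):

```shell
# Exchange OAuth client credentials for a short-lived Bearer Token
# (client_id/client_secret are placeholder values)
curl -s -X POST https://sso.dynatrace.com/sso/oauth2/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=client_credentials" \
  -d "client_id=dt0s02.SAMPLE" \
  -d "client_secret=dt0s02.SAMPLE.abcd1234" \
  -d "scope=app-engine:apps:run app-engine:functions:run"
```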
- Use `access_token` from the response of the above call as the bearer token in the next call:
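For example (the endpoint path below is an assumption; any platform API call that accepts the minimal scopes will serve as a smoke test):

```shell
# Call a platform API with the Bearer Token from the previous step
# (replace <access_token> and the environment URL with your own values;
# endpoint path is an assumption)
curl -s https://abc12345.apps.dynatrace.com/platform/management/v1/environment \
  -H "Authorization: Bearer <access_token>"
```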
- You should retrieve a result like this:
Problem accessing data on Grail
Grail has a dedicated section about permissions in the Dynatrace Docs. Please refer to https://docs.dynatrace.com/docs/discover-dynatrace/platform/grail/data-model/assign-permissions-in-grail for more details.
Development
For local development purposes, you can use VSCode and GitHub Copilot.
First, enable Copilot for your workspace in `.vscode/settings.json`:
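A minimal sketch of such a settings file (the exact keys depend on your VS Code and Copilot versions, so treat this as an assumption):

```json
{
  "github.copilot.enable": {
    "*": true
  }
}
```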
and make sure that you are using Agent Mode in Copilot.
Second, add the MCP server to `.vscode/mcp.json`:
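For local development this could point at the compiled output instead of the published package (a sketch; the `dist/index.js` path is an assumption about the build output location):

```json
{
  "servers": {
    "dynatrace-mcp-server-dev": {
      "command": "node",
      "args": ["${workspaceFolder}/dist/index.js"],
      "envFile": "${workspaceFolder}/.env"
    }
  }
}
```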
Third, create a `.env` file in this repository (you can copy from `.env.template`) and configure environment variables as described above.
Finally, make changes to your code and compile it with `npm run build`, or run `npm run watch` to auto-compile on every change.
Releasing
When you are preparing for a release, you can use GitHub Copilot to guide you through the preparations.
In Visual Studio Code, you can use `/release` in the chat with Copilot in Agent Mode, which will execute `release.prompt.md`.
You may include additional information such as the version number. If not specified, you will be asked.
This will:
- prepare the changelog,
- update the version number in package.json,
- commit the changes.
Notes
This product is not officially supported by Dynatrace. Please contact us via GitHub Issues if you have feature requests, questions, or need help.