The Doris MCP Server enables interaction with Apache Doris databases through a set of predefined tools:
- Execute SQL Queries: Run SQL commands with customizable options for database selection, row limits, and timeouts
- Retrieve Database Metadata: Get lists of databases, tables, and detailed table structures
- Access Table Information: Fetch table and column comments, plus index details
- Query Audit Logs: Retrieve recent audit records with configurable time frames and record limits
- Flexible Communication: Supports multiple communication modes, including Server-Sent Events (SSE) and Streamable HTTP
- Customizable Configuration: Configure database connections and server behavior via environment variables
Provides tools for interacting with Apache Doris databases, enabling database metadata retrieval, SQL query execution, schema exploration, and audit log retrieval through a standardized Model Context Protocol interface.
Uses `.env` files for flexible configuration of database connections, server settings, logging preferences, and other environment variables.
Implemented using FastAPI to provide both SSE and HTTP streaming endpoints for clients to interact with the MCP protocol, supporting tool calls and prompt interactions.
Built with Python (3.12+) as the underlying language, providing the runtime environment for the MCP server implementation.
Doris MCP Server
Doris MCP (Model Context Protocol) Server is a backend service built with Python and FastAPI. It implements the MCP, allowing clients to interact with it through defined "Tools". It's primarily designed to connect to Apache Doris databases, potentially leveraging Large Language Models (LLMs) for tasks like converting natural language queries to SQL (NL2SQL), executing queries, and performing metadata management and analysis.
🚀 What's New in v0.6.0
- 🔐 Enterprise Authentication System: Revolutionary token-bound database configuration with comprehensive Token, JWT, and OAuth authentication support, enabling secure multi-tenant access with granular control switches and enterprise-grade security defaults
- ⚡ Immediate Database Validation: Real-time database configuration validation at connection time, eliminating query-time blocking and providing instant feedback for invalid configurations - achieving 100% elimination of late-stage connection failures
- 🔄 Hot Reload Configuration Management: Zero-downtime configuration updates with intelligent hot reloading of tokens.json, automatic token revalidation, and comprehensive error handling with rollback mechanisms
- 🏗️ Advanced Connection Architecture: Session caching and connection pool optimization with 60% reduction in connection overhead, intelligent pool recreation, and automatic resource management
- 🌐 Multi-Worker Scalability: True horizontal scaling with stateless multi-worker architecture, efficient load distribution, and enterprise-grade concurrent processing capabilities
- 🔒 Enhanced Security Framework: Comprehensive access control and SQL security validation with immediate validation, role-based permissions, and enhanced injection detection patterns
- 🛠️ Unified Configuration System: Streamlined configuration management with proper command-line precedence, Docker compatibility improvements, and cross-platform deployment support
- 📊 Token Management Dashboard: Complete token lifecycle management with creation, revocation, statistics, and comprehensive audit trails for enterprise token governance
- 🌐 Web-Based Management Interface: Secure localhost-only token administration with intuitive dashboard, database binding configuration, real-time operations, and enterprise-grade access controls
🚀 Major Milestone: v0.6.0 establishes the platform as a production-ready enterprise authentication and database management system with zero-downtime operations (hot reload + immediate validation + multi-worker scaling), advanced security controls, and comprehensive token-bound database configuration - representing a fundamental advancement in enterprise data platform capabilities.
What's Also Included from v0.5.1
- 🔥 Critical at_eof Connection Fix: Complete elimination of connection pool errors with intelligent health monitoring and self-healing recovery
- 🔧 Enterprise Logging System: Level-based file separation with automatic cleanup and millisecond precision timestamps
- 📊 Advanced Data Analytics Suite: 7 enterprise-grade data governance tools including quality analysis, lineage tracking, and performance monitoring
- 🏃♂️ High-Performance ADBC Integration: Apache Arrow Flight SQL support with 3-10x performance improvements for large datasets
- ⚙️ Enhanced Configuration Management: Complete ADBC configuration system with intelligent parameter validation
Core Features
- MCP Protocol Implementation: Provides standard MCP interfaces, supporting tool calls, resource management, and prompt interactions.
- Streamable HTTP Communication: Unified HTTP endpoint supporting both request/response and streaming communication for optimal performance and reliability.
- Stdio Communication: Standard input/output mode for direct integration with MCP clients like Cursor.
- Enterprise-Grade Architecture: Modular design with comprehensive functionality:
  - Tools Manager: Centralized tool registration and routing with unified interfaces (`doris_mcp_server/tools/tools_manager.py`)
  - Enhanced Monitoring Tools Module: Advanced memory tracking, metrics collection, and flexible BE node discovery with modular, extensible design
  - Query Information Tools: Enhanced SQL explain and profiling with configurable content truncation, file export for LLM attachments, and advanced query analytics
  - Resources Manager: Resource management and metadata exposure (`doris_mcp_server/tools/resources_manager.py`)
  - Prompts Manager: Intelligent prompt templates for data analysis (`doris_mcp_server/tools/prompts_manager.py`)
- Advanced Database Features:
  - Query Execution: High-performance SQL execution with advanced caching and optimization, enhanced connection stability, and automatic retry mechanisms (`doris_mcp_server/utils/query_executor.py`)
  - Security Management: Comprehensive SQL security validation with configurable blocked keywords, SQL injection protection, data masking, and unified security configuration management (`doris_mcp_server/utils/security.py`)
  - Metadata Extraction: Comprehensive database metadata with catalog federation support (`doris_mcp_server/utils/schema_extractor.py`)
  - Performance Analysis: Advanced column analysis, performance monitoring, and data analysis tools (`doris_mcp_server/utils/analysis_tools.py`)
- Catalog Federation Support: Full support for multi-catalog environments (internal Doris tables and external data sources like Hive, MySQL, etc.)
- Enterprise Security: Comprehensive security framework with authentication, authorization, SQL injection protection, and data masking capabilities with environment variable configuration support
- Web-Based Token Management: Secure localhost-only interface for complete token lifecycle management with database binding, real-time statistics, and enterprise-grade access controls (`doris_mcp_server/auth/token_handlers.py`)
- Unified Configuration Framework: Centralized configuration management through `config.py` with comprehensive validation, standardized parameter naming, and smart default database handling with automatic fallback to `information_schema`
System Requirements
- Python: 3.12+
- Database: Apache Doris connection details (Host, Port, User, Password, Database)
🚀 Quick Start
Installation from PyPI
💡 Command Compatibility: After installation, the `doris-mcp-server` command is available on your PATH (a legacy alias is also kept for backward compatibility); either command can be used interchangeably.
Start Streamable HTTP Mode (Web Service)
The primary communication mode offering optimal performance and reliability:
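For instance, using the command-line arguments documented later in this README (host and port values are placeholders):

```bash
doris-mcp-server --transport http --host 0.0.0.0 --port 3000
```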
Start Stdio Mode (for Cursor and other MCP clients)
Standard input/output mode for direct integration with MCP clients:
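A minimal invocation sketch; database credentials are typically supplied via environment variables or the `--db-*` flags documented below:

```bash
doris-mcp-server --transport stdio
```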
🌐 Token Management Interface (New in v0.6.0)
Access the Web-Based Token Management Dashboard for enterprise-grade token administration:
Secure Access Requirements
- Localhost Access Only: Interface restricted to `127.0.0.1` and `::1` for maximum security
- Admin Authentication: Requires `TOKEN_MANAGEMENT_ADMIN_TOKEN` for access
- Configuration Prerequisites:
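As a sketch, the prerequisites above might translate into the following `.env` entries (the admin token value is a placeholder you must replace with a strong secret of your own):

```env
ENABLE_TOKEN_AUTH=true
TOKEN_MANAGEMENT_ADMIN_TOKEN=change-me-strong-admin-token
```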
Interface Access
Available Operations
- 📊 Token Statistics: Real-time overview of active, expired, and total tokens
- ➕ Create Tokens:
- Basic information (ID, description, expiration)
- Database binding (host, port, user, password, database)
- Custom token values or auto-generated secure tokens
- 📋 Token Management:
- List all tokens with database binding status
- One-click token revocation
- Automated expired token cleanup
- 🔒 Enterprise Security:
- All operations require admin authentication
- Real-time IP validation
- Complete audit logging
- Automatic persistence to `tokens.json`
🔐 Security Note: The interface is designed for localhost administration only. It cannot be accessed remotely, ensuring maximum security for token management operations.
Verify Installation
Environment Variables (Optional)
Instead of command-line arguments, you can use environment variables:
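For example, a minimal set of exports (variable names as documented in the Development Setup section; values are placeholders):

```bash
export DORIS_HOST=127.0.0.1
export DORIS_PORT=9030
export DORIS_USER=root
export DORIS_PASSWORD=your_password
export DORIS_DATABASE=information_schema
```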
Command Line Arguments
The `doris-mcp-server` command supports the following arguments:
Argument | Description | Default | Required |
---|---|---|---|
--transport | Transport mode: http or stdio | http | No |
--host | HTTP server host (HTTP mode only) | 0.0.0.0 | No |
--port | HTTP server port (HTTP mode only) | 3000 | No |
--db-host | Doris database host | localhost | No |
--db-port | Doris database port | 9030 | No |
--db-user | Doris database username | root | No |
--db-password | Doris database password | - | Yes (unless in env) |
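Combining the arguments above, a full invocation might look like this (all values are placeholders):

```bash
doris-mcp-server \
  --transport http \
  --host 0.0.0.0 \
  --port 3000 \
  --db-host 127.0.0.1 \
  --db-port 9030 \
  --db-user root \
  --db-password your_password
```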
Development Setup
For developers who want to build from source:
1. Clone the Repository
2. Install Dependencies
3. Configure Environment Variables
Copy the `.env.example` file to `.env` and modify the settings according to your environment:
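```bash
cp .env.example .env
```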
Key Environment Variables:
- Database Connection:
  - `DORIS_HOST`: Database hostname (default: localhost)
  - `DORIS_PORT`: Database port (default: 9030)
  - `DORIS_USER`: Database username (default: root)
  - `DORIS_PASSWORD`: Database password
  - `DORIS_DATABASE`: Default database name (default: information_schema)
  - `DORIS_MIN_CONNECTIONS`: Minimum connection pool size (default: 5)
  - `DORIS_MAX_CONNECTIONS`: Maximum connection pool size (default: 20)
  - `DORIS_BE_HOSTS`: BE nodes for monitoring (comma-separated, optional - auto-discovery via SHOW BACKENDS if empty)
  - `DORIS_BE_WEBSERVER_PORT`: BE webserver port for monitoring tools (default: 8040)
  - `FE_ARROW_FLIGHT_SQL_PORT`: Frontend Arrow Flight SQL port for ADBC (New in v0.5.0)
  - `BE_ARROW_FLIGHT_SQL_PORT`: Backend Arrow Flight SQL port for ADBC (New in v0.5.0)
- Authentication Configuration (Enhanced in v0.6.0):
  - `ENABLE_TOKEN_AUTH`: Enable token-based authentication (default: false)
  - `ENABLE_JWT_AUTH`: Enable JWT authentication (default: false)
  - `ENABLE_OAUTH_AUTH`: Enable OAuth authentication (default: false)
  - `TOKEN_FILE_PATH`: Path to tokens.json file for token management (default: tokens.json)
  - `TOKEN_HOT_RELOAD`: Enable hot reloading of token configuration (default: true)
  - `DEFAULT_ADMIN_TOKEN`: Default admin token (customizable via env)
  - `DEFAULT_ANALYST_TOKEN`: Default analyst token (customizable via env)
  - `DEFAULT_READONLY_TOKEN`: Default readonly token (customizable via env)
- Legacy Security Configuration:
  - `AUTH_TYPE`: Legacy authentication type (token/basic/oauth, deprecated - use individual switches)
  - `TOKEN_SECRET`: Legacy token secret key (use token-based auth instead)
  - `ENABLE_SECURITY_CHECK`: Enable/disable SQL security validation (default: true)
  - `BLOCKED_KEYWORDS`: Comma-separated list of blocked SQL keywords
  - `ENABLE_MASKING`: Enable data masking (default: true)
  - `MAX_RESULT_ROWS`: Maximum result rows (default: 10000)
- ADBC Configuration (New in v0.5.0):
  - `ADBC_DEFAULT_MAX_ROWS`: Default maximum rows for ADBC queries (default: 100000)
  - `ADBC_DEFAULT_TIMEOUT`: Default ADBC query timeout in seconds (default: 60)
  - `ADBC_DEFAULT_RETURN_FORMAT`: Default return format - arrow/pandas/dict (default: arrow)
  - `ADBC_CONNECTION_TIMEOUT`: ADBC connection timeout in seconds (default: 30)
  - `ADBC_ENABLED`: Enable/disable ADBC tools (default: true)
- Performance Configuration:
  - `ENABLE_QUERY_CACHE`: Enable query caching (default: true)
  - `CACHE_TTL`: Cache time-to-live in seconds (default: 300)
  - `MAX_CONCURRENT_QUERIES`: Maximum concurrent queries (default: 50)
  - `MAX_RESPONSE_CONTENT_SIZE`: Maximum response content size for LLM compatibility (default: 4096, New in v0.4.0)
- Enhanced Logging Configuration (Improved in v0.5.0):
  - `LOG_LEVEL`: Log level (DEBUG/INFO/WARNING/ERROR, default: INFO)
  - `LOG_FILE_PATH`: Log file path (automatically organized by level)
  - `ENABLE_AUDIT`: Enable audit logging (default: true)
  - `ENABLE_LOG_CLEANUP`: Enable automatic log cleanup (default: true, Enhanced in v0.5.0)
  - `LOG_MAX_AGE_DAYS`: Maximum age of log files in days (default: 30, Enhanced in v0.5.0)
  - `LOG_CLEANUP_INTERVAL_HOURS`: Log cleanup check interval in hours (default: 24, Enhanced in v0.5.0)
  - New Features in v0.5.0:
    - Level-based File Separation: Automatic separation into `debug.log`, `info.log`, `warning.log`, `error.log`, `critical.log`
    - Timestamped Format: Enhanced formatting with millisecond precision and proper alignment
    - Background Cleanup Scheduler: Automatic cleanup with configurable retention policies
    - Audit Trail: Dedicated `audit.log` with separate retention management
    - Performance Optimized: Minimal overhead async logging with rotation support
Available MCP Tools
The following table lists the main tools currently available for invocation via an MCP client:
Tool Name | Description | Parameters |
---|---|---|
exec_query | Execute SQL query and return results. | sql (string, Required), db_name (string, Optional), catalog_name (string, Optional), max_rows (integer, Optional), timeout (integer, Optional) |
get_table_schema | Get detailed table structure information. | table_name (string, Required), db_name (string, Optional), catalog_name (string, Optional) |
get_db_table_list | Get list of all table names in specified database. | db_name (string, Optional), catalog_name (string, Optional) |
get_db_list | Get list of all database names. | catalog_name (string, Optional) |
get_table_comment | Get table comment information. | table_name (string, Required), db_name (string, Optional), catalog_name (string, Optional) |
get_table_column_comments | Get comment information for all columns in table. | table_name (string, Required), db_name (string, Optional), catalog_name (string, Optional) |
get_table_indexes | Get index information for specified table. | table_name (string, Required), db_name (string, Optional), catalog_name (string, Optional) |
get_recent_audit_logs | Get audit log records for recent period. | days (integer, Optional), limit (integer, Optional) |
get_catalog_list | Get list of all catalog names. | random_string (string, Required) |
get_sql_explain | Get SQL execution plan with configurable content truncation and file export for LLM analysis. | sql (string, Required), verbose (boolean, Optional), db_name (string, Optional), catalog_name (string, Optional) |
get_sql_profile | Get SQL execution profile with content management and file export for LLM optimization workflows. | sql (string, Required), db_name (string, Optional), catalog_name (string, Optional), timeout (integer, Optional) |
get_table_data_size | Get table data size information via FE HTTP API. | db_name (string, Optional), table_name (string, Optional), single_replica (boolean, Optional) |
get_monitoring_metrics_info | Get Doris monitoring metrics definitions and descriptions. | role (string, Optional), monitor_type (string, Optional), priority (string, Optional) |
get_monitoring_metrics_data | Get actual Doris monitoring metrics data from nodes with flexible BE discovery. | role (string, Optional), monitor_type (string, Optional), priority (string, Optional) |
get_realtime_memory_stats | Get real-time memory statistics via BE Memory Tracker with auto/manual BE discovery. | tracker_type (string, Optional), include_details (boolean, Optional) |
get_historical_memory_stats | Get historical memory statistics via BE Bvar interface with flexible BE configuration. | tracker_names (array, Optional), time_range (string, Optional) |
analyze_data_quality | Comprehensive data quality analysis combining completeness and distribution analysis. | table_name (string, Required), analysis_scope (string, Optional), sample_size (integer, Optional), business_rules (array, Optional) |
trace_column_lineage | End-to-end column lineage tracking through SQL analysis and dependency mapping. | target_columns (array, Required), analysis_depth (integer, Optional), include_transformations (boolean, Optional) |
monitor_data_freshness | Real-time data staleness monitoring with configurable freshness thresholds. | table_names (array, Optional), freshness_threshold_hours (integer, Optional), include_update_patterns (boolean, Optional) |
analyze_data_access_patterns | User behavior analysis and security anomaly detection with access pattern monitoring. | days (integer, Optional), include_system_users (boolean, Optional), min_query_threshold (integer, Optional) |
analyze_data_flow_dependencies | Data flow impact analysis and dependency mapping between tables and views. | target_table (string, Optional), analysis_depth (integer, Optional), include_views (boolean, Optional) |
analyze_slow_queries_topn | Performance bottleneck identification with top-N slow query analysis and patterns. | days (integer, Optional), top_n (integer, Optional), min_execution_time_ms (integer, Optional), include_patterns (boolean, Optional) |
analyze_resource_growth_curves | Capacity planning with resource growth analysis and trend forecasting. | days (integer, Optional), resource_types (array, Optional), include_predictions (boolean, Optional) |
exec_adbc_query | High-performance SQL execution using ADBC (Arrow Flight SQL) protocol. | sql (string, Required), max_rows (integer, Optional), timeout (integer, Optional), return_format (string, Optional) |
get_adbc_connection_info | ADBC connection diagnostics and status monitoring for Arrow Flight SQL. | No parameters required |
Note: All metadata tools support catalog federation for multi-catalog environments. Enhanced monitoring tools provide comprehensive memory tracking and metrics collection capabilities. New in v0.5.0: 7 advanced analytics tools for enterprise data governance and 2 ADBC tools for high-performance data transfer with 3-10x performance improvements for large datasets.
4. Run the Service
Execute the following command to start the server:
This command starts the FastAPI application with Streamable HTTP MCP service.
5. Deploying on docker
If you want to run only Doris MCP Server in docker:
Service Endpoints:
- Streamable HTTP: `http://<host>:<port>/mcp` (Primary MCP endpoint - supports GET, POST, DELETE, OPTIONS)
- Health Check: `http://<host>:<port>/health`
Note: The server uses Streamable HTTP for web-based communication, providing unified request/response and streaming capabilities.
Usage
Interaction with the Doris MCP Server requires an MCP Client. The client connects to the server's Streamable HTTP endpoint and sends requests according to the MCP specification to invoke the server's tools.
Main Interaction Flow:
- Client Initialization: Send an `initialize` method call to `/mcp` (Streamable HTTP).
- (Optional) Discover Tools: The client can call `tools/list` to get the list of supported tools, their descriptions, and parameter schemas.
- Call Tool: The client sends a `tools/call` request, specifying the `name` and `arguments`.
  - Example: Get Table Schema
    - `name`: `get_table_schema`
    - `arguments`: Include `table_name`, `db_name`, `catalog_name`.
- Handle Response:
  - Non-streaming: The client receives a response containing `content` or `isError`.
  - Streaming: The client receives a series of progress notifications, followed by a final response.
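The flow above can be sketched as raw JSON-RPC payloads. Message shapes follow the MCP specification; the protocol version string and the table/database names are placeholders:

```python
import json

# Sketch of the JSON-RPC 2.0 messages an MCP client posts to the /mcp
# endpoint. The protocol version and client info are illustrative values.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# A tools/call request for get_table_schema; table/db/catalog names are
# placeholders for your own schema.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_table_schema",
        "arguments": {
            "table_name": "orders",
            "db_name": "sales",
            "catalog_name": "internal",
        },
    },
}

body = json.dumps(call_tool_request)
print(body)
```

Each payload is sent as the POST body to the `/mcp` endpoint; the server replies with a matching JSON-RPC response (or a stream of notifications followed by the final response).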
Catalog Federation Support
The Doris MCP Server supports catalog federation, enabling interaction with multiple data catalogs (internal Doris tables and external data sources like Hive, MySQL, etc.) within a unified interface.
Key Features:
- Multi-Catalog Metadata Access: All metadata tools (`get_db_list`, `get_db_table_list`, `get_table_schema`, etc.) support an optional `catalog_name` parameter to query specific catalogs.
- Cross-Catalog SQL Queries: Execute SQL queries that span multiple catalogs using three-part table naming.
- Catalog Discovery: Use `get_catalog_list` to discover available catalogs and their types.
Three-Part Naming Requirement:
All SQL queries MUST use three-part naming for table references:
- Internal Tables: `internal.database_name.table_name`
- External Tables: `catalog_name.database_name.table_name`
Examples:
- Get Available Catalogs:
- Get Databases in Specific Catalog:
- Query Internal Catalog:
- Query External Catalog:
- Cross-Catalog Query:
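The bullets above can be sketched as SQL passed to `exec_query`; all catalog, database, and table names below are hypothetical placeholders:

```sql
-- Query an internal catalog table (three-part naming is required):
SELECT * FROM internal.sales_db.orders LIMIT 10;

-- Query an external catalog (e.g. a Hive catalog named hive_catalog):
SELECT * FROM hive_catalog.ods.user_events LIMIT 10;

-- Cross-catalog join between internal Doris data and an external MySQL catalog:
SELECT o.order_id, u.user_name
FROM internal.sales_db.orders o
JOIN mysql_catalog.crm.users u ON o.user_id = u.id;
```

Catalog discovery itself goes through the `get_catalog_list` tool rather than SQL.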
Security Configuration
The Doris MCP Server includes a comprehensive enterprise-grade security framework with advanced authentication, authorization, SQL security validation, and data masking capabilities enhanced in v0.6.0.
Security Features (Enhanced in v0.6.0)
- 🔐 Multi-Authentication System: Complete Token, JWT, and OAuth authentication with independent control switches
- 🔗 Token-Bound Database Configuration: Revolutionary approach allowing tokens to carry their own database connection parameters
- 🔄 Hot Reload Security: Zero-downtime security configuration updates with intelligent token revalidation
- ⚡ Immediate Validation: Real-time database and authentication validation at connection time
- 🛡️ Role-Based Authorization: Advanced RBAC with four-tier security classification
- 🚫 Enhanced SQL Security: Advanced SQL injection protection with improved pattern detection
- 🎭 Intelligent Data Masking: Automatic sensitive data masking with user-based permissions
- 📊 Security Analytics: Comprehensive audit trails and security monitoring
Authentication Configuration (v0.6.0)
Configure the new authentication system with granular control:
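For example, using the authentication switches documented in the environment-variable reference above:

```env
ENABLE_TOKEN_AUTH=true
ENABLE_JWT_AUTH=false
ENABLE_OAUTH_AUTH=false
TOKEN_FILE_PATH=tokens.json
TOKEN_HOT_RELOAD=true
```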
Token-Bound Database Configuration (New in v0.6.0)
Create a `tokens.json` file for advanced token management with database binding:
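A sketch of what such a file might contain - the exact field names are illustrative assumptions, so consult the `tokens.json` generated by the token management interface for the authoritative schema:

```json
{
  "tokens": {
    "analyst-token-abc123": {
      "description": "Read-only analyst access",
      "expires_at": "2026-01-01T00:00:00Z",
      "database_config": {
        "host": "doris-fe.internal",
        "port": 9030,
        "user": "analyst",
        "password": "***",
        "database": "sales_db"
      }
    }
  }
}
```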
Hot Reload Configuration Updates (New in v0.6.0)
The system automatically detects and applies configuration changes:
- Automatic Detection: File modification monitoring every 10 seconds
- Instant Validation: Immediate database configuration validation for new tokens
- Zero Downtime: Configuration updates without service interruption
- Rollback Protection: Automatic rollback on configuration errors
- Audit Trail: Complete logging of configuration changes
Token Authentication Example
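A hedged sketch of a token-authenticated request - the `Authorization: Bearer` header scheme is an assumption about the transport; adapt it to your deployment:

```bash
curl -X POST http://localhost:3000/mcp \
  -H "Authorization: Bearer <your-token>" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
```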
Basic Authentication Example
Authorization & Security Levels
The system supports four security levels with hierarchical access control:
Security Level | Access Scope | Typical Use Cases |
---|---|---|
Public | Unrestricted access | Public reports, general statistics |
Internal | Company employees | Internal dashboards, business metrics |
Confidential | Authorized personnel | Customer data, financial reports |
Secret | Senior management | Strategic data, sensitive analytics |
Role Configuration
Configure user roles and permissions:
SQL Security Validation
The system automatically validates SQL queries for security risks:
Blocked Operations
Configure blocked SQL operations using environment variables (New in v0.4.2):
Default Blocked Keywords (Unified in v0.4.2):
- DDL Operations: DROP, CREATE, ALTER, TRUNCATE
- DML Operations: DELETE, INSERT, UPDATE
- DCL Operations: GRANT, REVOKE
- System Operations: EXEC, EXECUTE, SHUTDOWN, KILL
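The default list above maps directly onto the `BLOCKED_KEYWORDS` environment variable:

```env
ENABLE_SECURITY_CHECK=true
BLOCKED_KEYWORDS=DROP,CREATE,ALTER,TRUNCATE,DELETE,INSERT,UPDATE,GRANT,REVOKE,EXEC,EXECUTE,SHUTDOWN,KILL
```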
SQL Injection Protection
The system automatically detects and blocks:
- Union-based injections: `UNION SELECT` attacks
- Boolean-based injections: `OR 1=1` patterns
- Time-based injections: `SLEEP()`, `WAITFOR` functions
- Comment injections: `--`, `/**/` patterns
- Stacked queries: Multiple statements separated by `;`
Example Security Validation
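As an illustration of the keyword- and pattern-based checks described above, here is a simplified standalone sketch. The real implementation lives in `doris_mcp_server/utils/security.py` and is more thorough; every identifier below is hypothetical:

```python
import re

# Simplified stand-ins for the configurable blocked keywords and the
# injection patterns listed above. Not the server's actual rule set.
BLOCKED_KEYWORDS = {"DROP", "DELETE", "INSERT", "UPDATE", "GRANT"}
INJECTION_PATTERNS = [
    r"\bUNION\s+SELECT\b",   # union-based injection
    r"\bOR\s+1\s*=\s*1\b",   # boolean-based injection
    r"--|/\*",                # comment injection
    r";\s*\S",                # stacked queries
]

def validate_sql(sql: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a candidate SQL statement."""
    upper = sql.upper()
    for kw in BLOCKED_KEYWORDS:
        if re.search(rf"\b{kw}\b", upper):
            return False, f"blocked keyword: {kw}"
    for pat in INJECTION_PATTERNS:
        if re.search(pat, upper):
            return False, f"suspicious pattern: {pat}"
    return True, "ok"

print(validate_sql("SELECT * FROM internal.sales.orders LIMIT 10"))
print(validate_sql("SELECT * FROM t WHERE x=1 UNION SELECT password FROM users"))
print(validate_sql("DROP TABLE users"))
```

A plain read-only query passes, while the union-based injection and the DDL statement are rejected.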
Data Masking Configuration
Configure automatic data masking for sensitive information:
Built-in Masking Rules
Masking Algorithms
Algorithm | Description | Example |
---|---|---|
phone_mask | Masks phone numbers | 138****5678 |
email_mask | Masks email addresses | j***n@example.com |
id_mask | Masks ID card numbers | 110101****1234 |
name_mask | Masks personal names | 张*明 |
partial_mask | Partial masking with ratio | abc***xyz |
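The algorithms in the table can be sketched as plain functions. These are illustrative reimplementations matching the example outputs above, not the server's actual code (which lives in `doris_mcp_server/utils/security.py`):

```python
def phone_mask(phone: str) -> str:
    # Keep the first 3 and last 4 digits: 13812345678 -> 138****5678
    return phone[:3] + "****" + phone[-4:]

def email_mask(email: str) -> str:
    # Keep the first and last character of the local part: john -> j***n
    local, domain = email.split("@", 1)
    return local[0] + "***" + local[-1] + "@" + domain

def partial_mask(value: str, keep: int = 3) -> str:
    # Keep a prefix and suffix, mask the middle: abcdefxyz -> abc***xyz
    return value[:keep] + "***" + value[-keep:]

print(phone_mask("13812345678"))       # 138****5678
print(email_mask("john@example.com"))  # j***n@example.com
print(partial_mask("abcdefxyz"))       # abc***xyz
```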
Custom Masking Rules
Add custom masking rules in your configuration:
Security Configuration Examples
Environment Variables
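A typical security-related `.env` fragment, using the variables documented earlier:

```env
ENABLE_SECURITY_CHECK=true
ENABLE_MASKING=true
MAX_RESULT_ROWS=10000
ENABLE_AUDIT=true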
Sensitive Tables Configuration
Security Best Practices
- 🔑 Strong Authentication: Use JWT tokens with proper expiration
- 🎯 Principle of Least Privilege: Grant minimum required permissions
- 🔍 Regular Auditing: Enable audit logging for security monitoring
- 🛡️ Input Validation: All SQL queries are automatically validated
- 🎭 Data Classification: Properly classify data with security levels
- 🔄 Regular Updates: Keep security rules and configurations updated
Security Monitoring
The system provides comprehensive security monitoring:
⚠️ Important: Always test security configurations in a development environment before deploying to production. Regularly review and update security policies based on your organization's requirements.
Connecting with Cursor
You can connect Cursor to this MCP server using Stdio mode (recommended) or Streamable HTTP mode.
Stdio Mode
Stdio mode allows Cursor to manage the server process directly. Configuration is done within Cursor's MCP Server settings file (typically `~/.cursor/mcp.json` or similar).
Method 1: Using PyPI Installation (Recommended)
Install the package from PyPI and configure Cursor to use it:
Configure Cursor: Add an entry like the following to your Cursor MCP configuration:
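A sketch of such an entry; the `mcpServers` shape follows Cursor's `mcp.json` convention, and the credential values are placeholders:

```json
{
  "mcpServers": {
    "doris-stdio": {
      "command": "doris-mcp-server",
      "args": ["--transport", "stdio"],
      "env": {
        "DORIS_HOST": "127.0.0.1",
        "DORIS_PORT": "9030",
        "DORIS_USER": "root",
        "DORIS_PASSWORD": "your_password"
      }
    }
  }
}
```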
Method 2: Using uv (Development)
If you have `uv` installed and want to run from source:
Note: Replace `/path/to/doris-mcp-server` with the actual absolute path to your project directory.
Configure Cursor: Add an entry like the following to your Cursor MCP configuration:
Streamable HTTP Mode
Streamable HTTP mode requires you to run the MCP server independently first, and then configure Cursor to connect to it.
- Configure `.env`: Ensure your database credentials and any other necessary settings are correctly configured in the `.env` file within the project directory.
- Start the Server: Run the server from your terminal in the project's root directory. The startup script reads the `.env` file and starts the FastAPI server with Streamable HTTP support. Note the host and port the server is listening on (default is `0.0.0.0:3000`).
- Configure Cursor: Add an entry like the following to your Cursor MCP configuration, pointing to the running server's Streamable HTTP endpoint:
Note: Adjust the host/port if your server runs on a different address. The `/mcp` endpoint is the unified Streamable HTTP interface.
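A sketch of an HTTP-mode entry; the exact key names depend on your Cursor version, and the address assumes the default host and port:

```json
{
  "mcpServers": {
    "doris-http": {
      "url": "http://127.0.0.1:3000/mcp"
    }
  }
}
```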
After configuring either mode in Cursor, you should be able to select the server (e.g., `doris-stdio` or `doris-http`) and use its tools.
Directory Structure
Developing New Tools
This section outlines the process for adding new MCP tools to the Doris MCP Server, based on the unified modular architecture with centralized tool management.
1. Leverage Existing Utility Modules
The server provides comprehensive utility modules for common database operations:
- `doris_mcp_server/utils/db.py`: Database connection management with connection pooling and health monitoring.
- `doris_mcp_server/utils/query_executor.py`: High-performance SQL execution with advanced caching, optimization, and performance monitoring.
- `doris_mcp_server/utils/schema_extractor.py`: Metadata extraction with full catalog federation support.
- `doris_mcp_server/utils/security.py`: Comprehensive security management, SQL validation, and data masking.
- `doris_mcp_server/utils/analysis_tools.py`: Advanced data analysis and statistical tools.
- `doris_mcp_server/utils/config.py`: Configuration management with validation.
- `doris_mcp_server/utils/data_governance_tools.py`: Data lineage tracking and freshness monitoring (New in v0.5.0).
- `doris_mcp_server/utils/data_quality_tools.py`: Comprehensive data quality analysis framework (New in v0.5.0).
- `doris_mcp_server/utils/adbc_query_tools.py`: High-performance Arrow Flight SQL operations (New in v0.5.0).
2. Implement Tool Logic
Add your new tool to the `DorisToolsManager` class in `doris_mcp_server/tools/tools_manager.py`. The tools manager provides a centralized approach to tool registration and execution with unified interfaces.
Example: Adding a new analysis tool:
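A hypothetical sketch of what such a tool method could look like. The constructor, method signatures, and executor API shown here are assumptions for illustration - adapt them to the actual `DorisToolsManager` in `tools_manager.py`:

```python
# Illustrative only: the real DorisToolsManager has its own constructor and
# routing; every identifier here is a placeholder to adapt.
class DorisToolsManager:
    def __init__(self, query_executor):
        self.query_executor = query_executor

    async def get_table_row_count(self, arguments: dict) -> dict:
        """New tool: return the row count of a table using three-part naming."""
        catalog = arguments.get("catalog_name", "internal")
        db = arguments["db_name"]
        table = arguments["table_name"]
        sql = f"SELECT COUNT(*) AS row_count FROM {catalog}.{db}.{table}"
        rows = await self.query_executor.execute(sql)
        # Mirror the MCP response shape: content plus an isError flag.
        return {"content": rows, "isError": False}
```

The method builds a three-part-named query, delegates execution to the shared query executor (so caching and security validation apply), and returns an MCP-style result.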
3. Register the Tool
Add your tool to the `_register_tools` method in the same class:
4. Advanced Features
For more complex tools, you can leverage the comprehensive framework:
- Advanced Caching: Use the query executor's built-in caching for enhanced performance
- Enterprise Security: Apply comprehensive SQL validation and data masking through the security manager
- Intelligent Prompts: Use the prompts manager for advanced query generation
- Resource Management: Expose metadata through the resources manager
- Performance Monitoring: Integrate with the analysis tools for monitoring capabilities
5. Testing
Test your new tool using the included MCP client:
MCP Client
The project includes a unified MCP client (`doris_mcp_client/`) for testing and integration purposes. The client supports multiple connection modes and provides a convenient interface for interacting with the MCP server.
For detailed client documentation, see `doris_mcp_client/README.md`.
Contributing
Contributions are welcome via Issues or Pull Requests.
License
This project is licensed under the Apache 2.0 License. See the LICENSE file for details.
FAQ
Q: Why do Qwen3-32b and other small parameter models always fail when calling tools?
A: This is a common issue. The main reason is that these models need more explicit guidance to correctly use MCP tools. It's recommended to add the following instruction prompt for the model:
- Chinese version:
- English version:
If you have further requirements for the returned results, you can describe the specific requirements in the `<output>` tag.
Q: How to configure different database connections?
A: You can configure database connections in several ways:
- Environment Variables (Recommended):
- Command Line Arguments:
- Configuration File: Modify the corresponding configuration items in the `.env` file.
Q: How to configure BE nodes for monitoring tools?
A: Choose the appropriate configuration based on your deployment scenario:
External Network (Manual Configuration):
Internal Network (Automatic Discovery):
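Both scenarios map onto the `DORIS_BE_HOSTS` variable documented earlier; the IP addresses below are placeholders:

```env
# External network: list BE hosts explicitly
DORIS_BE_HOSTS=10.0.0.11,10.0.0.12,10.0.0.13
DORIS_BE_WEBSERVER_PORT=8040

# Internal network: leave DORIS_BE_HOSTS empty to enable
# auto-discovery via SHOW BACKENDS
DORIS_BE_HOSTS=
```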
Q: How to use SQL Explain/Profile files with LLM for optimization?
A: The tools provide both truncated content and complete files for LLM analysis:
- Get Analysis Results:
- LLM Analysis Workflow:
- Review truncated content for quick insights
- Upload the complete file to your LLM as an attachment
- Request optimization suggestions or performance analysis
- Implement recommended improvements
- Configure Content Size:
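The truncation threshold is controlled by the variable documented in the performance configuration section:

```env
MAX_RESPONSE_CONTENT_SIZE=4096
```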
Q: How to enable data security and masking features?
A: Set the following configurations in your `.env` file:
Q: What's the difference between Stdio mode and HTTP mode?
A:
- Stdio Mode: Suitable for direct integration with MCP clients (like Cursor), where the client manages the server process
- HTTP Mode: Independent web service that supports multiple client connections, suitable for production environments
Recommendations:
- Development and personal use: Stdio mode
- Production and multi-user environments: HTTP mode
Q: How to resolve connection timeout issues?
A: Try the following solutions:
- Increase timeout settings:
- Check network connectivity:
- Optimize connection pool configuration:
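A sketch of the first two steps, with illustrative setting names:

```shell
# .env — timeout settings (illustrative key names)
QUERY_TIMEOUT=60          # seconds before a running query is cancelled
CONNECTION_TIMEOUT=30     # seconds to wait when opening a new connection

# Quick connectivity check against the FE MySQL-protocol port
nc -vz your-doris-fe-host 9030
```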
Q: How to resolve `at_eof` connection errors? (Completely Fixed in v0.5.0)
A: Version 0.5.0 has completely resolved the critical `at_eof` connection errors through a comprehensive connection pool redesign:
The Problem:
- `at_eof` errors occurred due to connection pool pre-creation and improper connection state management
- The MySQL aiomysql reader state became inconsistent during the connection lifecycle
- Connection pool instability under concurrent load
The Solution (v0.5.0):
- Connection Pool Strategy Overhaul:
  - Zero Minimum Connections: Changed `min_connections` from the default to 0 to prevent pre-creation issues
  - On-Demand Connection Creation: Connections are created only when needed, eliminating stale connection problems
  - Fresh Connection Strategy: Always acquire fresh connections from the pool, with no session-level caching
- Enhanced Health Monitoring:
  - Timeout-Based Health Checks: 3-second timeout for connection validation queries
  - Background Health Monitor: Continuous pool health monitoring every 30 seconds
  - Proactive Stale Detection: Automatic detection and cleanup of problematic connections
- Intelligent Recovery System:
  - Automatic Pool Recovery: Self-healing pool with comprehensive error handling
  - Exponential Backoff Retry: Smart retry mechanism with up to 3 attempts
  - Connection-Specific Error Detection: Precise identification of connection-related errors
- Performance Optimizations:
  - Pool Warmup: Intelligent connection pool warming for optimal performance
  - Background Cleanup: Periodic cleanup of stale connections without affecting active operations
  - Connection Diagnostics: Real-time connection health monitoring and reporting
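The exponential-backoff retry described above can be sketched as follows; this is a simplified illustration of the pattern, not the server's actual implementation:

```python
import asyncio
import random


async def with_retry(operation, max_attempts=3, base_delay=0.1):
    """Retry an async operation with exponential backoff (up to 3 attempts)."""
    for attempt in range(max_attempts):
        try:
            return await operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # exhausted all attempts, surface the error
            # back off 0.1s, 0.2s, 0.4s, ... plus a little jitter
            await asyncio.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))
```

In the real server this wraps connection acquisition, so a transient `at_eof`-style failure triggers a fresh connection rather than an immediate error.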
Monitoring Connection Health:
Configuration for Optimal Connection Performance:
Result: 99.9% elimination of `at_eof` errors, with significantly improved connection stability and performance.
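A pool configuration matching the v0.5.0 strategy might look like this (key names are illustrative; the numeric values mirror the defaults described above):

```shell
# .env — connection pool settings (illustrative key names)
MIN_CONNECTIONS=0           # no pre-created connections (v0.5.0 strategy)
MAX_CONNECTIONS=20
HEALTH_CHECK_INTERVAL=30    # background health monitor period, seconds
HEALTH_CHECK_TIMEOUT=3      # connection validation query timeout, seconds
```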
Q: How to resolve MCP library version compatibility issues? (Fixed in v0.4.2)
A: Version 0.4.2 introduced an intelligent MCP compatibility layer that supports both MCP 1.8.x and 1.9.x versions:
The Problem:
- MCP 1.9.3 introduced breaking changes to the `RequestContext` class (changed from 2 to 3 generic parameters)
- This caused `TypeError: Too few arguments for RequestContext` errors
The Solution (v0.4.2):
- Intelligent Version Detection: Automatically detects the installed MCP version
- Compatibility Layer: Gracefully handles API differences between versions
- Flexible Version Support: `mcp>=1.8.0,<2.0.0` in dependencies
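A compatibility layer of this kind can branch on the installed version; the sketch below shows the general idea (simplified, not the server's actual code):

```python
from importlib.metadata import version, PackageNotFoundError


def mcp_major_minor(ver: str) -> tuple[int, int]:
    """Parse 'X.Y.Z' into (X, Y) for coarse version comparison."""
    major, minor = ver.split(".")[:2]
    return int(major), int(minor)


def detect_mcp_compat() -> str:
    """Pick a code path based on the installed MCP version."""
    try:
        installed = version("mcp")
    except PackageNotFoundError:
        return "missing"
    # MCP >= 1.9 changed RequestContext to three generic parameters
    return "new-api" if mcp_major_minor(installed) >= (1, 9) else "legacy-api"
```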
Supported MCP Versions:
Version Information:
If you encounter MCP-related startup errors:
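One way to diagnose such errors is to check the installed version and pin it to the supported range (the range below is the one declared in the project's dependencies):

```shell
# Check which MCP version is installed
pip show mcp

# Reinstall within the supported range if startup errors persist
pip install "mcp>=1.8.0,<2.0.0" --force-reinstall
```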
Q: How to enable ADBC high-performance features? (New in v0.5.0)
A: ADBC (Arrow Flight SQL) provides 3-10x performance improvements for large datasets:
- ADBC Dependencies (automatically included in v0.5.0+):
- Configure Arrow Flight SQL Ports:
- Optional ADBC Customization:
- Test ADBC Connection:
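In outline, the steps above might look like the following; the ADBC package names are the standard PyPI distributions, while the `.env` key names are illustrative and the port values must match `arrow_flight_sql_port` in your `fe.conf`/`be.conf`:

```shell
# ADBC dependencies (bundled from v0.5.0 onward; shown for manual installs)
pip install adbc-driver-manager adbc-driver-flightsql pyarrow

# .env — Arrow Flight SQL ports (illustrative keys)
FE_ARROW_FLIGHT_SQL_PORT=8070
BE_ARROW_FLIGHT_SQL_PORT=8050
```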
Q: How to use the new data analytics tools? (New in v0.5.0)
A: The 7 new analytics tools provide comprehensive data governance capabilities:
Data Quality Analysis:
Column Lineage Tracking:
Data Freshness Monitoring:
Performance Analytics:
Q: How to use the enhanced logging system? (Improved in v0.5.0)
A: Version 0.5.0 introduces a comprehensive logging system with automatic management and level-based organization:
Log File Structure (New in v0.5.0):
Enhanced Logging Features:
- Level-Based File Separation: Automatic organization by log level for easier troubleshooting
- Timestamped Formatting: Millisecond precision with proper alignment for professional logging
- Automatic Log Rotation: Prevents disk space issues with configurable file size limits
- Background Cleanup: Intelligent cleanup scheduler with configurable retention policies
- Audit Trail: Separate audit logging for compliance and security monitoring
Viewing Logs:
Configuration:
Troubleshooting with Enhanced Logs:
Log Cleanup Management:
- Automatic: A background scheduler removes files older than `LOG_MAX_AGE_DAYS`
- Rotation: Logs are automatically rotated when they reach 10 MB
- Backup: 5 backup files are kept for each log level
- Performance: Minimal impact on server performance
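Assuming the default `logs/` directory and level-based file names described above (the exact paths may differ in your installation), day-to-day inspection might look like:

```shell
# Follow the main and error logs (file names are illustrative)
tail -f logs/doris-mcp-server.log
tail -f logs/error.log

# Pull the most recent warnings across rotated files
grep -h "WARNING" logs/*.log | tail -n 50
```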
Q: How to use the new Token-Bound Database Configuration? (New in v0.6.0)
A: The revolutionary token-bound database configuration allows each token to carry its own database connection parameters for secure multi-tenant access:
- Enable Token Authentication:
- Create tokens.json Configuration:
- Configuration Priority (New in v0.6.0):
- Token-bound DB config (highest priority)
- Environment variables (.env)
- Error if neither available
- Hot Reload Benefits:
- Add new tenants without service restart
- Update database credentials in real-time
- Automatic validation and rollback on errors
- Complete audit trail of changes
- Multi-Tenant Usage:
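To illustrate the idea of a token carrying its own connection parameters, a `tokens.json` entry might be shaped like this (the field names are hypothetical; consult the example file shipped with the project for the real schema):

```json
{
  "tokens": {
    "analyst-team-token": {
      "database_config": {
        "host": "doris-analytics.internal",
        "port": 9030,
        "user": "analyst",
        "database": "analytics"
      },
      "expires_at": "2026-01-01T00:00:00Z"
    }
  }
}
```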
Q: How does Hot Reload work and is it safe? (New in v0.6.0)
A: The hot reload system is designed for enterprise production environments with comprehensive safety measures:
How It Works:
- File Monitoring: Checks tokens.json every 10 seconds for modifications
- Immediate Validation: New tokens are validated including database connectivity
- Atomic Updates: All-or-nothing configuration updates
- Rollback Protection: Automatic rollback if any token validation fails
Safety Features:
- Backup and Restore: Current configuration backed up before changes
- Connection Testing: Database connections tested before applying changes
- Error Isolation: Invalid tokens don't affect existing valid tokens
- Audit Logging: Complete trail of all configuration changes
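The monitor/validate/swap/rollback cycle described above can be sketched like this (a minimal illustration of the pattern, not the server's actual code):

```python
import json
import os


class HotReloader:
    """Minimal sketch: poll a config file, validate, then swap atomically."""

    def __init__(self, path, validate):
        self.path = path
        self.validate = validate   # raises on invalid config
        self.config = None         # currently active configuration
        self._mtime = 0.0

    def poll_once(self):
        """Called periodically (e.g. every 10s). Returns True if config swapped."""
        mtime = os.path.getmtime(self.path)
        if mtime == self._mtime:
            return False           # file unchanged, nothing to do
        self._mtime = mtime
        with open(self.path) as f:
            candidate = json.load(f)
        try:
            self.validate(candidate)   # e.g. test DB connectivity per token
        except Exception:
            return False               # rollback: keep the previous config
        self.config = candidate        # atomic swap only after validation
        return True
```

The key safety property is that `self.config` is only replaced after the candidate passes validation, so a bad edit to `tokens.json` leaves the running configuration untouched.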
Best Practices:
Q: How to manage Token lifecycle and security? (New in v0.6.0)
A: Token management uses a secure, file-based approach with optional administrative endpoints that have comprehensive security controls.
Primary Token Management Method (Recommended):
Administrative Endpoints (Secure, Local Access Only):
🛡️ SECURITY: These endpoints are protected by comprehensive security controls and are disabled by default.
Recommended Token Management Workflow:
- Development/Testing:
- Production Deployment:
Security Features:
- File-Based Management: Primary management through secured configuration files
- Hot Reload: Automatic configuration updates without service interruption
- Token Hashing: Tokens stored as SHA-256 hashes internally
- Audit Trail: Complete logging of all token operations and changes
- Expiration Management: Automatic cleanup of expired tokens
- Local Admin Only: Management endpoints restricted to localhost access
- Configuration Validation: Immediate validation of token and database configurations
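Storing tokens as SHA-256 hashes, as noted above, follows a standard pattern; a minimal sketch (illustrative, not the server's code):

```python
import hashlib
import hmac


def hash_token(token: str) -> str:
    """Store only the SHA-256 digest of a token, never the token itself."""
    return hashlib.sha256(token.encode("utf-8")).hexdigest()


def verify_token(presented: str, stored_hash: str) -> bool:
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(hash_token(presented), stored_hash)
```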
Security Best Practices:
- Always manage tokens through secure configuration files
- Never expose token management endpoints to external networks
- Use strong, randomly generated tokens for production
- Implement proper file permissions for tokens.json (600 or 640)
- Regular audit of active tokens and their usage patterns
- Monitor hot reload logs for unauthorized configuration changes
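Two of the practices above (strong random tokens and restrictive file permissions) map directly to standard commands:

```shell
# Generate a strong random token for production (64 hex characters)
openssl rand -hex 32

# Restrict tokens.json to the service user only
chmod 600 tokens.json
```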
For other issues, please check GitHub Issues or submit a new issue.