Roadmap and current support:
- Elasticsearch - planned for Q4 2025: search-optimized AI interactions, enabling context-aware queries and operations.
- MongoDB - on the roadmap for Q3 2025: document-oriented schema understanding, enabling AI reasoning over MongoDB databases.
- MySQL - supported: provides rich context about schema, relationships, and data patterns, with SQL queries, data modification, and transaction management under full schema awareness.
- PostgreSQL - supported: schema exploration, query execution, and performance analysis with schema-aware context.
- Redis - planned for Q3 2025: key-value pattern recognition for AI working with Redis databases.
- SQLite - targeted for Q3 2025: lightweight database understanding with full context awareness.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Multi Database MCP Server show me the top 5 customers by total sales from the mysql1 database".
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Multi Database MCP Server
Overview
The DB MCP Server provides a standardized way for AI models to interact with multiple databases simultaneously. Built on the FreePeak/cortex framework, it enables AI assistants to execute SQL queries, manage transactions, explore schemas, and analyze performance across different database systems through a unified interface.
Core Concepts
Multi-Database Support
Unlike traditional database connectors, DB MCP Server can connect to and interact with multiple databases concurrently:
{
"connections": [
{
"id": "mysql1",
"type": "mysql",
"host": "localhost",
"port": 3306,
"name": "db1",
"user": "user1",
"password": "password1"
},
{
"id": "postgres1",
"type": "postgres",
"host": "localhost",
"port": 5432,
"name": "db2",
"user": "user2",
"password": "password2"
},
{
"id": "oracle1",
"type": "oracle",
"host": "localhost",
"port": 1521,
"service_name": "XEPDB1",
"user": "user3",
"password": "password3"
}
]
}
Dynamic Tool Generation
For each connected database, the server automatically generates specialized tools:
// For a database with ID "mysql1", these tools are generated:
query_mysql1 // Execute SQL queries
execute_mysql1 // Run data modification statements
transaction_mysql1 // Manage transactions
schema_mysql1 // Explore database schema
performance_mysql1 // Analyze query performance
Clean Architecture
The server follows Clean Architecture principles with these layers:
Domain Layer: Core business entities and interfaces
Repository Layer: Data access implementations
Use Case Layer: Application business logic
Delivery Layer: External interfaces (MCP tools)
Features
Simultaneous Multi-Database Support: Connect to multiple MySQL, PostgreSQL, SQLite, and Oracle databases concurrently
Lazy Loading Mode: Defer connection establishment until first use - perfect for setups with 10+ databases (enable with the --lazy-loading flag)
Database-Specific Tool Generation: Auto-creates specialized tools for each connected database
Clean Architecture: Modular design with clear separation of concerns
OpenAI Agents SDK Compatibility: Full compatibility for seamless AI assistant integration
Dynamic Database Tools: Execute queries, run statements, manage transactions, explore schemas, analyze performance
Unified Interface: Consistent interaction patterns across different database types
Connection Management: Simple configuration for multiple database connections
Health Check: Automatic validation of database connectivity on startup
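The lazy-loading mode listed above can be illustrated with a small sketch. Python is used here purely for illustration (the actual server is written in Go), and the `LazyPool` class name is hypothetical:

```python
import sqlite3

class LazyPool:
    """Defer opening each database until a tool first touches it."""

    def __init__(self, configs):
        # configs: mapping of connection id -> database path
        self.configs = configs
        self.connections = {}  # populated on first use only

    def get(self, db_id):
        # Connect lazily: only the databases actually queried pay the
        # connection cost, which matters with 10+ configured databases.
        if db_id not in self.connections:
            self.connections[db_id] = sqlite3.connect(self.configs[db_id])
        return self.connections[db_id]

pool = LazyPool({"db1": ":memory:", "db2": ":memory:"})
pool.get("db1").execute("CREATE TABLE t (x INTEGER)")
print(len(pool.connections))  # prints 1: db2 was never opened
```

The same idea applies regardless of engine: the configuration is parsed eagerly, but each physical connection is established on first use.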
Supported Databases
Database | Status | Features |
MySQL | ✅ Full Support | Queries, Transactions, Schema Analysis, Performance Insights |
PostgreSQL | ✅ Full Support (v9.6-17) | Queries, Transactions, Schema Analysis, Performance Insights |
SQLite | ✅ Full Support | File-based & In-memory databases, SQLCipher encryption support |
Oracle | ✅ Full Support (10g-23c) | Queries, Transactions, Schema Analysis, RAC, Cloud Wallet, TNS |
TimescaleDB | ✅ Full Support | Hypertables, Time-Series Queries, Continuous Aggregates, Compression, Retention Policies |
Deployment Options
The DB MCP Server can be deployed in multiple ways to suit different environments and integration needs:
Docker Deployment
# Pull the latest image
docker pull freepeak/db-mcp-server:latest
# Run with mounted config file
docker run -p 9092:9092 \
-v $(pwd)/config.json:/app/my-config.json \
-e TRANSPORT_MODE=sse \
-e CONFIG_PATH=/app/my-config.json \
freepeak/db-mcp-server
Note: Mount to /app/my-config.json, as the container already has a default file at /app/config.json.
STDIO Mode (IDE Integration)
# Run the server in STDIO mode
./bin/server -t stdio -c config.json
For Cursor IDE integration, add to .cursor/mcp.json:
{
"mcpServers": {
"stdio-db-mcp-server": {
"command": "/path/to/db-mcp-server/server",
"args": ["-t", "stdio", "-c", "/path/to/config.json"]
}
}
}
SSE Mode (Server-Sent Events)
# Default configuration (localhost:9092)
./bin/server -t sse -c config.json
# Custom host and port
./bin/server -t sse -host 0.0.0.0 -port 8080 -c config.json
Client connection endpoint: http://localhost:9092/sse
Source Code Installation
# Clone the repository
git clone https://github.com/FreePeak/db-mcp-server.git
cd db-mcp-server
# Build the server
make build
# Run the server
./bin/server -t sse -c config.json
Configuration
Database Configuration File
Create a config.json file with your database connections:
{
"connections": [
{
"id": "mysql1",
"type": "mysql",
"host": "mysql1",
"port": 3306,
"name": "db1",
"user": "user1",
"password": "password1",
"query_timeout": 60,
"max_open_conns": 20,
"max_idle_conns": 5,
"conn_max_lifetime_seconds": 300,
"conn_max_idle_time_seconds": 60
},
{
"id": "postgres1",
"type": "postgres",
"host": "postgres1",
"port": 5432,
"name": "db1",
"user": "user1",
"password": "password1"
},
{
"id": "sqlite_app",
"type": "sqlite",
"database_path": "./data/app.db",
"journal_mode": "WAL",
"cache_size": 2000,
"read_only": false,
"use_modernc_driver": true,
"query_timeout": 30,
"max_open_conns": 1,
"max_idle_conns": 1
},
{
"id": "sqlite_encrypted",
"type": "sqlite",
"database_path": "./data/secure.db",
"encryption_key": "your-secret-key-here",
"journal_mode": "WAL",
"use_modernc_driver": false
},
{
"id": "sqlite_memory",
"type": "sqlite",
"database_path": ":memory:",
"cache_size": 1000,
"use_modernc_driver": true
}
]
}
Command-Line Options
# Basic syntax
./bin/server -t <transport> -c <config-file>
# SSE transport options
./bin/server -t sse -host <hostname> -port <port> -c <config-file>
# Lazy loading mode (recommended for 10+ databases)
./bin/server -t stdio -c <config-file> --lazy-loading
# Customize log directory (useful for multi-project setups)
./bin/server -t stdio -c <config-file> -log-dir /tmp/db-mcp-logs
# Inline database configuration
./bin/server -t stdio -db-config '{"connections":[...]}'
# Environment variable configuration
export DB_CONFIG='{"connections":[...]}'
./bin/server -t stdio
Available Flags:
-t, -transport: Transport mode (stdio or sse)
-c, -config: Path to database configuration file
-p, -port: Server port for SSE mode (default: 9092)
-h, -host: Server host for SSE mode (default: localhost)
-log-level: Log level (debug, info, warn, error)
-log-dir: Directory for log files (default: ./logs in current directory)
-db-config: Inline JSON database configuration
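Since the inline -db-config value and the DB_CONFIG environment variable are ordinary JSON, they can be generated programmatically. A sketch (Python used only for illustration; treating id and type as the minimal required keys is an assumption based on the configuration examples in this document):

```python
import json
import os

def make_db_config(connections):
    # Validate the keys every example in this document supplies,
    # then serialize for the DB_CONFIG environment variable.
    for conn in connections:
        for field in ("id", "type"):  # minimal required keys (assumed)
            if field not in conn:
                raise ValueError(f"connection missing {field!r}")
    return json.dumps({"connections": connections})

os.environ["DB_CONFIG"] = make_db_config([
    {"id": "mysql1", "type": "mysql", "host": "localhost", "port": 3306,
     "name": "db1", "user": "user1", "password": "password1"},
])
print(json.loads(os.environ["DB_CONFIG"])["connections"][0]["id"])  # mysql1
```

The server would then be started with ./bin/server -t stdio, picking the configuration up from the environment.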
SQLite Configuration Options
When using SQLite databases, you can leverage these additional configuration options:
SQLite Connection Parameters
Parameter | Type | Default | Description |
database_path | string | Required | Path to SQLite database file, or :memory: for an in-memory database |
encryption_key | string | - | Key for SQLCipher encrypted databases |
read_only | boolean | false | Open database in read-only mode |
cache_size | integer | 2000 | SQLite cache size in pages |
journal_mode | string | "WAL" | Journal mode: DELETE, TRUNCATE, PERSIST, WAL, OFF |
use_modernc_driver | boolean | true | Use modernc.org/sqlite (CGO-free) or mattn/go-sqlite3 |
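These options map onto standard SQLite PRAGMAs, so their semantics can be checked directly with Python's built-in sqlite3 module, independent of this server (shown only to illustrate the behavior):

```python
import os
import sqlite3
import tempfile

# journal_mode is a property of the database file; on a file-backed
# database, PRAGMA journal_mode=WAL switches it and reports the result.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # prints "wal"

# An in-memory database cannot use WAL and always reports "memory",
# which is why journal_mode is irrelevant for ":memory:" connections.
mem = sqlite3.connect(":memory:")
print(mem.execute("PRAGMA journal_mode=WAL").fetchone()[0])  # prints "memory"
```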
SQLite Examples
Basic File Database
{
"id": "my_sqlite_db",
"type": "sqlite",
"database_path": "./data/myapp.db",
"journal_mode": "WAL",
"cache_size": 2000
}
Encrypted Database (SQLCipher)
{
"id": "encrypted_db",
"type": "sqlite",
"database_path": "./data/secure.db",
"encryption_key": "your-secret-encryption-key",
"use_modernc_driver": false
}
In-Memory Database
{
"id": "memory_db",
"type": "sqlite",
"database_path": ":memory:",
"cache_size": 1000
}
Read-Only Database
{
"id": "reference_data",
"type": "sqlite",
"database_path": "./data/reference.db",
"read_only": true,
"journal_mode": "DELETE"
}
Oracle Configuration Options
When using Oracle databases, you can leverage these additional configuration options:
Oracle Connection Parameters
Parameter | Type | Default | Description |
host | string | Required | Oracle database host |
port | integer | 1521 | Oracle listener port |
service_name | string | - | Service name (recommended for RAC) |
sid | string | - | System identifier (legacy; prefer service_name) |
user | string | Required | Database username |
password | string | Required | Database password |
wallet_location | string | - | Path to Oracle Cloud wallet directory |
tns_admin | string | - | Path to directory containing tnsnames.ora |
tns_entry | string | - | Named entry from tnsnames.ora |
edition | string | - | Edition-Based Redefinition edition name |
| boolean | false | Enable driver-level connection pooling |
| boolean | false | Allow queries on standby databases |
| string | AMERICAN_AMERICA.AL32UTF8 | Character set configuration |
Oracle Examples
Basic Oracle Connection (Development)
{
"id": "oracle_dev",
"type": "oracle",
"host": "localhost",
"port": 1521,
"service_name": "XEPDB1",
"user": "testuser",
"password": "testpass",
"max_open_conns": 50,
"max_idle_conns": 10,
"conn_max_lifetime_seconds": 1800
}
Oracle with SID (Legacy)
{
"id": "oracle_legacy",
"type": "oracle",
"host": "oracledb.company.com",
"port": 1521,
"sid": "ORCL",
"user": "app_user",
"password": "app_password"
}
Oracle Cloud Autonomous Database (with Wallet)
{
"id": "oracle_cloud",
"type": "oracle",
"user": "ADMIN",
"password": "your-cloud-password",
"wallet_location": "/path/to/wallet_DBNAME",
"service_name": "dbname_high"
}
Oracle RAC (Real Application Clusters)
{
"id": "oracle_rac",
"type": "oracle",
"host": "scan.company.com",
"port": 1521,
"service_name": "production",
"user": "app_user",
"password": "app_password",
"max_open_conns": 100,
"max_idle_conns": 20
}
Oracle with TNS Entry
{
"id": "oracle_tns",
"type": "oracle",
"tns_admin": "/opt/oracle/network/admin",
"tns_entry": "PROD_DB",
"user": "app_user",
"password": "app_password"
}
Oracle with Edition-Based Redefinition
{
"id": "oracle_ebr",
"type": "oracle",
"host": "oracledb.company.com",
"port": 1521,
"service_name": "production",
"user": "app_user",
"password": "app_password",
"edition": "v2_0"
}
Oracle Connection String Priority
When multiple connection methods are configured, the following priority is used:
1. TNS Entry - used when tns_entry and tns_admin are configured
2. Wallet - used when wallet_location is configured (for Oracle Cloud)
3. Standard (host:port/service_name) - the default method
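This priority amounts to a simple cascade. A sketch of the selection logic (illustrative Python, not the server's actual Go code):

```python
def connection_method(cfg):
    # Mirror the documented precedence: TNS entry first, then wallet,
    # then the standard host:port/service_name form as the fallback.
    if cfg.get("tns_entry") and cfg.get("tns_admin"):
        return "tns"
    if cfg.get("wallet_location"):
        return "wallet"
    return "standard"

print(connection_method({"tns_admin": "/opt/oracle/network/admin",
                         "tns_entry": "PROD_DB"}))               # tns
print(connection_method({"wallet_location": "/path/to/wallet"}))  # wallet
print(connection_method({"host": "localhost", "port": 1521,
                         "service_name": "XEPDB1"}))              # standard
```

Note that a tns_entry without a matching tns_admin falls through to the next method, since both fields are required for TNS resolution.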
Available Tools
For each connected database, DB MCP Server automatically generates these specialized tools:
Query Tools
Tool Name | Description |
query_<db_id> | Execute SELECT queries and get results as a tabular dataset |
execute_<db_id> | Run data manipulation statements (INSERT, UPDATE, DELETE) |
transaction_<db_id> | Begin, commit, and rollback transactions |
Schema Tools
Tool Name | Description |
schema_<db_id> | Get information about tables, columns, indexes, and foreign keys |
| Generate SQL or code from database schema |
Performance Tools
Tool Name | Description |
performance_<db_id> | Analyze query performance and get optimization suggestions |
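All of these names follow the `<tool>_<connection id>` convention shown earlier for the mysql1 database. A minimal sketch of the derivation (Python used only for illustration):

```python
def tools_for(db_id):
    # One tool of each kind is generated per configured connection.
    kinds = ("query", "execute", "transaction", "schema", "performance")
    return [f"{kind}_{db_id}" for kind in kinds]

print(tools_for("mysql1"))
# ['query_mysql1', 'execute_mysql1', 'transaction_mysql1',
#  'schema_mysql1', 'performance_mysql1']
```

So a config with connections mysql1, postgres1, and oracle1 yields fifteen database tools in total, five per connection.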
TimescaleDB Tools
For PostgreSQL databases with TimescaleDB extension, these additional specialized tools are available:
Tool Name | Description |
| Perform general TimescaleDB operations |
| Convert a standard table to a TimescaleDB hypertable |
| List all hypertables in the database |
| Execute optimized time-series queries with bucketing |
| Analyze time-series data patterns |
| Create materialized views that automatically update |
| Manually refresh continuous aggregates |
For detailed documentation on TimescaleDB tools, see TIMESCALEDB_TOOLS.md.
Examples
Querying Multiple Databases
-- Query the MySQL database
query_mysql1("SELECT * FROM users LIMIT 10")
-- Query the PostgreSQL database in the same context
query_postgres1("SELECT * FROM products WHERE price > 100")
-- Query the SQLite database
query_sqlite_app("SELECT * FROM local_data WHERE created_at > datetime('now', '-1 day')")
-- Query the Oracle database
query_oracle_dev("SELECT * FROM employees WHERE hire_date > SYSDATE - 30")
Managing Transactions
-- Start a transaction
transaction_mysql1("BEGIN")
-- Execute statements within the transaction
execute_mysql1("INSERT INTO orders (customer_id, product_id) VALUES (1, 2)")
execute_mysql1("UPDATE inventory SET stock = stock - 1 WHERE product_id = 2")
-- Commit or rollback
transaction_mysql1("COMMIT")
-- OR
transaction_mysql1("ROLLBACK")
Exploring Database Schema
-- Get all tables in the database
schema_mysql1("tables")
-- Get columns for a specific table
schema_mysql1("columns", "users")
-- Get constraints
schema_mysql1("constraints", "orders")
Working with SQLite-Specific Features
-- Create a table in SQLite
execute_sqlite_app("CREATE TABLE IF NOT EXISTS local_cache (key TEXT PRIMARY KEY, value TEXT, timestamp DATETIME)")
-- Use SQLite-specific date functions
query_sqlite_app("SELECT * FROM events WHERE date(created_at) = date('now')")
-- Query SQLite master table for schema information
query_sqlite_app("SELECT name, sql FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'")
-- Performance optimization with WAL mode
execute_sqlite_app("PRAGMA journal_mode = WAL")
execute_sqlite_app("PRAGMA synchronous = NORMAL")
Working with Oracle-Specific Features
-- Query user tables (excludes system schemas)
query_oracle_dev("SELECT table_name FROM user_tables ORDER BY table_name")
-- Use Oracle-specific date functions
query_oracle_dev("SELECT employee_id, hire_date FROM employees WHERE hire_date >= TRUNC(SYSDATE, 'YEAR')")
-- Oracle sequence operations
execute_oracle_dev("CREATE SEQUENCE emp_seq START WITH 1000 INCREMENT BY 1")
query_oracle_dev("SELECT emp_seq.NEXTVAL FROM DUAL")
-- Oracle-specific data types
query_oracle_dev("SELECT order_id, TO_CHAR(order_date, 'YYYY-MM-DD HH24:MI:SS') FROM orders")
-- Get schema metadata from Oracle data dictionary
query_oracle_dev("SELECT column_name, data_type, nullable FROM user_tab_columns WHERE table_name = 'EMPLOYEES'")
-- Use Oracle analytic functions
query_oracle_dev("SELECT employee_id, salary, RANK() OVER (ORDER BY salary DESC) as salary_rank FROM employees")
Troubleshooting
Common Issues
Connection Failures: Verify network connectivity and database credentials
Permission Errors: Ensure the database user has appropriate permissions
Timeout Issues: Check the query_timeout setting in your configuration
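For example, the per-connection query_timeout (in seconds, as shown in the configuration section above) can be raised for a slow analytical database. The connection details below are placeholders:

```json
{
  "id": "warehouse",
  "type": "postgres",
  "host": "localhost",
  "port": 5432,
  "name": "analytics",
  "user": "report_user",
  "password": "report_password",
  "query_timeout": 300
}
```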
Logs
Enable verbose logging for troubleshooting:
./bin/server -t sse -c config.json -v
Testing
Running Tests
The project includes comprehensive unit and integration tests for all supported databases.
Unit Tests
Run unit tests (no database required):
make test
# or
go test -short ./...
Integration Tests
Integration tests require running database instances. We provide Docker Compose configurations for easy setup.
Test All Databases:
# Start test databases
docker-compose -f docker-compose.test.yml up -d
# Run all integration tests
go test ./... -v
# Stop test databases
docker-compose -f docker-compose.test.yml down -v
Test Oracle Database:
# Start Oracle test environment
./oracle-test.sh start
# Run Oracle tests
./oracle-test.sh test
# or manually
ORACLE_TEST_HOST=localhost go test -v ./pkg/db -run TestOracle
ORACLE_TEST_HOST=localhost go test -v ./pkg/dbtools -run TestOracle
# Stop Oracle test environment
./oracle-test.sh stop
# Full cleanup (removes volumes)
./oracle-test.sh cleanup
Test TimescaleDB:
# Start TimescaleDB test environment
./timescaledb-test.sh start
# Run TimescaleDB tests
TIMESCALEDB_TEST_HOST=localhost go test -v ./pkg/db/timescale ./internal/delivery/mcp
# Stop TimescaleDB test environment
./timescaledb-test.sh stop
Regression Tests
Run comprehensive regression tests across all database types:
# Ensure all test databases are running
docker-compose -f docker-compose.test.yml up -d
./oracle-test.sh start
# Run regression tests
MYSQL_TEST_HOST=localhost \
POSTGRES_TEST_HOST=localhost \
ORACLE_TEST_HOST=localhost \
go test -v ./pkg/db -run TestRegression
# Run connection pooling tests
go test -v ./pkg/db -run TestConnectionPooling
Continuous Integration
All tests run automatically on every pull request via GitHub Actions. The CI pipeline includes:
Unit Tests: Fast tests that don't require database connections
Integration Tests: Tests against MySQL, PostgreSQL, SQLite, and Oracle databases
Regression Tests: Comprehensive tests ensuring backward compatibility
Linting: Code quality checks with golangci-lint
Contributing
We welcome contributions to the DB MCP Server project! To contribute:
Fork the repository
Create a feature branch (git checkout -b feature/amazing-feature)
Commit your changes (git commit -m 'feat: add amazing feature')
Push to the branch (git push origin feature/amazing-feature)
Open a Pull Request
Please see our CONTRIBUTING.md file for detailed guidelines.
Testing Your Changes
Before submitting a pull request, please ensure:
All unit tests pass: go test -short ./...
Integration tests pass for affected databases
Code follows the project's style guidelines: golangci-lint run ./...
New features include appropriate test coverage
License
This project is licensed under the MIT License - see the LICENSE file for details.