Provides a unified interface for exploring and querying MySQL databases, allowing for schema discovery, relationship mapping, and SQL query execution with optional read-only protection.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@MySQL MCP Server show me the top 5 customers by total spend"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
MySQL MCP Server
A Model Context Protocol (MCP) server for MySQL databases. This server provides a unified interface for exploring and querying MySQL databases through MCP.
Features
MySQL Support: Connect to MySQL databases
Unified Interface: Consistent tools and API for MySQL operations
Database-Specific Optimizations: Uses MySQL-optimized SQL syntax
Schema Exploration: List databases, tables, and relationships
Query Execution: Run SQL queries with proper parameter handling
Resource Support: MCP resource endpoints for table data
LangGraph Text-to-SQL Agent: Intelligent agent that converts natural language to SQL queries with automatic schema exploration and error recovery
Installation
Install dependencies:
The requirements include:
MySQL connector for database access
MCP and FastMCP for the server
LangChain and LangGraph for the text-to-SQL agent
LangChain OpenAI integration for LLM support
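Based on the list above, the project's requirements file likely resembles the following (the exact package names and any version pins are assumptions, not taken from the repository):

```text
mysql-connector-python
mcp
fastmcp
langchain
langgraph
langchain-openai
```

These can then be installed with `pip install -r requirements.txt`.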
Usage
Connection Strings
MySQL
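Following the format documented in the Notes section, a MySQL connection string looks like this (user, password, host, and database are placeholders):

```text
mysql://user:password@host:port/database
mysql://myuser:mypass@localhost:3306/mydb
```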
Command Line Options
Environment Variables
DATABASE_CONNECTION_STRING: MySQL connection string
DATABASE_READONLY: Set to "true", "1", or "yes" to enable read-only mode
DATABASE_STATEMENT_TIMEOUT_MS: Query timeout in milliseconds
MCP_TRANSPORT: Transport protocol (stdio, sse, streamable-http)
MCP_HOST: Host for network transports
MCP_PORT: Port for network transports
MCP_SSE_MOUNT: Mount path for SSE transport
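A typical environment setup might look like the following (all values are placeholders):

```shell
# Connection string and read-only protection
export DATABASE_CONNECTION_STRING="mysql://myuser:mypass@localhost:3306/mydb"
export DATABASE_READONLY="true"

# 30-second query timeout
export DATABASE_STATEMENT_TIMEOUT_MS="30000"

# Transport selection (stdio, sse, or streamable-http)
export MCP_TRANSPORT="stdio"
```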
Read-Only Mode
Read-only mode prevents any write operations (INSERT, UPDATE, DELETE, DROP, etc.) and only allows SELECT, SHOW, WITH, VALUES, and EXPLAIN queries.
Enable via command-line:
Enable via environment variable:
Check if read-only mode is enabled:
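A minimal sketch of how such a read-only guard could work — this is illustrative only, not the server's actual implementation, and a real guard would also need to handle comments, parentheses, and multi-statement input:

```python
# Statements whose leading keyword is permitted in read-only mode,
# per the list above: SELECT, SHOW, WITH, VALUES, EXPLAIN.
ALLOWED_READONLY = {"SELECT", "SHOW", "WITH", "VALUES", "EXPLAIN"}

def is_allowed_readonly(sql: str) -> bool:
    """Return True if the statement's first keyword is read-only."""
    stripped = sql.strip()
    if not stripped:
        return False
    first_word = stripped.split(None, 1)[0].upper()
    return first_word in ALLOWED_READONLY

print(is_allowed_readonly("SELECT * FROM authors"))  # True
print(is_allowed_readonly("DELETE FROM authors"))    # False
```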
LangGraph Text-to-SQL Agent
The project includes a sophisticated LangGraph-based agent that converts natural language questions into SQL queries. The agent automatically explores the database schema, generates SQL queries, executes them, and refines queries if errors occur.
Features
Intelligent Schema Exploration:
Automatically identifies relevant tables using LLM analysis (skips LLM call for ≤3 tables)
Only explores tables needed for the query (much faster!)
Describes table structures with exact column names
Fetches foreign key relationships for better JOINs
Caches schema information across queries (session-based)
Intelligent SQL Generation:
Uses LLM with chain-of-thought reasoning for step-by-step query generation
Handles multi-part questions (e.g., "find X and then find Y") with subqueries
Validates SQL syntax and auto-fixes simple issues
Confidence Scoring & Auto-Refinement:
Calculates confidence scores based on actual query execution results
Automatically refines queries when confidence is low or errors detected
Provides detailed analysis and reasoning for every query
Error Recovery:
Intelligently parses SQL errors to extract actionable information
Automatically retries with improved queries when execution fails
Preserves query structure when fixing simple errors (e.g., column names)
Performance Optimized:
Parallel execution of schema exploration operations
Compiled regex patterns for faster text processing
Reuses test query results when possible (avoids redundant executions)
State-based SQL storage (avoids re-extraction)
Set-based lookups for O(1) table/column checks
MCP Integration: Seamlessly uses MCP tools for database operations
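Two of the listed optimizations — precompiled regex patterns and set-based lookups — can be illustrated in miniature (hypothetical code and table names, not the agent's actual implementation):

```python
import re

# Compiled once at module load and reused across calls,
# instead of recompiling the pattern on every query.
TABLE_RE = re.compile(r"\bFROM\s+([A-Za-z_][A-Za-z0-9_]*)", re.IGNORECASE)

# A set gives O(1) membership checks when validating table names.
known_tables = {"authors", "books", "orders"}  # hypothetical names

def referenced_table(sql: str):
    """Return the first table referenced after FROM, or None."""
    match = TABLE_RE.search(sql)
    return match.group(1) if match else None

table = referenced_table("SELECT name FROM authors WHERE birth_year > 1950")
print(table, table in known_tables)  # authors True
```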
Setup
Start the MCP Server (in one terminal):
Use the Agent (in Python/Jupyter):
Usage Examples
Basic Query
Query with Filtering
Aggregation Queries
How It Works
The agent follows an optimized workflow:
Schema Exploration (with caching and parallelization):
Lists all available tables (cached after first call)
Intelligently identifies relevant tables using LLM (skips for ≤3 tables)
Describes table structures for relevant tables only (parallel execution)
Fetches foreign key relationships for better JOINs (parallel execution)
Caches all schema information for subsequent queries
SQL Generation (with confidence scoring):
Uses LLM with chain-of-thought reasoning for step-by-step generation
Includes schema, foreign keys, and column names in prompt
Executes test query (LIMIT 5) to get sample results
Calculates confidence score and analysis (single LLM call)
Validates SQL syntax and detects critical issues
Sets refinement flags for edge routing (does not refine directly)
Stores final SQL in state to avoid re-extraction
SQL Refinement (if needed):
Separate node handles refinement when confidence is low or errors detected
Fetches missing schema if needed
Refines SQL using analysis and error context
Re-executes test query and recalculates confidence
Query Execution (optimized to avoid redundancy):
Uses stored SQL from state (avoids re-extraction)
Reuses test query results if they contain all data (avoids redundant execution)
Executes full query only when needed
Error Recovery (with intelligent parsing):
Intelligently parses SQL errors (extracts error type, column, table)
Passes error context to SQL generation (optional refinement)
Preserves query structure when fixing simple errors
Automatically retries failed queries (up to max_query_attempts)
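The error-parsing step can be sketched as follows — a hypothetical helper, shown here only for the common MySQL "Unknown column" error (error 1054); the real agent extracts more error types:

```python
import re

# MySQL reports unknown columns as, e.g.:
#   1054 (42S22): Unknown column 'birth_yr' in 'field list'
UNKNOWN_COLUMN_RE = re.compile(r"Unknown column '([^']+)'")

def parse_sql_error(message: str) -> dict:
    """Return a small structured summary of a SQL error message."""
    match = UNKNOWN_COLUMN_RE.search(message)
    if match:
        return {"type": "unknown_column", "column": match.group(1)}
    return {"type": "other", "column": None}

info = parse_sql_error("1054 (42S22): Unknown column 'birth_yr' in 'field list'")
print(info)  # {'type': 'unknown_column', 'column': 'birth_yr'}
```

The structured result (error type plus the offending column) is what gets passed back into SQL generation so the retry can fix just the bad identifier while preserving the rest of the query.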
Agent State Graph
The agent uses a LangGraph state machine with the following nodes:
explore_schema: Discovers and caches database schema (with conditional routing for completion)
generate_sql: Converts natural language to SQL using LLM, calculates confidence, sets refinement flags
refine_sql: Refines SQL based on confidence score and analysis (separate node for clarity)
execute_query: Runs SQL queries via MCP tools (reuses test results when possible)
refine_query: Improves queries based on error feedback (routes back to generate_sql)
tools: Handles tool calls for schema exploration
Architecture: The agent follows LangGraph best practices with clear separation:
Nodes: Process state and return updates (no conditional logic)
Edges: Make routing decisions based on state flags (all orchestration logic)
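The node/edge separation can be shown with a plain-Python miniature (illustrative only — the real agent builds these as LangGraph nodes and conditional edges):

```python
def generate_sql(state):
    """Node: processes state and sets flags; makes no routing decision itself."""
    state["sql"] = "SELECT 1"  # placeholder for the LLM-generated query
    state["needs_refinement"] = state.get("confidence", 1.0) < 0.7
    return state

def refine_sql(state):
    """Node: refines the SQL when the refinement flag was set."""
    state["sql"] += " /* refined */"
    state["needs_refinement"] = False
    return state

def route_after_generate(state):
    """Edge: inspects state flags and decides which node runs next."""
    return "refine_sql" if state["needs_refinement"] else "execute_query"

print(route_after_generate(generate_sql({"confidence": 0.4})))  # refine_sql
print(route_after_generate(generate_sql({"confidence": 0.9})))  # execute_query
```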
For a detailed explanation of the architecture, workflow graph, and how to extend the agent, see AGENT_ARCHITECTURE.md.
Advanced Usage
Custom Configuration
Access Full State
Helper Function for Clean Results
Tips for Best Results
Be Specific: Clear, specific questions work best
✅ "Show me all authors born after 1950"
❌ "authors stuff"
Use Table Names: If you know table names, mention them
✅ "List all books in the authors table"
✅ "How many records are in the authors table?"
Specify Filters: Be explicit about filtering criteria
✅ "Find authors where birth_year is greater than 1950"
✅ "Show me authors with names starting with 'G'"
Ask for Aggregations: The agent handles COUNT, SUM, AVG, etc.
✅ "What is the average birth year?"
✅ "Count the total number of authors"
Limitations
Currently optimized for SELECT queries (read-only operations)
Maximum retry attempts are configurable (default: 3)
Requires OpenAI API key for LLM functionality
Schema information is cached across queries within the same agent instance for better performance
Improving the Agent
For a comprehensive guide on enhancing the agent's performance, accuracy, and features, see IMPROVEMENTS.md.
✅ Implemented Features:
Intelligent Table Selection: Uses LLM to identify only relevant tables (skips for ≤3 tables)
Foreign Key Relationships: Fetches FK info for better JOIN understanding
Schema Caching: Caches schema information across queries (session-based)
Chain-of-Thought Reasoning: Step-by-step query generation for better accuracy
Confidence Scoring: Calculates confidence based on actual query execution results
Auto-Refinement: Automatically improves queries when issues detected
Intelligent Error Parsing: Extracts actionable information from error messages
Performance Optimizations: Compiled regex, parallel execution, result reuse, state-based storage
Planned Improvements:
Few-Shot Examples: Add example queries to guide better SQL generation patterns
Query Explanation: Explain what generated SQL queries do
Query History and Learning: Learn from past successful queries
See IMPROVEMENTS.md for detailed implementation examples and step-by-step instructions.
MySQL Features
Database-level organization
SHOW DATABASES and SHOW TABLES for performance
DESCRIBE for table structure
SHOW CREATE TABLE for detailed table information
Tools
The server provides the following MCP tools:
server_info: Get server and database information (database type: MySQL, readonly status, MySQL connector version)
db_identity: Get current database identity details (database type: MySQL, database name, user, host, port, server version)
run_query: Execute SQL queries with typed input (returns markdown or JSON string)
run_query_json: Execute SQL queries with typed input (returns JSON list)
list_table_resources: List tables as MCP resource URIs (table://schema/table) - returns structured list
read_table_resource: Read table data (rows) via MCP resource protocol - returns JSON list
list_tables: List tables in a database - returns markdown string (human-readable)
describe_table: Get table structure (columns, types, constraints) - returns markdown string
get_foreign_keys: Get foreign key relationships (via SHOW CREATE TABLE)
Calling MCP Tools
MultiServerMCPClient doesn't have a call_tool() method. Instead, you need to:
Get tools using get_tools(), which returns LangChain StructuredTool objects
Find the tool by name
Invoke it using tool.ainvoke() with the appropriate arguments
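The three steps can be sketched as follows. To keep the example self-contained, a stub stands in for the StructuredTool objects that get_tools() would return; in real use you would call `await client.get_tools()` on a MultiServerMCPClient instead:

```python
import asyncio

class FakeStructuredTool:
    """Stand-in for a LangChain StructuredTool returned by get_tools()."""
    def __init__(self, name):
        self.name = name

    async def ainvoke(self, args):
        # A real tool would call the MCP server; this stub just echoes.
        return f"{self.name} invoked with {args}"

async def main():
    # Step 1: get the tool list (stubbed here instead of client.get_tools())
    tools = [FakeStructuredTool("list_tables"), FakeStructuredTool("run_query")]
    # Step 2: find the tool by name
    tool = next(t for t in tools if t.name == "run_query")
    # Step 3: invoke it asynchronously with its arguments
    return await tool.ainvoke({"sql": "SELECT 1"})

result = asyncio.run(main())
print(result)
```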
Efficient Tool Lookup
For better performance (especially with many tools), create a dictionary for O(1) lookups:
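A sketch of the dictionary pattern (the stub class stands in for real StructuredTool objects, which also expose a `name` attribute):

```python
class Tool:
    """Minimal stand-in for a LangChain StructuredTool."""
    def __init__(self, name):
        self.name = name

tools = [Tool("list_tables"), Tool("describe_table"), Tool("run_query")]

# Build the dictionary once; each lookup is then O(1)
# instead of a linear scan over the tool list.
tools_by_name = {tool.name: tool for tool in tools}

run_query = tools_by_name["run_query"]
print(run_query.name)  # run_query
```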
Get Server Information
List Tables
Explore Table Structure
Execute Query
MCP Resource Tools
These tools implement the MCP (Model Context Protocol) resource pattern, which allows tables to be treated as discoverable resources that can be accessed via standardized URIs.
list_table_resources
Lists all tables in a schema and returns them as MCP resource URIs. This enables MCP clients to discover available tables dynamically.
What it does:
Queries the database to get all table names from the specified schema
Formats each table as a resource URI: table://schema/table_name
Returns a list of these URIs
Example:
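A hypothetical result for a schema named mydb containing authors and books tables (illustrative names only):

```text
table://mydb/authors
table://mydb/books
```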
Use cases:
Dynamic table discovery in MCP clients
Building UI that lists available tables
Integration with MCP resource-aware tools
read_table_resource
Reads data from a specific table using the MCP resource protocol. This provides a standardized way to access table data.
What it does:
Executes SELECT * FROM schema.table with a row limit
Returns table rows as a list of dictionaries (JSON format)
Each dictionary represents one row with column names as keys
Example:
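A hypothetical return value for a small authors table (illustrative data only):

```text
[
  {"id": 1, "name": "Gabriel Garcia Marquez", "birth_year": 1927},
  {"id": 2, "name": "George Orwell", "birth_year": 1903}
]
```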
Use cases:
Quick table previews without writing SQL
MCP resource-aware clients that can fetch table data by URI
Data exploration and inspection tools
Key differences:
read_table_resource: Simple, standardized way to read entire tables (no SQL needed)
run_query: Flexible, allows any SQL query with custom WHERE clauses, JOINs, etc.
Comparison: Resource Tools vs Regular Tools
list_table_resources vs list_tables
Both list tables, but serve different purposes:
| Feature | list_table_resources | list_tables |
| --- | --- | --- |
| Return Type | Structured list of resource URIs | Markdown string |
| Format | Resource URIs: table://schema/table | Human-readable markdown table |
| Use Case | Programmatic access, MCP resource protocol | Human viewing, documentation |
| Integration | Works with MCP resource-aware clients | General purpose, readable output |
Example comparison:
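Hypothetical outputs for a schema named mydb with authors and books tables (illustrative only):

```text
list_table_resources -> ["table://mydb/authors", "table://mydb/books"]
list_tables          -> a human-readable markdown table listing authors and books
```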
read_table_resource vs describe_table
These serve completely different purposes - they're complementary, not redundant:
| Feature | read_table_resource | describe_table |
| --- | --- | --- |
| What it returns | Table data (rows) | Table structure (schema) |
| Return Type | JSON list of row dictionaries | Markdown string |
| SQL Command | SELECT * FROM schema.table | DESCRIBE |
| Use Case | View actual data/rows | View column definitions, types, constraints |
| Example Output | Rows of data as JSON objects | Column names, types, nullability, keys |
Example comparison:
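Hypothetical outputs for an authors table (illustrative data; the exact call signatures are assumptions):

```text
read_table_resource("mydb", "authors") -> [{"id": 1, "name": "George Orwell", "birth_year": 1903}]
describe_table("mydb", "authors")      -> columns: id (int, key), name (varchar), birth_year (int)
```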
Summary:
Use read_table_resource when you want to see the data in a table
Use describe_table when you want to see the schema/structure of a table
They answer different questions: "What's in the table?" vs "How is the table defined?"
Notes
MySQL connection strings must start with mysql://
The format is: mysql://user:password@host:port/database
All database names are treated as schemas for compatibility
Documentation
AGENT_ARCHITECTURE.md: Detailed architecture explanation, workflow graph, and extension guide
How the agent works internally
Visual workflow diagrams
Step-by-step guides for adding nodes and tools
Code examples and patterns
IMPROVEMENTS.md: Comprehensive improvement guide
Enhancement ideas with priority rankings
Implementation examples
Performance optimizations
Advanced features
The improvements guide includes:
Priority rankings (High/Medium/Low) for each improvement
Detailed code examples with implementation snippets
Step-by-step integration instructions
Workflow graphs showing how the agent evolves with improvements
Testing strategies and validation approaches
Benefits analysis for each enhancement
Quick Start: Begin with high-priority items like foreign key relationships, better error parsing, and column name suggestions for immediate impact.
Troubleshooting
Connection Issues
Ensure the MySQL server is running and accessible
Check connection string format and credentials
Verify network connectivity and firewall settings
MySQL-Specific Issues
Ensure MySQL server supports the connection protocol
Check user permissions for the specified database
Verify MySQL connector version compatibility