Debug MCP
An intelligent debugging assistant built on the Model Context Protocol (MCP) that helps automate the debugging process by analyzing bugs, injecting debug logs via HTTP, and iteratively fixing issues based on real-time feedback.
Key Features
Automated Bug Analysis: Analyzes bug descriptions and suggests possible causes
Multi-Environment Support: Automatically detects and adapts to different runtime environments
HTTP-Based Logging: Sends debug logs via HTTP POST to a centralized server (NOT console.log)
Iterative Debugging: Continues debugging based on user feedback until the issue is resolved
Auto Cleanup: Removes all debug code automatically after the bug is fixed
Project-Scoped Logs: Logs are stored per-project in {projectPath}/.debug/debug.log
How It Works
⚠️ Important: This MCP server does NOT use console.log(). Instead, it injects code that sends logs via HTTP POST to a local debug server. The logs are then stored in your project directory at {projectPath}/.debug/debug.log.
Why HTTP-Based Logging?
Centralized collection: All logs from different parts of your application are collected in one place
Structured data: Logs are stored as JSON with timestamps, levels, and context
AI-friendly: The AI can easily read and analyze logs via the read_debug_logs tool
Project-scoped: Logs are stored in your project directory, not scattered across console outputs
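For example, a single stored entry in {projectPath}/.debug/debug.log might look like the following (a sketch; the exact field names and layout are illustrative, not a guaranteed schema):

```json
{
  "timestamp": "2024-05-01T10:32:15.123Z",
  "level": "info",
  "message": "Submit button clicked",
  "data": { "formValid": true }
}
```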
Supported Environments
| Environment | Description |
| --- | --- |
| Browser | Web applications using fetch API |
| Node.js | Server-side Node.js (18+) with native fetch |
| Node.js Legacy | Older Node.js versions using http module |
| React Native | Mobile apps using React Native |
| Electron (Main) | Electron main process (direct file write) |
| Electron (Renderer) | Electron renderer process (IPC) |
| WeChat Mini Program | WeChat/Alipay mini programs (wx.request) |
| PHP | Server-side PHP (curl) |
| Python | Server-side Python (requests library) |
| Java | Server-side Java using DebugHttpClient utility |
| Android | Android apps with thread-safe network requests |
| Kotlin | Kotlin applications with coroutine support |
| Objective-C | iOS/macOS applications using NSURLSession |
Installation
Configuration
Port Configuration (Optional)
Important: The HTTP server port is now automatically assigned by the system. No manual configuration is needed. Each MCP instance automatically gets an available port to avoid port conflicts.
If you need to use a fixed port (not recommended), you can set it in environment variables:
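For example, a hypothetical env block in your MCP client configuration (the value 3001 is arbitrary; DEBUG_PORT is the variable this README refers to):

```json
{
  "env": {
    "DEBUG_PORT": "3001"
  }
}
```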
Get Actual Port: Use the get_server_port MCP tool to query the currently assigned port and URL.
Quick Start Guide
Step 1: Start the MCP Server
The server will start:
MCP Server: Listening on stdio for AI communication
HTTP API Server: Automatically assigned an available port for debug logs (port info can be queried via the get_server_port tool)
Step 2: Configure MCP Client
Cursor IDE Configuration
In Cursor, MCP server configuration is in settings. Open Cursor settings, find the MCP configuration section, and add:
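A minimal entry might look like the following sketch, assuming the standard mcpServers layout and a Node.js entry point built to dist/index.js (the server name "debug-mcp" is illustrative):

```json
{
  "mcpServers": {
    "debug-mcp": {
      "command": "node",
      "args": ["E:/work/debug-mcp/dist/index.js"]
    }
  }
}
```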
Note:
Replace E:/work/debug-mcp/dist/index.js with your actual path
Port is automatically assigned; no need to configure DEBUG_PORT
For cross-device debugging, set DEBUG_HOST to 0.0.0.0 or your LAN IP
Claude Desktop Configuration
In Claude Desktop, the config file is located at:
Windows: %APPDATA%\Claude\claude_desktop_config.json
Mac: ~/Library/Application Support/Claude/claude_desktop_config.json
Linux: ~/.config/Claude/claude_desktop_config.json
Add the following configuration:
Note:
Replace E:/work/debug-mcp/dist/index.js with your actual path
Port is automatically assigned; no need to configure DEBUG_PORT
For cross-device debugging, set DEBUG_HOST to 0.0.0.0 or your LAN IP
Cross-Device Debugging Setup:
To enable debugging from mobile devices or other computers on your network:
Find your LAN IP:
Windows: ipconfig → look for "IPv4 Address" (e.g., 192.168.1.100)
Mac/Linux: ifconfig or ip addr → look for "inet" (e.g., 192.168.1.100)
Update MCP config:
{ "env": { "DEBUG_HOST": "192.168.1.100" // Your LAN IP // Note: Port is automatically assigned, no need to configure DEBUG_PORT } }Get actual port: After starting the MCP server, use the
get_server_porttool to query the actual assigned portEnsure firewall allows that port (port number queried via
get_server_port)Devices can now send logs to
http://192.168.1.100:PORT/api/log(PORT is the actual assigned port)
Step 3: Use with AI
When debugging, simply describe the bug to the AI. The AI will:
Analyze the bug using the analyze_bug tool
Detect the environment of your files using detect_environment
Get a debug template using the get_debug_template tool
Manually insert the debug code into your files
Important for AI:
⚠️ DO NOT use console.log() - use the template code, which sends logs via HTTP POST
⚠️ ALWAYS provide projectPath as the absolute path to the project directory
The debug code sends logs to the dynamically assigned server URL (use the get_server_port tool to get the actual URL)
For cross-device debugging, the URL will use the configured DEBUG_HOST and the automatically assigned port
Logs are stored at {projectPath}/.debug/debug.log
Example Workflow
Key AI Behaviors
When using this MCP server, the AI should follow this pattern:
✅ DO:
Always provide projectPath as the absolute path to the project directory
Use get_debug_template to get HTTP-based logging code (NOT console.log)
Manually insert the debug code at the appropriate location
Explain your hypothesis - what you think might be wrong
Provide clear test steps - exact actions the user should take
State expected results - what should happen if your hypothesis is correct
Ask for feedback - specifically request user to report results
Use read_debug_logs after user tests to analyze actual runtime data
Handle unexpected results - ask follow-up questions when results differ from expectations
Iterate - continue debugging based on data until the issue is resolved
Auto-cleanup on success - When user says "It works!", "Fixed!", "Success!", etc., IMMEDIATELY:
Use clear_debug_logs to wipe the log file
Remove all debug code blocks (search for DEBUG CODE START/END markers)
Revert ALL temporary visual/test markers (search for TEMPORARY DEBUG MARKER comments)
Restore original code (button labels, images, styles, colors, etc.)
DO NOT ask "Should I clean up?" - just do it directly
Track ALL modifications - Keep a list of every change:
Debug code blocks (HTTP POST logs)
Visual markers (test images, button text changes, color highlights)
Style modifications (CSS changes for testing)
Any other temporary changes
❌ DON'T:
❌ Use console.log() - always use the provided HTTP POST templates
❌ Omit projectPath - logs will go to the wrong directory
❌ Skip explaining your reasoning - the user needs to understand your hypothesis
❌ Forget test steps - the user needs clear instructions
❌ Ignore unexpected results - investigate when things don't work as planned
❌ Ask "Should I clean up?" when the user confirms success - just clean up directly
❌ Forget temporary visual markers - ALL test changes must be reverted
Available MCP Tools
get_server_info
Get server configuration, HTTP endpoints, and supported environments.
get_server_port
Get the current HTTP server port and URL information. The port is automatically assigned by the system. Use this tool to query the actual assigned port number and complete server URL.
Returns:
port: Currently assigned port number
host: Server host address
url: Complete log endpoint URL
baseUrl: Server base URL
endpoints: All available API endpoints
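A response might look roughly like this (values are illustrative; the endpoint list reflects the HTTP API Endpoints section below):

```json
{
  "port": 51234,
  "host": "localhost",
  "url": "http://localhost:51234/api/log",
  "baseUrl": "http://localhost:51234",
  "endpoints": ["/api/log", "/api/stats", "/health"]
}
```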
Use Cases:
When you need to know the actual port number
For cross-device debugging, need to inform other devices of the URL
To verify the server has started correctly
analyze_bug
Analyze a bug description and get intelligent suggestions about possible causes.
detect_environment
Automatically detect the runtime environment of a file.
Returns: Environment type (browser, node, python, etc.) with confidence level
get_debug_template
Get debug code template for a specific environment. This is the main tool for adding debug logs.
⚠️ IMPORTANT:
The returned code uses HTTP POST, NOT console.log()
projectPath is REQUIRED - use the absolute path to your project directory
DO NOT modify the generated code to use console.log()
Manually insert the code into your file at the appropriate location
Example returned code:
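For a browser or Node.js 18+ environment, the returned template is expected to look roughly like this sketch (the /api/log endpoint, the marker comments, and the [Debug Log Failed] tag come from this README; the payload field names and the port value are illustrative):

```javascript
// DEBUG CODE START
// Use the URL reported by get_server_port; the port below is illustrative
fetch('http://localhost:51234/api/log', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    level: 'info',
    message: 'Submit button clicked',
    data: { formValid: true },
    timestamp: new Date().toISOString()
  })
}).catch((err) => console.error('[Debug Log Failed]', err));
// DEBUG CODE END
```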
read_debug_logs
Read debug logs from the project.
Returns: Array of log entries with timestamps, levels, messages, and data
list_debug_blocks
List all debug blocks (code between debug-start and debug-end markers) in project files.
clear_debug_logs
Clear all debug logs from the project log file.
Complete Debugging Session Example
Here's a complete example of how a debugging session should flow:
Initial Problem Report
User: "My React app's login form isn't submitting. When I click the submit button, nothing happens."
Step 1: Analysis Phase
AI: Let me analyze this issue and check your environment.
Uses: analyze_bug and detect_environment
"Based on your description, the possible causes are:
Event listener not attached to the submit button
Form validation preventing submission
Submit handler function has an error
I've detected you're using a Browser environment with React."
Step 2: First Debug Code Injection
AI: I'll add a log to check if the button click event is firing.
Uses: get_debug_template
Step 3: AI Provides Test Instructions
AI: My hypothesis: The submit button's click handler might not be attached, or there's an early return preventing the handler from executing.
Test steps:
Restart your React development server
Open the login page in your browser
Fill in any required fields
Click the submit button
Check the browser console for any errors
✅ Expected results: If the handler is working, you should see a debug log entry indicating the button was clicked.
Please report back: What happened when you clicked? Did you see any console errors?
Step 4: User Reports Back
User: "I clicked the button but nothing happened. No errors in console though."
Step 5: AI Analyzes Logs
AI: Let me check the debug logs to see what was captured.
Uses: read_debug_logs(projectPath="/Users/john/my-app")
"I don't see any log entries for the button click. This confirms the click handler isn't being called at all."
Step 6: Second Iteration
AI: The event listener isn't firing. Let me add a log to check if the component is mounting correctly, and also verify the event handler is attached.
Uses: get_debug_template
AI: New hypothesis: The component might not be mounting, or the JSX might have an error with the event handler binding.
Test steps:
Refresh the page
Check if you see a "Login component mounted" log
Try clicking submit again
✅ Expected results: You should see the component mount log, which will tell us if the component loaded.
Step 7: User Provides Unexpected Results
User: "I see the mount log! But when I click submit, I get 'handleSubmit is not defined' in the console."
Step 8: AI Investigates Unexpected Result
AI: Ah! That's very helpful. The error "handleSubmit is not defined" means the function isn't being properly bound or imported. Let me check your component code...
[AI reads the file and identifies the issue]
"I found the problem! You have onClick={handleSubmit} but the function is defined as const handleLoginSubmit = (...). The names don't match. Let me fix that."
Step 9: Resolution
AI: I'll correct the function name in your JSX.
[AI fixes the code]
Root cause: Function name mismatch - the JSX references handleSubmit but the actual function is named handleLoginSubmit
Final test steps:
Restart the server
Try submitting the form again
Verify it works correctly
✅ Expected results: The form should now submit successfully!
Step 10: Auto-Cleanup (Including Visual Markers)
User: "It works! Thanks!"
AI: [Detects success confirmation and reviews all modifications]
Great! Let me clean up all debug code and temporary changes right away.
Uses: clear_debug_logs
"โ All debug logs have been cleared
โ Debug code removed from:
src/components/Login.js (lines 15-25) - HTTP POST logs
src/api/auth.js (lines 42-50) - HTTP POST logs
✅ Temporary visual markers reverted:
src/components/Login.js - Button label changed from 'TEST-LOGIN' back to 'Login'
src/styles/button.css - Background color changed from 'red' back to 'blue'
src/assets/test-image.png - Removed test image reference
Your app is now clean and ready to go!"
Key Improvements in This Example:
✅ AI always explained its hypothesis before adding code
✅ AI provided clear, numbered test steps
✅ AI stated expected results clearly
✅ AI asked for specific feedback from the user
✅ AI used read_debug_logs to verify what actually happened
✅ AI investigated unexpected results (the function name error)
✅ AI iterated based on data until finding the root cause
✅ AI auto-cleaned immediately when user confirmed success - NO asking "Should I clean up?"
Success Detection Triggers
AI should auto-cleanup when user says:
โ "It works!"
โ "Fixed!"
โ "Success!"
โ "Great!"
โ "Thanks!"
โ "Perfect!"
โ "That solved it"
โ "Working now"
AI should continue debugging when user says:
โ "Still not working"
โ "Same error"
โ "Didn't help"
โ "Nothing changed"
โ "Getting a different error"
Temporary Modification Marking
When making ANY temporary changes for debugging purposes, you MUST mark them clearly for cleanup:
Types of Temporary Changes
1. Debug Code (Automatic)
Already wrapped in DEBUG CODE START/END markers:
2. Visual/Test Markers (Manual - MUST ADD)
Button text changes:
Test images:
Color highlights:
Placeholder text:
Test flags:
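A hedged sketch of how two of these might be marked in plain browser JavaScript (the selector and variable are hypothetical; only the TEMPORARY DEBUG MARKER comment text and the 'TEST-LOGIN' label come from this README):

```javascript
// TEMPORARY DEBUG MARKER: button label changed from 'Login' to 'TEST-LOGIN' for testing
const submitButton = document.querySelector('#login-submit'); // hypothetical selector
submitButton.textContent = 'TEST-LOGIN';

// TEMPORARY DEBUG MARKER: test flag, remove after debugging
const isTestMode = true;
```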
Cleanup Checklist
When user confirms success, check and revert:
✅ Debug Code:
Search for DEBUG CODE START and remove all blocks
Clear debug logs using clear_debug_logs
✅ Visual Markers:
Search for TEMPORARY DEBUG MARKER comments
Revert button labels to original
Remove test images
Restore original colors/styles
Remove placeholder text
Delete test flags/variables
✅ Verify:
App looks and behaves exactly as before debugging
No debug-related comments left
No test assets referenced
Cross-Device Debugging
Debug mobile apps and other devices on your network by configuring the server host.
When to Use Cross-Device Debugging
Mobile Web Apps: Debug mobile browsers from your development machine
React Native Apps: Test on physical devices while capturing logs
Multiple Devices: Test your web app on phones, tablets, and computers simultaneously
Local Network Testing: Test on devices without deploying to production
Setup Guide
1. Find Your Computer's LAN IP
Windows: run ipconfig → look for "IPv4 Address" (e.g., 192.168.1.100)
Mac/Linux: run ifconfig or ip addr → look for "inet" (e.g., 192.168.1.100)
2. Configure MCP Server
Update your MCP client config:
Or use 0.0.0.0 to accept connections from any device:
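For example, only the env section changes; a minimal sketch accepting connections from any device (the rest of the server entry stays the same):

```json
{
  "env": {
    "DEBUG_HOST": "0.0.0.0"
  }
}
```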
3. Configure Firewall (if needed)
Windows: allow inbound TCP on the assigned port in Windows Defender Firewall
Mac/Linux: allow inbound TCP on the assigned port in your system firewall
4. Test Connection
From another device on your network, run: curl http://YOUR_IP:PORT/health (using your computer's LAN IP and the actual assigned port)
Should return: {"status":"ok"}
Usage Example
Scenario: Debugging a mobile web app
Configure the server with your LAN IP: DEBUG_HOST=192.168.1.100
AI generates debug code with the correct URL:
// Use the get_server_port tool to get the actual port, then replace PORT
fetch('http://192.168.1.100:PORT/api/log', {
  method: 'POST',
  body: JSON.stringify({ message: 'Button clicked' })
});
Open your web app on the mobile device at http://192.168.1.100:3000
Test the app on your phone - logs are sent to your computer
AI reads logs from your computer: read_debug_logs(projectPath="/path/to/project")
Host Configuration Options
| DEBUG_HOST Value | Description | Use Case |
| --- | --- | --- |
| localhost | Only local connections | Default, local debugging |
| 0.0.0.0 | Accept from any device | Flexible testing |
| 192.168.x.x (your LAN IP) | Your specific LAN IP | Explicit, recommended for mobile |
| 127.0.0.1 | Localhost only | Same as localhost |
Troubleshooting
Cannot connect from mobile device:
Verify devices are on the same network
Use the get_server_port tool to query the actual port
Check firewall settings for that port
Confirm the MCP server is running
Test with curl: curl http://YOUR_IP:PORT/health (PORT is the actual port)
Logs not appearing:
Check the generated code uses the correct URL
Verify projectPath is set correctly
Check browser console for network errors
Ensure the device can reach your computer
HTTP API Endpoints
The debug server runs an HTTP API server on an automatically assigned port. Use the get_server_port MCP tool to query the actual port and URL.
POST /api/log
Receives debug log entries from running applications.
Example:
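A request from a browser or Node.js 18+ client might look like this (a sketch; the port is illustrative and the payload fields mirror the log fields described under read_debug_logs, not an exact schema):

```javascript
// Use the URL reported by get_server_port; the port below is illustrative
fetch('http://localhost:51234/api/log', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    level: 'info',
    message: 'Checkout total computed',
    data: { total: 42 }
  })
});
```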
GET /api/log
Retrieves debug logs.
DELETE /api/log
Clears all debug logs.
GET /api/stats
Gets log statistics.
GET /health
Health check endpoint.
Environment-Specific Examples
Browser / Node.js (18+)
Node.js Legacy (v14-17)
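As an illustration for this environment, a sketch using the built-in http module (the port value and payload fields are illustrative; only the /api/log endpoint and the [Debug Log Failed] tag come from this README):

```javascript
// Node.js < 18 has no global fetch, so use the built-in http module
const http = require('http');

const DEBUG_PORT = 51234; // replace with the port reported by get_server_port
const payload = JSON.stringify({ level: 'info', message: 'Order saved', data: { orderId: 7 } });

const req = http.request({
  host: 'localhost',
  port: DEBUG_PORT,
  path: '/api/log',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(payload)
  }
});
req.on('error', (err) => console.error('[Debug Log Failed]', err));
req.write(payload);
req.end();
```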
Python
PHP
WeChat Mini Program
Java
Android
Kotlin
Objective-C (iOS/macOS)
Backup Files
Before modifying any file, the tool creates a backup:
These backups can be used to restore files if needed.
Troubleshooting
Debug logs not appearing
Use the get_server_port tool to query the actual port
Verify the HTTP server is running: curl http://localhost:PORT/health (PORT is the actual port)
Check if the application can reach the server URL
Look for [Debug Log Failed] errors in the application console
Environment detection fails
Manually specify the environment in add_debug_logs
Check that the file content is not empty
Verify the file extension matches the environment
Debug code not removed
Ensure // debug-start and // debug-end markers are present
Check file permissions
Use list_debug_blocks to see what will be removed
Architecture
Contributing
Contributions are welcome! Please feel free to submit issues or pull requests.
License
MIT