# Debug MCP
An intelligent debugging assistant built on the Model Context Protocol (MCP) that helps automate the debugging process by analyzing bugs, injecting debug logs via HTTP, and iteratively fixing issues based on real-time feedback.
## Key Features
- 🔍 **Automated Bug Analysis**: Analyzes bug descriptions and suggests possible causes
- 🌐 **Multi-Environment Support**: Automatically detects and adapts to different runtime environments
- 📝 **HTTP-Based Logging**: Sends debug logs via HTTP POST to a centralized server (NOT console.log)
- 🔄 **Iterative Debugging**: Continues debugging based on user feedback until the issue is resolved
- 🧹 **Auto Cleanup**: Removes all debug code automatically after the bug is fixed
- 📊 **Project-Scoped Logs**: Logs are stored per-project in `{projectPath}/.debug/debug.log`
## How It Works
**⚠️ Important**: This MCP server does **NOT** use `console.log()`. Instead, it injects code that sends logs via HTTP POST to a local debug server. The logs are then stored in your project directory at `{projectPath}/.debug/debug.log`.
### Why HTTP-Based Logging?
1. **Centralized collection**: All logs from different parts of your application are collected in one place
2. **Structured data**: Logs are stored as JSON with timestamps, levels, and context
3. **AI-friendly**: The AI can easily read and analyze logs via the `read_debug_logs` tool
4. **Project-scoped**: Logs are stored in your project directory, not scattered across console outputs
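For example, a single entry in `{projectPath}/.debug/debug.log` is stored as structured JSON along these lines (illustrative; the fields mirror the POST payload shown in the HTTP API section):

```json
{
  "timestamp": "2025-01-03T10:30:00.000Z",
  "level": "info",
  "message": "Login button clicked",
  "data": { "username": "test", "isLoggedIn": false }
}
```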
## Supported Environments
| Environment | Description |
|------------|-------------|
| **Browser** | Web applications using fetch API |
| **Node.js** | Server-side Node.js (18+) with native fetch |
| **Node.js Legacy** | Older Node.js versions using http module |
| **React Native** | Mobile apps using React Native |
| **Electron (Main)** | Electron main process (direct file write) |
| **Electron (Renderer)** | Electron renderer process (IPC) |
| **WeChat Mini Program** | WeChat/Alipay mini programs (wx.request) |
| **PHP** | Server-side PHP (curl) |
| **Python** | Server-side Python (requests library) |
| **Java** | Server-side Java using DebugHttpClient utility |
| **Android** | Android apps with thread-safe network requests |
| **Kotlin** | Kotlin applications with coroutine support |
| **Objective-C** | iOS/macOS applications using NSURLSession |
## Installation
```bash
# Clone the repository
git clone https://gitee.com/UPUP0326/debug-mcp.git
cd debug-mcp
# Install dependencies
npm install
# Build the project
npm run build
# Start the server
npm start
```
## Configuration
### Port Configuration (Optional)
**Important**: The HTTP server port is now automatically assigned by the system. No manual configuration is needed. Each MCP instance automatically gets an available port to avoid port conflicts.
If you need to use a fixed port (not recommended), you can set it in environment variables:
```env
# HTTP Server Port (optional, auto-assigned by default)
# If set, this port will be used; if not set, system automatically assigns an available port
DEBUG_PORT=37373
# HTTP Server Host (default: localhost)
# Use '0.0.0.0' to accept connections from any device on your network
# Use your LAN IP (e.g., '192.168.1.100') to allow other devices to send logs
DEBUG_HOST=localhost
# Examples:
# DEBUG_HOST=0.0.0.0 # Accept connections from any device
# DEBUG_HOST=192.168.1.100 # Your computer's LAN IP
# DEBUG_HOST=localhost # Only local connections (default)
# Log file path (relative to project directory)
LOG_FILE=.debug/debug.log
```
**Get Actual Port**: Use the `get_server_port` MCP tool to query the currently assigned port and URL.
## Quick Start Guide
### Step 1: Start the MCP Server
```bash
npm start
```
This command starts two servers:
- **MCP Server**: Listening on stdio for AI communication
- **HTTP API Server**: Automatically assigned an available port for debug logs (port info can be queried via `get_server_port` tool)
### Step 2: Configure MCP Client
#### Cursor IDE Configuration
In Cursor, open settings, find the MCP configuration section, and add:
```json
{
  "mcpServers": {
    "debug-mcp": {
      "command": "node",
      "args": ["E:/work/debug-mcp/dist/index.js"],
      "env": {
        "DEBUG_HOST": "localhost"
      }
    }
  }
}
```
**Note**:
- Replace `E:/work/debug-mcp/dist/index.js` with your actual path
- Port is automatically assigned, no need to configure `DEBUG_PORT`
- For cross-device debugging, set `DEBUG_HOST` to `0.0.0.0` or your LAN IP
#### Claude Desktop Configuration
In Claude Desktop, the config file is located at:
- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
- **Mac**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Linux**: `~/.config/Claude/claude_desktop_config.json`
Add the following configuration:
```json
{
  "mcpServers": {
    "debug-mcp": {
      "command": "node",
      "args": ["E:/work/debug-mcp/dist/index.js"],
      "env": {
        "DEBUG_HOST": "localhost"
      }
    }
  }
}
```
**Note**:
- Replace `E:/work/debug-mcp/dist/index.js` with your actual path
- Port is automatically assigned, no need to configure `DEBUG_PORT`
- For cross-device debugging, set `DEBUG_HOST` to `0.0.0.0` or your LAN IP
**Cross-Device Debugging Setup**:
To enable debugging from mobile devices or other computers on your network:
1. **Find your LAN IP**:
- Windows: `ipconfig` → look for "IPv4 Address" (e.g., 192.168.1.100)
- Mac/Linux: `ifconfig` or `ip addr` → look for "inet" (e.g., 192.168.1.100)
2. **Update MCP config**:
```json
{
  "env": {
    "DEBUG_HOST": "192.168.1.100" // Your LAN IP
    // Note: Port is automatically assigned, no need to configure DEBUG_PORT
  }
}
```
3. **Get actual port**: After starting the MCP server, use the `get_server_port` tool to query the actual assigned port
4. **Ensure firewall allows that port** (port number queried via `get_server_port`)
5. **Devices can now send logs to** `http://192.168.1.100:PORT/api/log` (PORT is the actual assigned port)
### Step 3: Use with AI
When debugging, simply describe the bug to the AI. The AI will:
1. **Analyze the bug** using `analyze_bug` tool
2. **Detect the environment** of your files using `detect_environment`
3. **Get debug template** using `get_debug_template` tool
4. **Manually insert the debug code** into your files
**Important for AI**:
- ⚠️ **DO NOT** use `console.log()` - use the template code which sends logs via HTTP POST
- ⚠️ **ALWAYS** provide `projectPath` as the absolute path to the project directory
- The debug code sends logs to the dynamically assigned server URL (use `get_server_port` tool to get the actual URL)
- For cross-device debugging, the URL will use the configured `DEBUG_HOST` and automatically assigned port
- Logs are stored at `{projectPath}/.debug/debug.log`
### Example Workflow
```
User: "My login button doesn't work when I click it"

AI Process:
1. analyze_bug("Login button doesn't work")
   → Returns: Possible causes (event listener, API error, validation)

2. detect_environment("src/components/Login.js")
   → Returns: "browser"

3. get_debug_template(
     environment="browser",
     logMessage="Login button clicked",
     variables=["username", "password"],
     projectPath="/path/to/project"
   )
   → Returns: HTTP-based debug code (NOT console.log)

4. AI inserts the debug code into Login.js manually

   ⚠️ CRITICAL STEP - AI informs the user:
   💭 My hypothesis: The button's click event listener might not be attached properly.
   📋 Test steps:
      1. Restart your application
      2. Navigate to the login page
      3. Click the login button
      4. Check if anything happens
   ✅ Expected results: You should see the button click logged in the debug system
   📢 Please report back: Did the button respond? Any error messages in console?

5. User tests and reports: "Nothing happened when I clicked"

6. read_debug_logs(projectPath="/path/to/project")
   → Returns: No log entries for button click

7. AI analyzes: "Since we don't see the button click log, the event listener
   isn't firing. Let me add more logs to check if the component mounted..."

8. [Repeat debugging cycle with targeted hypotheses]

9. User reports: "Now I see the log! But getting 401 error"

10. AI analyzes logs and suggests: "The 401 error indicates authentication failed.
    Let me check the API request format..."

11. [Continue until bug is fixed]
```
## Key AI Behaviors
When using this MCP server, the AI should follow this pattern:
### ✅ DO:
1. **Always provide projectPath** as the absolute path to the project directory
2. **Use get_debug_template** to get HTTP-based logging code (NOT console.log)
3. **Manually insert** the debug code at the appropriate location
4. **Explain your hypothesis** - what you think might be wrong
5. **Provide clear test steps** - exact actions the user should take
6. **State expected results** - what should happen if your hypothesis is correct
7. **Ask for feedback** - specifically request user to report results
8. **Use read_debug_logs** after user tests to analyze actual runtime data
9. **Handle unexpected results** - ask follow-up questions when results differ from expectations
10. **Iterate** - continue debugging based on data until the issue is resolved
11. **Auto-cleanup on success** - When user says "It works!", "Fixed!", "Success!", etc., IMMEDIATELY:
- Use `clear_debug_logs` to wipe the log file
- Remove all debug code blocks (search for `DEBUG CODE START/END` markers)
- Revert ALL temporary visual/test markers (search for `TEMPORARY DEBUG MARKER` comments)
- Restore original code (button labels, images, styles, colors, etc.)
- **DO NOT ask** "Should I clean up?" - just do it directly
12. **Track ALL modifications** - Keep a list of every change:
- Debug code blocks (HTTP POST logs)
- Visual markers (test images, button text changes, color highlights)
- Style modifications (CSS changes for testing)
- Any other temporary changes
### ❌ DON'T:
1. ❌ Use console.log() - always use the provided HTTP POST templates
2. ❌ Omit projectPath - logs will go to wrong directory
3. ❌ Skip explaining your reasoning - user needs to understand your hypothesis
4. ❌ Forget test steps - user needs clear instructions
5. ❌ Ignore unexpected results - investigate when things don't work as planned
6. ❌ Ask "Should I clean up?" when user confirms success - just clean up directly
7. ❌ Forget temporary visual markers - ALL test changes must be reverted
## Available MCP Tools
### get_server_info
Get server configuration, HTTP endpoints, and supported environments.
```json
{}
```
### get_server_port
Get the current HTTP server port and URL information. The port is automatically assigned by the system. Use this tool to query the actual assigned port number and complete server URL.
```json
{}
```
**Returns**:
- `port`: Currently assigned port number
- `host`: Server host address
- `url`: Complete log endpoint URL
- `baseUrl`: Server base URL
- `endpoints`: All available API endpoints
**Use Cases**:
- When you need to know the actual port number
- For cross-device debugging, need to inform other devices of the URL
- To verify the server has started correctly
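An illustrative response (the port differs on every run, and the exact endpoint list may vary):

```json
{
  "port": 52344,
  "host": "localhost",
  "url": "http://localhost:52344/api/log",
  "baseUrl": "http://localhost:52344",
  "endpoints": ["/api/log", "/api/stats", "/health"]
}
```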
### analyze_bug
Analyze a bug description and get intelligent suggestions about possible causes.
```json
{
  "bugDescription": "Login button doesn't respond when clicked",
  "files": ["login.js", "auth.js"]
}
```
### detect_environment
Automatically detect the runtime environment of a file.
```json
{
  "filePath": "src/components/Login.js"
}
```
**Returns**: Environment type (browser, node, python, etc.) with confidence level
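An illustrative return value (the exact field names are assumptions based on the description above):

```json
{
  "environment": "browser",
  "confidence": "high"
}
```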
### get_debug_template
Get debug code template for a specific environment. **This is the main tool for adding debug logs.**
```json
{
  "environment": "browser",
  "logMessage": "Login button clicked",
  "variables": ["username", "password"],
  "projectPath": "/Users/username/my-project",
  "level": "info"
}
```
**⚠️ IMPORTANT**:
- The returned code uses **HTTP POST**, NOT `console.log()`
- `projectPath` is **REQUIRED** - use the absolute path to your project directory
- DO NOT modify the generated code to use `console.log()`
- Manually insert the code into your file at the appropriate location
**Example returned code**:
```javascript
// ==================== DEBUG CODE START ====================
// ⚠️ DO NOT REPLACE WITH console.log()
// This code sends logs via HTTP POST to the debug server
// ==================== DEBUG CODE START ====================
fetch('http://localhost:PORT/api/log', { // PORT is auto-assigned, use get_server_port tool to query
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    projectPath: '/Users/username/my-project',
    timestamp: new Date().toISOString(),
    level: 'info',
    message: 'Login button clicked',
    data: { username, password }
  })
}).catch(err => console.error('[Debug Log Failed]', err));
// ==================== DEBUG CODE END ====================
```
### read_debug_logs
Read debug logs from the project.
```json
{
  "projectPath": "/Users/username/my-project",
  "lastLines": 100
}
```
**Returns**: Array of log entries with timestamps, levels, messages, and data
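An illustrative result (each entry mirrors the JSON payload the application POSTed):

```json
[
  {
    "timestamp": "2025-01-03T10:30:00.000Z",
    "level": "info",
    "message": "Login button clicked",
    "data": { "username": "test", "isLoggedIn": false }
  }
]
```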
### list_debug_blocks
List all debug blocks (code between `DEBUG CODE START` and `DEBUG CODE END` markers) in project files.
```json
{
  "projectPath": "/Users/username/my-project"
}
```
### clear_debug_logs
Clear all debug logs from the project log file.
```json
{
  "projectPath": "/Users/username/my-project"
}
```
## Complete Debugging Session Example
Here's a complete example of how a debugging session should flow:
### Initial Problem Report
**User**: "My React app's login form isn't submitting. When I click the submit button, nothing happens."
### Step 1: Analysis Phase
**AI**: Let me analyze this issue and check your environment.
*Uses: `analyze_bug` and `detect_environment`*
"Based on your description, the possible causes are:
1. Event listener not attached to the submit button
2. Form validation preventing submission
3. Submit handler function has an error
I've detected you're using a Browser environment with React."
### Step 2: First Debug Code Injection
**AI**: I'll add a log to check if the button click event is firing.
*Uses: `get_debug_template` with projectPath and inserts the code*
```javascript
// AI inserts this into handleSubmit function
fetch('http://localhost:PORT/api/log', { // PORT is auto-assigned, use get_server_port tool to query
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    projectPath: '/Users/john/my-app',
    timestamp: new Date().toISOString(),
    level: 'info',
    message: 'Submit button clicked',
    data: { formData, isValid }
  })
}).catch(err => console.error('[Debug Log Failed]', err));
```
### Step 3: AI Provides Test Instructions
**AI**:
💭 **My hypothesis**: The submit button's click handler might not be attached, or there's an early return preventing the handler from executing.
📋 **Test steps**:
1. Restart your React development server
2. Open the login page in your browser
3. Fill in any required fields
4. Click the submit button
5. Check the browser console for any errors
✅ **Expected results**: If the handler is working, you should see a debug log entry indicating the button was clicked.
📢 **Please report back**: What happened when you clicked? Did you see any console errors?
### Step 4: User Reports Back
**User**: "I clicked the button but nothing happened. No errors in console though."
### Step 5: AI Analyzes Logs
**AI**: Let me check the debug logs to see what was captured.
*Uses: `read_debug_logs(projectPath="/Users/john/my-app")`*
"I don't see any log entries for the button click. This confirms the click handler isn't being called at all."
### Step 6: Second Iteration
**AI**: The event listener isn't firing. Let me add a log to check if the component is mounting correctly, and also verify the event handler is attached.
*Uses: `get_debug_template` again and inserts more debug logs*
**AI**:
💭 **New hypothesis**: The component might not be mounting, or the JSX might have an error with the event handler binding.
📋 **Test steps**:
1. Refresh the page
2. Check if you see a "Login component mounted" log
3. Try clicking submit again
✅ **Expected results**: You should see the component mount log, which will tell us if the component loaded.
### Step 7: User Provides Unexpected Results
**User**: "I see the mount log! But when I click submit, I get 'handleSubmit is not defined' in the console."
### Step 8: AI Investigates Unexpected Result
**AI**: Ah! That's very helpful. The error "handleSubmit is not defined" means the function isn't being properly bound or imported. Let me check your component code...
[AI reads the file and identifies the issue]
"I found the problem! You have `onClick={handleSubmit}` but the function is defined as `const handleLoginSubmit = (...)`. The names don't match. Let me fix that."
### Step 9: Resolution
**AI**: I'll correct the function name in your JSX.
[AI fixes the code]
💡 **Root cause**: Function name mismatch - JSX references `handleSubmit` but the actual function is named `handleLoginSubmit`
📋 **Final test steps**:
1. Restart the server
2. Try submitting the form again
3. Verify it works correctly
✅ **Expected results**: The form should now submit successfully!
### Step 10: Auto-Cleanup (Including Visual Markers)
**User**: "It works! Thanks!"
**AI**: [Detects success confirmation and reviews all modifications]
Great! Let me clean up all debug code and temporary changes right away.
*Uses: `clear_debug_logs` and searches for all debug markers*
"✅ All debug logs have been cleared
✅ Debug code removed from:
- src/components/Login.js (lines 15-25) - HTTP POST logs
- src/api/auth.js (lines 42-50) - HTTP POST logs
✅ Temporary visual markers reverted:
- src/components/Login.js - Button label changed from 'TEST-LOGIN' back to 'Login'
- src/styles/button.css - Background color changed from 'red' back to 'blue'
- src/assets/test-image.png - Removed test image reference
Your app is now clean and ready to go!"
---
### Key Improvements in This Example:
1. ✅ **AI always explained its hypothesis** before adding code
2. ✅ **AI provided clear, numbered test steps**
3. ✅ **AI stated expected results** clearly
4. ✅ **AI asked for specific feedback** from the user
5. ✅ **AI used read_debug_logs** to verify what actually happened
6. ✅ **AI investigated unexpected results** (the function name error)
7. ✅ **AI iterated based on data** until finding the root cause
8. ✅ **AI auto-cleaned immediately** when user confirmed success - NO asking "Should I clean up?"
### Success Detection Triggers
AI should auto-cleanup when user says:
- ✅ "It works!"
- ✅ "Fixed!"
- ✅ "Success!"
- ✅ "Great!"
- ✅ "Thanks!"
- ✅ "Perfect!"
- ✅ "That solved it"
- ✅ "Working now"
AI should continue debugging when user says:
- ❌ "Still not working"
- ❌ "Same error"
- ❌ "Didn't help"
- ❌ "Nothing changed"
- ❌ "Getting a different error"
## Temporary Modification Marking
When making ANY temporary changes for debugging purposes, you MUST mark them clearly for cleanup:
### Types of Temporary Changes
#### 1. Debug Code (Automatic)
Already wrapped in `DEBUG CODE START/END` markers:
```javascript
// ==================== DEBUG CODE START ====================
fetch('http://localhost:PORT/api/log', { /* ... */ }); // PORT is auto-assigned, use get_server_port tool to query
// ==================== DEBUG CODE END ====================
```
#### 2. Visual/Test Markers (Manual - MUST ADD)
**Button text changes**:
```javascript
// TEMPORARY DEBUG MARKER - WILL BE REVERTED
<button>TEST-LOGIN</button> // Changed from "Login"
// END TEMPORARY DEBUG MARKER
```
**Test images**:
```jsx
// TEMPORARY DEBUG MARKER - WILL BE REVERTED
<img src="/test-debug-image.png" alt="Testing visibility" />
// END TEMPORARY DEBUG MARKER
```
**Color highlights**:
```css
/* TEMPORARY DEBUG MARKER - WILL BE REVERTED */
.button { background-color: red; } /* Changed from blue */
/* END TEMPORARY DEBUG MARKER */
```
**Placeholder text**:
```javascript
// TEMPORARY DEBUG MARKER - WILL BE REVERTED
const label = "DEBUG MODE - Button at top"; // Changed from "Submit"
// END TEMPORARY DEBUG MARKER
```
**Test flags**:
```javascript
// TEMPORARY DEBUG MARKER - WILL BE REVERTED
const isDebugging = true; // Will be removed
// END TEMPORARY DEBUG MARKER
```
### Cleanup Checklist
When user confirms success, check and revert:
✅ **Debug Code**:
- [ ] Search for `DEBUG CODE START` and remove all blocks
- [ ] Clear debug logs using `clear_debug_logs`
✅ **Visual Markers**:
- [ ] Search for `TEMPORARY DEBUG MARKER` comments
- [ ] Revert button labels to original
- [ ] Remove test images
- [ ] Restore original colors/styles
- [ ] Remove placeholder text
- [ ] Delete test flags/variables
✅ **Verify**:
- [ ] App looks and behaves exactly as before debugging
- [ ] No debug-related comments left
- [ ] No test assets referenced
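The marker searches in this checklist are plain text searches; a minimal `grep` sketch (the `demo-src` tree is created here only for illustration — point `grep` at your real source directory):

```shell
# Illustrative setup: a tiny source tree with one leftover temporary marker.
mkdir -p demo-src
printf '// TEMPORARY DEBUG MARKER - WILL BE REVERTED\n' > demo-src/Login.js

# List files that still contain debug blocks or temporary markers.
grep -rl "DEBUG CODE START" demo-src || true   # nothing printed: no debug blocks left
grep -rl "TEMPORARY DEBUG MARKER" demo-src     # prints demo-src/Login.js
```

Any file these commands print still needs cleanup before the session can be considered closed.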
## Cross-Device Debugging
Debug mobile apps and other devices on your network by configuring the server host.
### When to Use Cross-Device Debugging
- 📱 **Mobile Web Apps**: Debug mobile browsers from your development machine
- 📲 **React Native Apps**: Test on physical devices while capturing logs
- 🌐 **Multiple Devices**: Test your web app on phones, tablets, and computers simultaneously
- 🏠 **Local Network Testing**: Test on devices without deploying to production
### Setup Guide
#### 1. Find Your Computer's LAN IP
**Windows**:
```cmd
ipconfig
```
Look for "IPv4 Address" → e.g., `192.168.1.100`
**Mac/Linux**:
```bash
ifconfig | grep "inet " | grep -v 127.0.0.1
# or
ip addr show | grep "inet " | grep -v 127.0.0.1
```
Look for "inet" → e.g., `192.168.1.100`
#### 2. Configure MCP Server
Update your MCP client config:
```json
{
  "mcpServers": {
    "debug-mcp": {
      "command": "node",
      "args": ["D:/work/debug-mcp/dist/index.js"],
      "env": {
        "DEBUG_HOST": "192.168.1.100" // Your LAN IP
        // Note: Port is automatically assigned, no need to configure DEBUG_PORT
      }
    }
  }
}
```
Or use `0.0.0.0` to accept connections from any device:
```json
{
  "env": {
    "DEBUG_HOST": "0.0.0.0"
    // Note: Port is automatically assigned, no need to configure DEBUG_PORT
  }
}
```
#### 3. Configure Firewall (if needed)
**Windows**:
```cmd
# Use get_server_port tool to get actual port, then replace PORT
netsh advfirewall firewall add rule name="Debug MCP" dir=in action=allow protocol=TCP localport=PORT
```
**Mac/Linux**:
```bash
# Usually not needed, but if you have a firewall:
# Use get_server_port tool to get actual port, then replace PORT
sudo ufw allow PORT/tcp
```
#### 4. Test Connection
From another device on your network:
```bash
# Use get_server_port tool to get actual port, then replace PORT
curl http://192.168.1.100:PORT/health
```
Should return: `{"status":"ok"}`
### Usage Example
**Scenario**: Debugging a mobile web app
1. **Configure server** with LAN IP: `DEBUG_HOST=192.168.1.100`
2. **AI generates debug code** with the correct URL:
```javascript
// Use get_server_port tool to get actual port, then replace PORT
fetch('http://192.168.1.100:PORT/api/log', {
  method: 'POST',
  body: JSON.stringify({ message: 'Button clicked' })
});
```
3. **Open your web app** on your mobile device at `http://192.168.1.100:3000`
4. **Test the app** on your phone - logs are sent to your computer
5. **AI reads logs** from your computer: `read_debug_logs(projectPath="/path/to/project")`
### Host Configuration Options
| DEBUG_HOST Value | Description | Use Case |
|-----------------|-------------|----------|
| `localhost` | Only local connections | Default, local debugging |
| `0.0.0.0` | Accept from any device | Flexible testing |
| `192.168.1.100` | Your specific LAN IP | Explicit, recommended for mobile |
| `127.0.0.1` | Localhost only | Same as localhost |
### Troubleshooting
**Cannot connect from mobile device**:
1. Verify devices are on the same network
2. Use `get_server_port` tool to query the actual port
3. Check firewall settings for that port
4. Confirm the MCP server is running
5. Test with curl: `curl http://YOUR_IP:PORT/health` (PORT is the actual port)
**Logs not appearing**:
1. Check the generated code uses the correct URL
2. Verify projectPath is set correctly
3. Check browser console for network errors
4. Ensure the device can reach your computer
## HTTP API Endpoints
The debug server runs an HTTP API server on an automatically assigned port. Use the `get_server_port` MCP tool to query the actual port and URL.
### POST /api/log
Receives debug log entries from running applications.
**Example**:
```bash
# Use get_server_port tool to query actual port, then replace PORT
curl -X POST http://localhost:PORT/api/log \
  -H "Content-Type: application/json" \
  -d '{
    "projectPath": "/path/to/project",
    "timestamp": "2025-01-03T10:30:00Z",
    "level": "info",
    "message": "Login button clicked",
    "data": { "username": "test", "isLoggedIn": false }
  }'
```
### GET /api/log
Retrieves debug logs.
```bash
# Use get_server_port tool to query actual port, then replace PORT
curl "http://localhost:PORT/api/log?last=100&projectPath=/path/to/project"
```
### DELETE /api/log
Clears all debug logs.
```bash
# Use get_server_port tool to query actual port, then replace PORT
curl -X DELETE "http://localhost:PORT/api/log?projectPath=/path/to/project"
```
### GET /api/stats
Gets log statistics.
```bash
# Use get_server_port tool to query actual port, then replace PORT
curl "http://localhost:PORT/api/stats?projectPath=/path/to/project"
```
### GET /health
Health check endpoint.
```bash
# Use get_server_port tool to query actual port, then replace PORT
curl http://localhost:PORT/health
```
## Environment-Specific Examples
### Browser / Node.js (18+)
```javascript
// Uses fetch API
fetch('http://localhost:PORT/api/log', { // PORT is auto-assigned, use get_server_port tool to query
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    projectPath: '/path/to/project',
    timestamp: new Date().toISOString(),
    level: 'info',
    message: 'Debug message',
    data: { variable1, variable2 }
  })
}).catch(err => console.error('[Debug Log Failed]', err));
```
### Node.js Legacy (v14-17)
```javascript
// Uses http module
const http = require('http');

const data = JSON.stringify({
  projectPath: '/path/to/project',
  timestamp: new Date().toISOString(),
  level: 'info',
  message: 'Debug message',
  data: { variable1, variable2 }
});

// Use get_server_port tool to get actual port, then replace PORT
const req = http.request('http://localhost:PORT/api/log', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' }
});
req.on('error', (err) => console.error('[Debug Log Failed]', err)); // prevents an unhandled 'error' event from crashing the process
req.write(data);
req.end();
```
### Python
```python
import requests
from datetime import datetime

try:
    requests.post(
        'http://localhost:PORT/api/log',  # PORT is auto-assigned, use get_server_port tool
        json={
            'projectPath': '/path/to/project',
            'timestamp': datetime.now().isoformat(),
            'level': 'info',
            'message': 'Debug message',
            'data': {'variable1': variable1, 'variable2': variable2}
        },
        timeout=0.1
    )
except Exception:
    pass  # Silent fail to not break main logic
```
### PHP
```php
$logData = json_encode([
    'projectPath' => '/path/to/project',
    'timestamp' => date('c'),
    'level' => 'info',
    'message' => 'Debug message',
    'data' => ['variable1' => $variable1, 'variable2' => $variable2]
]);

// Use get_server_port tool to get actual port, then replace PORT
$ch = curl_init('http://localhost:PORT/api/log');
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $logData);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 100);
curl_exec($ch);
curl_close($ch);
```
### WeChat Mini Program
```javascript
wx.request({
  url: 'http://localhost:PORT/api/log', // PORT is auto-assigned, use get_server_port tool
  method: 'POST',
  data: {
    projectPath: '/path/to/project',
    timestamp: new Date().toISOString(),
    level: 'info',
    message: 'Debug message',
    data: { variable1, variable2 }
  },
  fail: (err) => console.error('[Debug Log Failed]', err)
});
```
### Java
```java
// Requires DebugHttpClient.java utility class
// The tool will automatically detect if the utility exists and guide you to add it
try {
    DebugHttpClient.sendLog(
        "http://localhost:PORT/api/log", // PORT is auto-assigned, use get_server_port tool
        "Debug message",
        new java.util.HashMap<String, Object>() {{
            put("variable1", variable1);
            put("variable2", variable2);
        }},
        "info"
    );
} catch (Exception e) {
    // Silent fail - do not interrupt main logic
}
```
### Android
```java
// Android: Network requests must be executed in a background thread
if (android.os.Looper.getMainLooper().getThread() == Thread.currentThread()) {
    // On the main thread: run the request in a background thread
    new Thread(() -> {
        try {
            DebugHttpClient.sendLog(
                "http://localhost:PORT/api/log", // PORT is auto-assigned, use get_server_port tool
                "Debug message",
                new java.util.HashMap<String, Object>() {{
                    put("variable1", variable1);
                    put("variable2", variable2);
                }},
                "info"
            );
        } catch (Exception e) {
            // Silent fail
        }
    }).start();
} else {
    // Already on a background thread: execute directly
    try {
        DebugHttpClient.sendLog(
            "http://localhost:PORT/api/log", // PORT is auto-assigned, use get_server_port tool
            "Debug message",
            new java.util.HashMap<String, Object>() {{
                put("variable1", variable1);
                put("variable2", variable2);
            }},
            "info"
        );
    } catch (Exception e) {
        // Silent fail
    }
}
```
### Kotlin
```kotlin
// Kotlin: Use coroutines for async network requests (Android) or a direct call (JVM)
try {
    // For Android: Use a coroutine scope
    // CoroutineScope(Dispatchers.IO).launch {
    //     DebugHttpClient.sendLog(...)
    // }

    // For standard JVM: Direct call
    DebugHttpClient.sendLog(
        "http://localhost:PORT/api/log", // PORT is auto-assigned, use get_server_port tool
        "Debug message",
        mapOf("variable1" to variable1, "variable2" to variable2),
        "info"
    )
} catch (e: Exception) {
    // Silent fail - do not interrupt main logic
}
```
### Objective-C (iOS/macOS)
```objective-c
// Use get_server_port tool to get actual port, then replace PORT
NSURL *url = [NSURL URLWithString:@"http://localhost:PORT/api/log"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
[request setHTTPMethod:@"POST"];
[request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
[request setTimeoutInterval:0.1];

NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
[formatter setDateFormat:@"yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"];
[formatter setTimeZone:[NSTimeZone timeZoneWithName:@"UTC"]];
NSString *timestamp = [formatter stringFromDate:[NSDate date]];

NSDictionary *logData = @{
    @"projectPath": @"/path/to/project",
    @"timestamp": timestamp,
    @"level": @"info",
    @"message": @"Debug message",
    @"data": @{@"variable1": variable1, @"variable2": variable2}
};

NSError *error;
NSData *jsonData = [NSJSONSerialization dataWithJSONObject:logData options:0 error:&error];
if (jsonData) {
    [request setHTTPBody:jsonData];
    NSURLSessionDataTask *task = [[NSURLSession sharedSession] dataTaskWithRequest:request
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            // Silent fail - do not interrupt main logic
        }];
    [task resume];
}
```
## Backup Files
Before modifying any file, the tool creates a backup:
```
original.js.backup.1704288000000
```
These backups can be used to restore files if needed.
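Restoring from a backup is a plain file copy; a minimal sketch (the timestamped filename and file contents are illustrative):

```shell
# Illustrative files: the backup the tool created, plus the debug-modified original.
printf 'original content\n' > original.js.backup.1704288000000
printf 'debug-modified content\n' > original.js

# Restore the pre-debug version by copying the backup back over the file.
cp original.js.backup.1704288000000 original.js
cat original.js   # prints: original content
```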
## Troubleshooting
### Debug logs not appearing
1. Use `get_server_port` tool to query the actual port
2. Verify the HTTP server is running: `curl http://localhost:PORT/health` (PORT is the actual port)
3. Check if the application can reach the server URL
4. Look for `[Debug Log Failed]` errors in the application console
### Environment detection fails
1. Manually specify the `environment` parameter when calling `get_debug_template`
2. Check that file content is not empty
3. Verify the file extension matches the environment
### Debug code not removed
1. Ensure the `DEBUG CODE START` and `DEBUG CODE END` markers are present
2. Check file permissions
3. Use `list_debug_blocks` to see what will be removed
## Architecture
```
src/
├── index.ts            # Entry point (MCP + HTTP servers)
├── mcp/
│   └── tools.ts        # MCP tool implementations
├── http/
│   ├── server.ts       # HTTP API server
│   └── log-handler.ts  # Log file management
├── tools/
│   ├── analyze.ts      # Bug analysis
│   ├── injector.ts     # Debug code injection
│   ├── cleanup.ts      # Debug code removal
│   └── test-steps.ts   # Test step generation
├── adapters/
│   ├── index.ts        # Environment adapters
│   └── detector.ts     # Environment detection
└── utils/
    ├── parser.ts       # Code parsing (AST)
    └── file.ts         # File operations
```
## Contributing
Contributions are welcome! Please feel free to submit issues or pull requests.
## License
MIT