
Debug MCP

An intelligent debugging assistant built on the Model Context Protocol (MCP) that helps automate the debugging process by analyzing bugs, injecting debug logs via HTTP, and iteratively fixing issues based on real-time feedback.

Key Features

  • ๐Ÿ” Automated Bug Analysis: Analyzes bug descriptions and suggests possible causes

  • ๐ŸŒ Multi-Environment Support: Automatically detects and adapts to different runtime environments

  • ๐Ÿ“ HTTP-Based Logging: Sends debug logs via HTTP POST to a centralized server (NOT console.log)

  • ๐Ÿ”„ Iterative Debugging: Continues debugging based on user feedback until the issue is resolved

  • ๐Ÿงน Auto Cleanup: Removes all debug code automatically after the bug is fixed

  • ๐Ÿ“Š Project-Scoped Logs: Logs are stored per-project in {projectPath}/.debug/debug.log

How It Works

โš ๏ธ Important: This MCP server does NOT use console.log(). Instead, it injects code that sends logs via HTTP POST to a local debug server. The logs are then stored in your project directory at {projectPath}/.debug/debug.log.

Why HTTP-Based Logging?

  1. Centralized collection: All logs from different parts of your application are collected in one place

  2. Structured data: Logs are stored as JSON with timestamps, levels, and context

  3. AI-friendly: The AI can easily read and analyze logs via the read_debug_logs tool

  4. Project-scoped: Logs are stored in your project directory, not scattered across console outputs
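For illustration, a single entry in {projectPath}/.debug/debug.log might look roughly like this (the field names mirror the POST /api/log payload shown later in this document; the exact on-disk formatting may differ):

{
  "projectPath": "/Users/username/my-project",
  "timestamp": "2025-01-03T10:30:00.000Z",
  "level": "info",
  "message": "Login button clicked",
  "data": { "username": "test", "isLoggedIn": false }
}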

Supported Environments

| Environment | Description |
| --- | --- |
| Browser | Web applications using fetch API |
| Node.js | Server-side Node.js (18+) with native fetch |
| Node.js Legacy | Older Node.js versions using http module |
| React Native | Mobile apps using React Native |
| Electron (Main) | Electron main process (direct file write) |
| Electron (Renderer) | Electron renderer process (IPC) |
| WeChat Mini Program | WeChat/Alipay mini programs (wx.request) |
| PHP | Server-side PHP (curl) |
| Python | Server-side Python (requests library) |
| Java | Server-side Java using DebugHttpClient utility |
| Android | Android apps with thread-safe network requests |
| Kotlin | Kotlin applications with coroutine support |
| Objective-C | iOS/macOS applications using NSURLSession |

Installation

# Clone the repository
git clone https://gitee.com/UPUP0326/debug-mcp.git
cd debug-mcp

# Install dependencies
npm install

# Build the project
npm run build

# Start the server
npm start

Configuration

Port Configuration (Optional)

Important: The HTTP server port is now automatically assigned by the system. No manual configuration is needed. Each MCP instance automatically gets an available port to avoid port conflicts.

If you need to use a fixed port (not recommended), you can set it in environment variables:

# HTTP Server Port (optional, auto-assigned by default)
# If set, this port will be used; if not set, the system automatically assigns an available port
DEBUG_PORT=37373

# HTTP Server Host (default: localhost)
# Use '0.0.0.0' to accept connections from any device on your network
# Use your LAN IP (e.g., '192.168.1.100') to allow other devices to send logs
DEBUG_HOST=localhost

# Examples:
# DEBUG_HOST=0.0.0.0        # Accept connections from any device
# DEBUG_HOST=192.168.1.100  # Your computer's LAN IP
# DEBUG_HOST=localhost      # Only local connections (default)

# Log file path (relative to project directory)
LOG_FILE=.debug/debug.log

Get Actual Port: Use the get_server_port MCP tool to query the currently assigned port and URL.

Quick Start Guide

Step 1: Start the MCP Server

npm start

Starting the server launches two components:

  • MCP Server: Listening on stdio for AI communication

  • HTTP API Server: Listens for debug logs on an automatically assigned port (query it via the get_server_port tool)

Step 2: Configure MCP Client

Cursor IDE Configuration

In Cursor, MCP servers are configured in the settings. Open Cursor settings, find the MCP configuration section, and add:

{ "mcpServers": { "debug-mcp": { "command": "node", "args": ["E:/work/debug-mcp/dist/index.js"], "env": { "DEBUG_HOST": "localhost" } } } }

Note:

  • Replace E:/work/debug-mcp/dist/index.js with your actual path

  • Port is automatically assigned, no need to configure DEBUG_PORT

  • For cross-device debugging, set DEBUG_HOST to 0.0.0.0 or your LAN IP

Claude Desktop Configuration

In Claude Desktop, the config file is located at:

  • Windows: %APPDATA%\Claude\claude_desktop_config.json

  • Mac: ~/Library/Application Support/Claude/claude_desktop_config.json

  • Linux: ~/.config/Claude/claude_desktop_config.json

Add the following configuration:

{ "mcpServers": { "debug-mcp": { "command": "node", "args": ["E:/work/debug-mcp/dist/index.js"], "env": { "DEBUG_HOST": "localhost" } } } }

Note:

  • Replace E:/work/debug-mcp/dist/index.js with your actual path

  • Port is automatically assigned, no need to configure DEBUG_PORT

  • For cross-device debugging, set DEBUG_HOST to 0.0.0.0 or your LAN IP

Cross-Device Debugging Setup:

To enable debugging from mobile devices or other computers on your network:

  1. Find your LAN IP:

    • Windows: ipconfig → look for "IPv4 Address" (e.g., 192.168.1.100)

    • Mac/Linux: ifconfig or ip addr → look for "inet" (e.g., 192.168.1.100)

  2. Update MCP config:

    { "env": { "DEBUG_HOST": "192.168.1.100" // Your LAN IP // Note: Port is automatically assigned, no need to configure DEBUG_PORT } }
  3. Get actual port: After starting the MCP server, use the get_server_port tool to query the actual assigned port

  4. Ensure firewall allows that port (port number queried via get_server_port)

  5. Devices can now send logs to http://192.168.1.100:PORT/api/log (PORT is the actual assigned port)

Step 3: Use with AI

When debugging, simply describe the bug to the AI. The AI will:

  1. Analyze the bug using analyze_bug tool

  2. Detect the environment of your files using detect_environment

  3. Get debug template using get_debug_template tool

  4. Manually insert the debug code into your files

Important for AI:

  • โš ๏ธ DO NOT use console.log() - use the template code which sends logs via HTTP POST

  • โš ๏ธ ALWAYS provide projectPath as the absolute path to the project directory

  • The debug code sends logs to the dynamically assigned server URL (use get_server_port tool to get the actual URL)

  • For cross-device debugging, the URL will use the configured DEBUG_HOST and automatically assigned port

  • Logs are stored at {projectPath}/.debug/debug.log

Example Workflow

User: "My login button doesn't work when I click it" AI Process: 1. analyze_bug("Login button doesn't work") โ†’ Returns: Possible causes (event listener, API error, validation) 2. detect_environment("src/components/Login.js") โ†’ Returns: "browser" 3. get_debug_template( environment="browser", logMessage="Login button clicked", variables=["username", "password"], projectPath="/path/to/project" ) โ†’ Returns: HTTP-based debug code (NOT console.log) 4. AI inserts the debug code into Login.js manually โš ๏ธ CRITICAL STEP - AI informs the user: ๐Ÿ’ญ My hypothesis: The button's click event listener might not be attached properly. ๐Ÿ“‹ Test steps: 1. Restart your application 2. Navigate to the login page 3. Click the login button 4. Check if anything happens โœ… Expected results: You should see the button click logged in the debug system ๐Ÿ“ข Please report back: Did the button respond? Any error messages in console? 5. User tests and reports: "Nothing happened when I clicked" 6. read_debug_logs(projectPath="/path/to/project") โ†’ Returns: No log entries for button click 7. AI analyzes: "Since we don't see the button click log, the event listener isn't firing. Let me add more logs to check if the component mounted..." 8. [Repeat debugging cycle with targeted hypotheses] 9. User reports: "Now I see the log! But getting 401 error" 10. AI analyzes logs and suggests: "The 401 error indicates authentication failed. Let me check the API request format..." 11. [Continue until bug is fixed]

Key AI Behaviors

When using this MCP server, the AI should follow this pattern:

✅ DO:

  1. Always provide projectPath as the absolute path to the project directory

  2. Use get_debug_template to get HTTP-based logging code (NOT console.log)

  3. Manually insert the debug code at the appropriate location

  4. Explain your hypothesis - what you think might be wrong

  5. Provide clear test steps - exact actions the user should take

  6. State expected results - what should happen if your hypothesis is correct

  7. Ask for feedback - specifically request user to report results

  8. Use read_debug_logs after user tests to analyze actual runtime data

  9. Handle unexpected results - ask follow-up questions when results differ from expectations

  10. Iterate - continue debugging based on data until the issue is resolved

  11. Auto-cleanup on success - When user says "It works!", "Fixed!", "Success!", etc., IMMEDIATELY:

    • Use clear_debug_logs to wipe the log file

    • Remove all debug code blocks (search for DEBUG CODE START/END markers)

    • Revert ALL temporary visual/test markers (search for TEMPORARY DEBUG MARKER comments)

    • Restore original code (button labels, images, styles, colors, etc.)

    • DO NOT ask "Should I clean up?" - just do it directly

  12. Track ALL modifications - Keep a list of every change:

    • Debug code blocks (HTTP POST logs)

    • Visual markers (test images, button text changes, color highlights)

    • Style modifications (CSS changes for testing)

    • Any other temporary changes

โŒ DON'T:

  1. โŒ Use console.log() - always use the provided HTTP POST templates

  2. โŒ Omit projectPath - logs will go to wrong directory

  3. โŒ Skip explaining your reasoning - user needs to understand your hypothesis

  4. โŒ Forget test steps - user needs clear instructions

  5. โŒ Ignore unexpected results - investigate when things don't work as planned

  6. โŒ Ask "Should I clean up?" when user confirms success - just clean up directly

  7. โŒ Forget temporary visual markers - ALL test changes must be reverted

Available MCP Tools

get_server_info

Get server configuration, HTTP endpoints, and supported environments.

{}

get_server_port

Get the current HTTP server port and URL information. The port is automatically assigned by the system. Use this tool to query the actual assigned port number and complete server URL.

{}

Returns:

  • port: Currently assigned port number

  • host: Server host address

  • url: Complete log endpoint URL

  • baseUrl: Server base URL

  • endpoints: All available API endpoints
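As a rough illustration, a response could look like the following (the field names come from the list above; the concrete values, especially the port number and endpoint list, are placeholders):

{
  "port": 52341,
  "host": "localhost",
  "url": "http://localhost:52341/api/log",
  "baseUrl": "http://localhost:52341",
  "endpoints": ["/api/log", "/api/stats", "/health"]
}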

Use Cases:

  • When you need to know the actual port number

  • For cross-device debugging, need to inform other devices of the URL

  • To verify the server has started correctly

analyze_bug

Analyze a bug description and get intelligent suggestions about possible causes.

{ "bugDescription": "Login button doesn't respond when clicked", "files": ["login.js", "auth.js"] }

detect_environment

Automatically detect the runtime environment of a file.

{ "filePath": "src/components/Login.js" }

Returns: Environment type (browser, node, python, etc.) with confidence level
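For example, a detection result might be reported roughly as follows (the shape is an illustrative assumption; only the environment type and confidence level are documented):

{
  "environment": "browser",
  "confidence": 0.9
}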

get_debug_template

Get debug code template for a specific environment. This is the main tool for adding debug logs.

{ "environment": "browser", "logMessage": "Login button clicked", "variables": ["username", "password"], "projectPath": "/Users/username/my-project", "level": "info" }

โš ๏ธ IMPORTANT:

  • The returned code uses HTTP POST, NOT console.log()

  • projectPath is REQUIRED - use the absolute path to your project directory

  • DO NOT modify the generated code to use console.log()

  • Manually insert the code into your file at the appropriate location

Example returned code:

// ==================== DEBUG CODE START ====================
// ⚠️ DO NOT REPLACE WITH console.log()
// This code sends logs via HTTP POST to the debug server
// ==================== DEBUG CODE START ====================
fetch('http://localhost:PORT/api/log', { // PORT is auto-assigned, use get_server_port tool to query
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    projectPath: '/Users/username/my-project',
    timestamp: new Date().toISOString(),
    level: 'info',
    message: 'Login button clicked',
    data: { username, password }
  })
}).catch(err => console.error('[Debug Log Failed]', err));
// ==================== DEBUG CODE END ====================

read_debug_logs

Read debug logs from the project.

{ "projectPath": "/Users/username/my-project", "lastLines": 100 }

Returns: Array of log entries with timestamps, levels, messages, and data
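An illustrative result, assuming each entry mirrors the fields of the POST /api/log payload:

[
  {
    "timestamp": "2025-01-03T10:30:00.000Z",
    "level": "info",
    "message": "Login button clicked",
    "data": { "username": "test", "isLoggedIn": false }
  }
]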

list_debug_blocks

List all debug blocks (code between debug-start and debug-end markers) in project files.

{ "projectPath": "/Users/username/my-project" }

clear_debug_logs

Clear all debug logs from the project log file.

{ "projectPath": "/Users/username/my-project" }

Complete Debugging Session Example

Here's a complete example of how a debugging session should flow:

Initial Problem Report

User: "My React app's login form isn't submitting. When I click the submit button, nothing happens."

Step 1: Analysis Phase

AI: Let me analyze this issue and check your environment.

Uses: analyze_bug and detect_environment

"Based on your description, the possible causes are:

  1. Event listener not attached to the submit button

  2. Form validation preventing submission

  3. Submit handler function has an error

I've detected you're using a Browser environment with React."

Step 2: First Debug Code Injection

AI: I'll add a log to check if the button click event is firing.

Uses: get_debug_template, then inserts the returned code:

// AI inserts this into handleSubmit function
fetch('http://localhost:PORT/api/log', { // PORT is auto-assigned, use get_server_port tool to query
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    projectPath: '/Users/john/my-app',
    timestamp: new Date().toISOString(),
    level: 'info',
    message: 'Submit button clicked',
    data: { formData, isValid }
  })
}).catch(err => console.error('[Debug Log Failed]', err));

Step 3: AI Provides Test Instructions

AI: 💭 My hypothesis: The submit button's click handler might not be attached, or there's an early return preventing the handler from executing.

📋 Test steps:

  1. Restart your React development server

  2. Open the login page in your browser

  3. Fill in any required fields

  4. Click the submit button

  5. Check the browser console for any errors

✅ Expected results: If the handler is working, you should see a debug log entry indicating the button was clicked.

📢 Please report back: What happened when you clicked? Did you see any console errors?

Step 4: User Reports Back

User: "I clicked the button but nothing happened. No errors in console though."

Step 5: AI Analyzes Logs

AI: Let me check the debug logs to see what was captured.

Uses: read_debug_logs(projectPath="/Users/john/my-app")

"I don't see any log entries for the button click. This confirms the click handler isn't being called at all."

Step 6: Second Iteration

AI: The event listener isn't firing. Let me add a log to check if the component is mounting correctly, and also verify the event handler is attached.

Uses: get_debug_template again to add a component-mount log.

AI: 💭 New hypothesis: The component might not be mounting, or the JSX might have an error with the event handler binding.

📋 Test steps:

  1. Refresh the page

  2. Check if you see a "Login component mounted" log

  3. Try clicking submit again

✅ Expected results: You should see the component mount log, which will tell us if the component loaded.

Step 7: User Provides Unexpected Results

User: "I see the mount log! But when I click submit, I get 'handleSubmit is not defined' in the console."

Step 8: AI Investigates Unexpected Result

AI: Ah! That's very helpful. The error "handleSubmit is not defined" means the function isn't being properly bound or imported. Let me check your component code...

[AI reads the file and identifies the issue]

"I found the problem! You have onClick={handleSubmit} but the function is defined as const handleLoginSubmit = (...). The names don't match. Let me fix that."

Step 9: Resolution

AI: I'll correct the function name in your JSX.

[AI fixes the code]

💡 Root cause: Function name mismatch - JSX references handleSubmit but the actual function is named handleLoginSubmit

📋 Final test steps:

  1. Restart the server

  2. Try submitting the form again

  3. Verify it works correctly

✅ Expected results: The form should now submit successfully!

Step 10: Auto-Cleanup (Including Visual Markers)

User: "It works! Thanks!"

AI: [Detects success confirmation and reviews all modifications]

Great! Let me clean up all debug code and temporary changes right away.

Uses: clear_debug_logs, then removes the injected debug code and temporary markers:

"โœ… All debug logs have been cleared

โœ… Debug code removed from:

  • src/components/Login.js (lines 15-25) - HTTP POST logs

  • src/api/auth.js (lines 42-50) - HTTP POST logs

โœ… Temporary visual markers reverted:

  • src/components/Login.js - Button label changed from 'TEST-LOGIN' back to 'Login'

  • src/styles/button.css - Background color changed from 'red' back to 'blue'

  • src/assets/test-image.png - Removed test image reference

Your app is now clean and ready to go!"


Key Improvements in This Example:

  1. ✅ AI always explained its hypothesis before adding code

  2. ✅ AI provided clear, numbered test steps

  3. ✅ AI stated expected results clearly

  4. ✅ AI asked for specific feedback from the user

  5. ✅ AI used read_debug_logs to verify what actually happened

  6. ✅ AI investigated unexpected results (the function name error)

  7. ✅ AI iterated based on data until finding the root cause

  8. ✅ AI auto-cleaned immediately when user confirmed success - NO asking "Should I clean up?"

Success Detection Triggers

AI should auto-cleanup when user says:

  • โœ… "It works!"

  • โœ… "Fixed!"

  • โœ… "Success!"

  • โœ… "Great!"

  • โœ… "Thanks!"

  • โœ… "Perfect!"

  • โœ… "That solved it"

  • โœ… "Working now"

AI should continue debugging when user says:

  • โŒ "Still not working"

  • โŒ "Same error"

  • โŒ "Didn't help"

  • โŒ "Nothing changed"

  • โŒ "Getting a different error"

Temporary Modification Marking

When making ANY temporary changes for debugging purposes, you MUST mark them clearly for cleanup:

Types of Temporary Changes

1. Debug Code (Automatic)

Already wrapped in DEBUG CODE START/END markers:

// ==================== DEBUG CODE START ====================
fetch('http://localhost:PORT/api/log', { // PORT is auto-assigned, use get_server_port tool to query
  ...
});
// ==================== DEBUG CODE END ====================

2. Visual/Test Markers (Manual - MUST ADD)

Button text changes:

// TEMPORARY DEBUG MARKER - WILL BE REVERTED
<button>TEST-LOGIN</button> // Changed from "Login"
// END TEMPORARY DEBUG MARKER

Test images:

// TEMPORARY DEBUG MARKER - WILL BE REVERTED
<img src="/test-debug-image.png" alt="Testing visibility" />
// END TEMPORARY DEBUG MARKER

Color highlights:

/* TEMPORARY DEBUG MARKER - WILL BE REVERTED */
.button { background-color: red; } /* Changed from blue */
/* END TEMPORARY DEBUG MARKER */

Placeholder text:

// TEMPORARY DEBUG MARKER - WILL BE REVERTED
const label = "DEBUG MODE - Button at top"; // Changed from "Submit"
// END TEMPORARY DEBUG MARKER

Test flags:

// TEMPORARY DEBUG MARKER - WILL BE REVERTED
const isDebugging = true; // Will be removed
// END TEMPORARY DEBUG MARKER

Cleanup Checklist

When user confirms success, check and revert:

✅ Debug Code:

  • Search for DEBUG CODE START and remove all blocks

  • Clear debug logs using clear_debug_logs

✅ Visual Markers:

  • Search for TEMPORARY DEBUG MARKER comments

  • Revert button labels to original

  • Remove test images

  • Restore original colors/styles

  • Remove placeholder text

  • Delete test flags/variables

✅ Verify:

  • App looks and behaves exactly as before debugging

  • No debug-related comments left

  • No test assets referenced

Cross-Device Debugging

Debug mobile apps and other devices on your network by configuring the server host.

When to Use Cross-Device Debugging

  • 📱 Mobile Web Apps: Debug mobile browsers from your development machine

  • 📲 React Native Apps: Test on physical devices while capturing logs

  • 🌐 Multiple Devices: Test your web app on phones, tablets, and computers simultaneously

  • 🏠 Local Network Testing: Test on devices without deploying to production

Setup Guide

1. Find Your Computer's LAN IP

Windows:

ipconfig

Look for "IPv4 Address" โ†’ e.g., 192.168.1.100

Mac/Linux:

ifconfig | grep "inet " | grep -v 127.0.0.1
# or
ip addr show | grep "inet " | grep -v 127.0.0.1

Look for "inet" โ†’ e.g., 192.168.1.100

2. Configure MCP Server

Update your MCP client config:

{ "mcpServers": { "debug-mcp": { "command": "node", "args": ["D:/work/debug-mcp/dist/index.js"], "env": { "DEBUG_HOST": "192.168.1.100" // Your LAN IP // Note: Port is automatically assigned, no need to configure DEBUG_PORT } } } }

Or use 0.0.0.0 to accept connections from any device:

{ "env": { "DEBUG_HOST": "0.0.0.0" // Note: Port is automatically assigned, no need to configure DEBUG_PORT } }

3. Configure Firewall (if needed)

Windows:

# Use get_server_port tool to get actual port, then replace PORT
netsh advfirewall firewall add rule name="Debug MCP" dir=in action=allow protocol=TCP localport=PORT

Mac/Linux:

# Usually not needed, but if you have a firewall:
# Use get_server_port tool to get actual port, then replace PORT
sudo ufw allow PORT/tcp

4. Test Connection

From another device on your network:

# Use get_server_port tool to get actual port, then replace PORT
curl http://192.168.1.100:PORT/health

Should return: {"status":"ok"}

Usage Example

Scenario: Debugging a mobile web app

  1. Configure server with LAN IP: DEBUG_HOST=192.168.1.100

  2. AI generates debug code with the correct URL:

    // Use get_server_port tool to get actual port, then replace PORT
    fetch('http://192.168.1.100:PORT/api/log', {
      method: 'POST',
      body: JSON.stringify({ message: 'Button clicked' })
    });
  3. Open your web app on mobile device using: http://192.168.1.100:3000

  4. Test the app on your phone - logs are sent to your computer

  5. AI reads logs from your computer: read_debug_logs(projectPath="/path/to/project")

Host Configuration Options

| DEBUG_HOST Value | Description | Use Case |
| --- | --- | --- |
| localhost | Only local connections | Default, local debugging |
| 0.0.0.0 | Accept from any device | Flexible testing |
| 192.168.1.100 | Your specific LAN IP | Explicit, recommended for mobile |
| 127.0.0.1 | Localhost only | Same as localhost |

Troubleshooting

Cannot connect from mobile device:

  1. Verify devices are on the same network

  2. Use get_server_port tool to query the actual port

  3. Check firewall settings for that port

  4. Confirm the MCP server is running

  5. Test with curl: curl http://YOUR_IP:PORT/health (PORT is the actual port)

Logs not appearing:

  1. Check the generated code uses the correct URL

  2. Verify projectPath is set correctly

  3. Check browser console for network errors

  4. Ensure the device can reach your computer

HTTP API Endpoints

The debug server runs an HTTP API server on an automatically assigned port. Use the get_server_port MCP tool to query the actual port and URL.

POST /api/log

Receives debug log entries from running applications.

Example:

# Use get_server_port tool to query actual port, then replace PORT
curl -X POST http://localhost:PORT/api/log \
  -H "Content-Type: application/json" \
  -d '{
    "projectPath": "/path/to/project",
    "timestamp": "2025-01-03T10:30:00Z",
    "level": "info",
    "message": "Login button clicked",
    "data": { "username": "test", "isLoggedIn": false }
  }'

GET /api/log

Retrieves debug logs.

# Use get_server_port tool to query actual port, then replace PORT
curl "http://localhost:PORT/api/log?last=100&projectPath=/path/to/project"

DELETE /api/log

Clears all debug logs.

# Use get_server_port tool to query actual port, then replace PORT
curl -X DELETE "http://localhost:PORT/api/log?projectPath=/path/to/project"

GET /api/stats

Gets log statistics.

# Use get_server_port tool to query actual port, then replace PORT
curl "http://localhost:PORT/api/stats?projectPath=/path/to/project"

GET /health

Health check endpoint.

# Use get_server_port tool to query actual port, then replace PORT
curl http://localhost:PORT/health

Environment-Specific Examples

Browser / Node.js (18+)

// Uses fetch API
fetch('http://localhost:PORT/api/log', { // PORT is auto-assigned, use get_server_port tool to query
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    projectPath: '/path/to/project',
    timestamp: new Date().toISOString(),
    level: 'info',
    message: 'Debug message',
    data: { variable1, variable2 }
  })
}).catch(err => console.error('[Debug Log Failed]', err));
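React Native is listed among the supported environments but has no example in this section. Since React Native also provides a global fetch, a minimal sketch (not necessarily the exact template the tool generates) looks much like the browser version; on a physical device, replace localhost with your computer's LAN IP as described under Cross-Device Debugging:

// React Native (minimal sketch - same fetch-based pattern as the browser template)
// Use the get_server_port tool to get the actual port; use your LAN IP on a physical device
fetch('http://192.168.1.100:PORT/api/log', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    projectPath: '/path/to/project',
    timestamp: new Date().toISOString(),
    level: 'info',
    message: 'Debug message',
    data: { variable1, variable2 }
  })
}).catch(err => console.warn('[Debug Log Failed]', err));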

Node.js Legacy (v14-17)

// Uses http module
const http = require('http');

const data = JSON.stringify({
  projectPath: '/path/to/project',
  timestamp: new Date().toISOString(),
  level: 'info',
  message: 'Debug message',
  data: { variable1, variable2 }
});

// Use get_server_port tool to get actual port, then replace PORT
const req = http.request('http://localhost:PORT/api/log', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' }
});
req.write(data);
req.end();

Python

import requests
from datetime import datetime

try:
    requests.post(
        'http://localhost:PORT/api/log',  # PORT is auto-assigned, use get_server_port tool
        json={
            'projectPath': '/path/to/project',
            'timestamp': datetime.now().isoformat(),
            'level': 'info',
            'message': 'Debug message',
            'data': {'variable1': variable1, 'variable2': variable2}
        },
        timeout=0.1
    )
except Exception:
    pass  # Silent fail to not break main logic

PHP

$logData = json_encode([
    'projectPath' => '/path/to/project',
    'timestamp' => date('c'),
    'level' => 'info',
    'message' => 'Debug message',
    'data' => ['variable1' => $variable1, 'variable2' => $variable2]
]);

// Use get_server_port tool to get actual port, then replace PORT
$ch = curl_init('http://localhost:PORT/api/log');
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $logData);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 100);
curl_exec($ch);
curl_close($ch);

WeChat Mini Program

wx.request({
  url: 'http://localhost:PORT/api/log', // PORT is auto-assigned, use get_server_port tool
  method: 'POST',
  data: {
    projectPath: '/path/to/project',
    timestamp: new Date().toISOString(),
    level: 'info',
    message: 'Debug message',
    data: { variable1, variable2 }
  },
  fail: (err) => console.error('[Debug Log Failed]', err)
});

Java

// Requires DebugHttpClient.java utility class
// The tool will automatically detect if the utility exists and guide you to add it
try {
    DebugHttpClient.sendLog(
        "http://localhost:PORT/api/log", // PORT is auto-assigned, use get_server_port tool
        "Debug message",
        new java.util.HashMap<String, Object>() {{
            put("variable1", variable1);
            put("variable2", variable2);
        }},
        "info"
    );
} catch (Exception e) {
    // Silent fail - do not interrupt main logic
}

Android

// Android: Network requests must be executed in a background thread
if (android.os.Looper.getMainLooper().getThread() == Thread.currentThread()) {
    // We are on the main thread, execute in background thread
    new Thread(() -> {
        try {
            DebugHttpClient.sendLog(
                "http://localhost:PORT/api/log", // PORT is auto-assigned, use get_server_port tool
                "Debug message",
                new java.util.HashMap<String, Object>() {{
                    put("variable1", variable1);
                    put("variable2", variable2);
                }},
                "info"
            );
        } catch (Exception e) {
            // Silent fail
        }
    }).start();
} else {
    // Already in background thread, execute directly
    try {
        DebugHttpClient.sendLog(
            "http://localhost:PORT/api/log", // PORT is auto-assigned, use get_server_port tool
            "Debug message",
            new java.util.HashMap<String, Object>() {{
                put("variable1", variable1);
                put("variable2", variable2);
            }},
            "info"
        );
    } catch (Exception e) {
        // Silent fail
    }
}

Kotlin

// Kotlin: Use coroutines for async network requests (Android) or direct call (Java)
try {
    // For Android: Use coroutine scope
    // CoroutineScope(Dispatchers.IO).launch {
    //     DebugHttpClient.sendLog(...)
    // }

    // For standard Java: Direct call
    DebugHttpClient.sendLog(
        "http://localhost:PORT/api/log", // PORT is auto-assigned, use get_server_port tool
        "Debug message",
        mapOf("variable1" to variable1, "variable2" to variable2),
        "info"
    )
} catch (e: Exception) {
    // Silent fail - do not interrupt main logic
}

Objective-C (iOS/macOS)

// Use get_server_port tool to get actual port, then replace PORT
NSURL *url = [NSURL URLWithString:@"http://localhost:PORT/api/log"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
[request setHTTPMethod:@"POST"];
[request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
[request setTimeoutInterval:0.1];

NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
[formatter setDateFormat:@"yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"];
[formatter setTimeZone:[NSTimeZone timeZoneWithName:@"UTC"]];
NSString *timestamp = [formatter stringFromDate:[NSDate date]];

NSDictionary *logData = @{
    @"timestamp": timestamp,
    @"level": @"info",
    @"message": @"Debug message",
    @"data": @{@"variable1": variable1, @"variable2": variable2}
};

NSError *error;
NSData *jsonData = [NSJSONSerialization dataWithJSONObject:logData options:0 error:&error];
if (jsonData) {
    [request setHTTPBody:jsonData];
    NSURLSessionDataTask *task = [[NSURLSession sharedSession] dataTaskWithRequest:request
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            // Silent fail - do not interrupt main logic
        }];
    [task resume];
}

Backup Files

Before modifying any file, the tool creates a backup:

original.js.backup.1704288000000

These backups can be used to restore files if needed.
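If a manual rollback is ever needed, copying the backup over the modified file is enough. A minimal Node.js sketch (the file names below are examples, not paths created by the tool):

// restore-backup.js - copy a timestamped backup over the modified file
const fs = require('fs');

// Adjust these names to your actual file and backup
fs.copyFileSync('original.js.backup.1704288000000', 'original.js');
console.log('Restored original.js from its backup');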

Troubleshooting

Debug logs not appearing

  1. Use get_server_port tool to query the actual port

  2. Verify the HTTP server is running: curl http://localhost:PORT/health (PORT is the actual port)

  3. Check if the application can reach the server URL

  4. Look for [Debug Log Failed] errors in the application console

Environment detection fails

  1. Manually specify the environment when requesting a debug template with get_debug_template

  2. Check that file content is not empty

  3. Verify the file extension matches the environment

Debug code not removed

  1. Ensure // debug-start and // debug-end markers are present

  2. Check file permissions

  3. Use list_debug_blocks to see what will be removed

Architecture

src/
├── index.ts            # Entry point (MCP + HTTP servers)
├── mcp/
│   └── tools.ts        # MCP tool implementations
├── http/
│   ├── server.ts       # HTTP API server
│   └── log-handler.ts  # Log file management
├── tools/
│   ├── analyze.ts      # Bug analysis
│   ├── injector.ts     # Debug code injection
│   ├── cleanup.ts      # Debug code removal
│   └── test-steps.ts   # Test step generation
├── adapters/
│   ├── index.ts        # Environment adapters
│   └── detector.ts     # Environment detection
└── utils/
    ├── parser.ts       # Code parsing (AST)
    └── file.ts         # File operations

Contributing

Contributions are welcome! Please feel free to submit issues or pull requests.

License

MIT
