# MCP Implementation Troubleshooting Guide

This comprehensive guide covers common issues, debugging techniques, and advanced configuration options for your MCP (Model Context Protocol) implementation.

## 🚨 Common Issues and Solutions

### Server Startup Issues

#### Issue: "Module not found" errors

**Symptoms:**

```
Error: Cannot find module '@modelcontextprotocol/sdk'
Error: Cannot find module 'express'
```

**Solutions:**

1. **Install dependencies:**

   ```powershell
   npm install
   # or for Python:
   pip install -r requirements.txt
   ```

2. **Check Node.js/Python version:**

   ```powershell
   node --version    # Should be 18+
   python --version  # Should be 3.8+
   ```

3. **Clear cache and reinstall:**

   ```powershell
   # Node.js
   Remove-Item -Recurse -Force node_modules, package-lock.json
   npm install

   # Python
   Remove-Item -Recurse -Force venv
   python -m venv venv
   .\venv\Scripts\Activate.ps1
   pip install -r requirements.txt
   ```

#### Issue: Port already in use

**Symptoms:**

```
Error: listen EADDRINUSE: address already in use :::3000
```

**Solutions:**

1. **Find and kill the process using the port:**

   ```powershell
   # Windows
   netstat -ano | findstr :3000
   taskkill /PID <PID> /F

   # Linux/Mac
   lsof -ti:3000 | xargs kill
   ```

2. **Use a different port:**

   ```powershell
   $env:PORT = 3001
   node server.js
   ```

3. **Check environment variables:**

   ```powershell
   # Make sure .env file doesn't have conflicting ports
   Get-Content .env | Select-String PORT
   ```

#### Issue: Permission denied errors

**Symptoms:**

```
Error: EACCES: permission denied
Error: Access is denied
```

**Solutions:**

1. **Run with appropriate permissions:**

   ```powershell
   # Windows (run as Administrator if needed)

   # Linux/Mac
   sudo chmod +x server.js
   ```

2. **Check file ownership:**

   ```powershell
   # Linux/Mac
   ls -la server.js
   sudo chown $USER:$USER server.js
   ```

3. **Use non-privileged ports (>1024):**

   ```powershell
   $env:PORT = 3000  # Instead of 80 or 443
   ```

### AI Client Integration Issues

#### Issue: Claude Desktop not connecting

**Symptoms:**

- Claude Desktop shows no MCP tools
- Connection timeout errors
- Tools not appearing in chat interface

**Solutions:**

1. **Check configuration file location:**

   ```powershell
   # Windows
   $configPath = "$env:APPDATA\Claude\claude_desktop_config.json"
   Test-Path $configPath

   # If it doesn't exist, create the directory
   New-Item -ItemType Directory -Path "$env:APPDATA\Claude" -Force
   ```

2. **Validate JSON configuration:**

   ```powershell
   # Test JSON validity
   Get-Content $configPath | ConvertFrom-Json
   ```

3. **Check file paths in configuration** (see the validation sketch after this list):

   ```json
   {
     "mcpServers": {
       "custom-mcp-server": {
         "command": "node",
         "args": ["C:\\absolute\\path\\to\\server.js"],
         "cwd": "C:\\absolute\\path\\to\\project"
       }
     }
   }
   ```

4. **Test server manually:**

   ```powershell
   # Test if server starts correctly
   node server.js
   # Should see "MCP Server started with stdio transport"
   ```

5. **Enable debug logging:**

   ```json
   {
     "mcpServers": {
       "custom-mcp-server": {
         "command": "node",
         "args": ["server.js"],
         "env": {
           "DEBUG": "mcp:*",
           "NODE_ENV": "development"
         }
       }
     }
   }
   ```
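Steps 2 and 3 above can be combined into one quick check. The following is a minimal sketch (not part of the server itself) that parses the config and warns about missing script paths; it assumes the default Windows config location and the `mcpServers` layout shown above.

```javascript
// check-claude-config.js - sanity-check claude_desktop_config.json (sketch)
const fs = require('fs');
const path = require('path');

// Assumes the default Windows location; adjust for macOS/Linux installs.
const configPath = path.join(process.env.APPDATA || '', 'Claude', 'claude_desktop_config.json');

const raw = fs.readFileSync(configPath, 'utf8');
const config = JSON.parse(raw); // throws a parse error if the JSON is invalid

for (const [name, server] of Object.entries(config.mcpServers || {})) {
  // The first args entry is typically the script path; relative paths resolve
  // against this script's working directory, so prefer absolute paths.
  const scriptPath = (server.args || [])[0];
  if (scriptPath && !fs.existsSync(scriptPath)) {
    console.warn(`[${name}] script not found: ${scriptPath}`);
  }
  if (server.cwd && !fs.existsSync(server.cwd)) {
    console.warn(`[${name}] cwd not found: ${server.cwd}`);
  }
}

console.log('Parsed servers:', Object.keys(config.mcpServers || {}).join(', ') || '(none)');
```

Run it with `node check-claude-config.js` after every configuration change, then restart Claude Desktop so the updated configuration is picked up.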
#### Issue: ChatGPT integration not working

**Symptoms:**

- API key errors
- Function calls not working
- Timeout errors

**Solutions:**

1. **Verify API key:**

   ```powershell
   # Check if API key is set
   $env:OPENAI_API_KEY

   # Test API key with curl
   curl -H "Authorization: Bearer $env:OPENAI_API_KEY" `
     "https://api.openai.com/v1/models"
   ```

2. **Check function definitions:**

   ```javascript
   // Make sure tool definitions match between MCP and OpenAI format
   const tools = [
     {
       type: 'function',
       function: {
         name: 'read_file', // Must match MCP tool name exactly
         description: 'Read the contents of a file',
         parameters: {
           // Must match MCP tool parameters
         }
       }
     }
   ];
   ```

3. **Enable request logging:**

   ```javascript
   // Add to your ChatGPT proxy
   console.log('Calling OpenAI with:', { messages, tools });
   const completion = await openai.chat.completions.create({...});
   console.log('OpenAI response:', completion);
   ```

### Tool Execution Issues

#### Issue: Tools return errors or don't work

**Symptoms:**

```
Error: Tool execution failed
Error: Invalid arguments
Error: Permission denied
```

**Solutions:**

1. **Test tools individually:**

   ```powershell
   # Use the CLI to test each tool
   node examples/generic-client/cli.js
   ```

2. **Check tool arguments:**

   ```javascript
   // Add validation in your server
   async function readFile(filePath) {
     if (!filePath) {
       throw new Error('File path is required');
     }
     if (!fs.existsSync(filePath)) {
       throw new Error(`File not found: ${filePath}`);
     }
     // ... rest of implementation
   }
   ```

3. **Enable tool-level debugging:**

   ```javascript
   // Add logging to each tool
   console.log(`Executing tool: ${name} with args:`, args);
   try {
     const result = await toolFunction(args);
     console.log(`Tool ${name} result:`, result);
     return result;
   } catch (error) {
     console.error(`Tool ${name} error:`, error);
     throw error;
   }
   ```

4. **File permission issues:**

   ```powershell
   # Check file permissions
   Get-Acl "path\to\file"

   # Grant permissions if needed
   icacls "path\to\file" /grant "$env:USERNAME":F
   ```

## 🔧 Debugging Techniques

### Enable Debug Logging

#### Node.js Server

```javascript
// Add to server.js
const DEBUG = process.env.DEBUG === 'true' || process.env.NODE_ENV === 'development';

function debug(message, ...args) {
  if (DEBUG) {
    console.log('[DEBUG]', new Date().toISOString(), message, ...args);
  }
}

// Use throughout your code
debug('Tool called:', toolName, args);
debug('Server starting on port:', port);
```

#### Python Server

```python
# Add to server.py
import logging
import os

# Configure logging
log_level = os.getenv('LOG_LEVEL', 'INFO').upper()
logging.basicConfig(
    level=getattr(logging, log_level),
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# Use throughout your code
logger.debug(f'Tool called: {tool_name} with args: {args}')
logger.info(f'Server starting on port: {port}')
```

### Network Debugging

#### Check server connectivity

```powershell
# Test if server is responding
curl http://localhost:3000/health

# Test with verbose output
curl -v http://localhost:3000/health

# Test from another machine
curl -v http://YOUR_IP:3000/health
```

#### Monitor network traffic

```powershell
# Windows - use Wireshark or built-in tools
netstat -an | findstr 3000

# Check firewall
Get-NetFirewallRule | Where-Object {$_.DisplayName -like "*3000*"}
```
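The same connectivity check can also be scripted, so it can run from CI or a scheduled job. Here is a minimal sketch using the built-in `fetch` available in Node 18+; the `/health` endpoint and port are the ones used elsewhere in this guide.

```javascript
// health-probe.js - scripted version of the curl check (Node 18+)
const url = process.env.HEALTH_URL || 'http://localhost:3000/health';

async function probe() {
  try {
    // AbortSignal.timeout aborts the request if the server hangs.
    const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
    const body = await res.json();
    console.log(`HTTP ${res.status}`, body);
    process.exitCode = res.ok ? 0 : 1;
  } catch (err) {
    console.error('Health check failed:', err.message);
    process.exitCode = 1;
  }
}

probe();
```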
### Performance Debugging

#### Monitor resource usage

```powershell
# Windows
Get-Process node | Select-Object CPU, WorkingSet, ProcessName

# Or use built-in Node.js monitoring
node --inspect server.js
# Then open chrome://inspect in Chrome
```

#### Add performance timing

```javascript
// Add to critical paths
const start = Date.now();
// ... your code ...
const duration = Date.now() - start;
console.log(`Operation took ${duration}ms`);
```

### Memory Debugging

#### Monitor memory usage

```javascript
// Add to server.js
setInterval(() => {
  const used = process.memoryUsage();
  console.log('Memory usage:', {
    rss: Math.round(used.rss / 1024 / 1024) + ' MB',
    heapTotal: Math.round(used.heapTotal / 1024 / 1024) + ' MB',
    heapUsed: Math.round(used.heapUsed / 1024 / 1024) + ' MB',
  });
}, 30000); // Every 30 seconds
```

## ⚙️ Advanced Configuration

### Environment Variables

#### Complete .env reference

```env
# Server Configuration
NODE_ENV=development
PORT=3000
HOST=0.0.0.0

# Logging
LOG_LEVEL=info
LOG_FORMAT=json
DEBUG=false

# Security
ALLOW_EXECUTE_COMMANDS=false
ALLOWED_FILE_EXTENSIONS=.txt,.md,.json,.js,.py
MAX_FILE_SIZE=10485760
MAX_EXECUTION_TIME=30000

# CORS
ALLOWED_ORIGINS=http://localhost:3000,http://127.0.0.1:3000

# Rate Limiting
RATE_LIMIT_WINDOW_MS=900000
RATE_LIMIT_MAX_REQUESTS=100

# API Keys (if needed)
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here

# Database (if using)
DATABASE_URL=your_database_url

# External Services
WEATHER_API_KEY=your_key_here
NEWS_API_KEY=your_key_here

# Monitoring
SENTRY_DSN=your_sentry_dsn
PROMETHEUS_PORT=9090
```

### Advanced Server Configuration

#### Enhanced security settings

```javascript
// Add to server.js
const rateLimit = require('express-rate-limit');
const helmet = require('helmet');
const validator = require('validator');

// Security middleware
app.use(helmet());

// Rate limiting
const limiter = rateLimit({
  windowMs: parseInt(process.env.RATE_LIMIT_WINDOW_MS) || 15 * 60 * 1000, // 15 minutes
  max: parseInt(process.env.RATE_LIMIT_MAX_REQUESTS) || 100,
  message: 'Too many requests from this IP, please try again later.'
});
app.use('/mcp', limiter);

// Input validation
function validateFilePath(path) {
  if (!path || typeof path !== 'string') {
    throw new Error('Invalid file path');
  }

  // Prevent directory traversal
  if (path.includes('..') || path.includes('~')) {
    throw new Error('Directory traversal not allowed');
  }

  // Check allowed extensions (filter out empty entries when the variable is unset)
  const allowedExtensions = (process.env.ALLOWED_FILE_EXTENSIONS || '')
    .split(',')
    .filter(Boolean);
  if (allowedExtensions.length > 0) {
    const ext = require('path').extname(path);
    if (!allowedExtensions.includes(ext)) {
      throw new Error(`File extension ${ext} not allowed`);
    }
  }

  return true;
}
```

#### Performance optimizations

```javascript
// Add compression
const compression = require('compression');
app.use(compression());

// Connection pooling for databases
const { Pool } = require('pg');
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
});

// Caching
const NodeCache = require('node-cache');
const cache = new NodeCache({ stdTTL: 600 }); // 10 minute TTL

function getCachedResult(key, fetchFunction) {
  const cached = cache.get(key);
  if (cached) return Promise.resolve(cached);

  return fetchFunction().then(result => {
    cache.set(key, result);
    return result;
  });
}
```
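As a usage example, `getCachedResult` can wrap an expensive tool call so repeated requests with the same arguments hit the cache. This is a hypothetical sketch: `fetchWeather`, `getWeatherTool`, and the API URL are illustrative stand-ins, not part of the guide's server.

```javascript
// Hypothetical example: cache an expensive external lookup per city.
// Relies on the `cache` / `getCachedResult` helpers defined above.
async function fetchWeather(city) {
  // Placeholder URL - substitute your real weather provider.
  const res = await fetch(`https://api.example.com/weather?city=${encodeURIComponent(city)}`);
  if (!res.ok) throw new Error(`Weather API returned ${res.status}`);
  return res.json();
}

async function getWeatherTool(args) {
  // Key the cache on the tool name plus its arguments.
  const key = `get_weather:${args.city}`;
  return getCachedResult(key, () => fetchWeather(args.city));
}
```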
### Monitoring and Observability

#### Prometheus metrics

```javascript
// Add to server.js
const prometheus = require('prom-client');

// Create metrics
const httpRequestDuration = new prometheus.Histogram({
  name: 'http_request_duration_ms',
  help: 'Duration of HTTP requests in ms',
  labelNames: ['method', 'route', 'status_code']
});

const toolCalls = new prometheus.Counter({
  name: 'mcp_tool_calls_total',
  help: 'Total number of MCP tool calls',
  labelNames: ['tool_name', 'status']
});

// Middleware to collect metrics
app.use((req, res, next) => {
  const start = Date.now();
  res.on('finish', () => {
    const duration = Date.now() - start;
    httpRequestDuration
      .labels(req.method, req.route?.path || req.path, res.statusCode)
      .observe(duration);
  });
  next();
});

// Metrics endpoint
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', prometheus.register.contentType);
  res.end(await prometheus.register.metrics());
});
```

#### Health check endpoint

```javascript
// Enhanced health check
app.get('/health', async (req, res) => {
  const health = {
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
    memory: process.memoryUsage(),
    version: require('./package.json').version,
    environment: process.env.NODE_ENV,
    dependencies: {}
  };

  // Check database connection
  if (process.env.DATABASE_URL) {
    try {
      await pool.query('SELECT 1');
      health.dependencies.database = 'healthy';
    } catch (error) {
      health.dependencies.database = 'unhealthy';
      health.status = 'degraded';
    }
  }

  // Check external APIs
  if (process.env.OPENAI_API_KEY) {
    try {
      const response = await fetch('https://api.openai.com/v1/models', {
        headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
        signal: AbortSignal.timeout(5000) // native fetch has no `timeout` option
      });
      health.dependencies.openai = response.ok ? 'healthy' : 'unhealthy';
    } catch (error) {
      health.dependencies.openai = 'unreachable';
      health.status = 'degraded';
    }
  }

  res.status(health.status === 'healthy' ? 200 : 503).json(health);
});
```

### Database Integration

#### PostgreSQL setup

```javascript
// database.js
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: process.env.NODE_ENV === 'production' ? { rejectUnauthorized: false } : false
});

// Tool execution logging
async function logToolExecution(toolName, args, result, executionTime, userId = null) {
  const query = `
    INSERT INTO tool_executions (tool_name, arguments, result, execution_time, user_id, created_at)
    VALUES ($1, $2, $3, $4, $5, NOW())
  `;

  await pool.query(query, [
    toolName,
    JSON.stringify(args),
    JSON.stringify(result),
    executionTime,
    userId
  ]);
}

// Usage analytics
async function getToolUsageStats(days = 7) {
  // Coerce to an integer before interpolating to avoid SQL injection
  const windowDays = Number.parseInt(days, 10) || 7;
  const query = `
    SELECT
      tool_name,
      COUNT(*) as executions,
      AVG(execution_time) as avg_time,
      MAX(execution_time) as max_time
    FROM tool_executions
    WHERE created_at >= NOW() - INTERVAL '${windowDays} days'
    GROUP BY tool_name
    ORDER BY executions DESC
  `;

  const result = await pool.query(query);
  return result.rows;
}
```

#### Database schema

```sql
-- Create tables for logging and analytics
CREATE TABLE tool_executions (
  id SERIAL PRIMARY KEY,
  tool_name VARCHAR(255) NOT NULL,
  arguments JSONB,
  result JSONB,
  execution_time INTEGER,
  user_id VARCHAR(255),
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX idx_tool_executions_tool_name ON tool_executions(tool_name);
CREATE INDEX idx_tool_executions_created_at ON tool_executions(created_at);
CREATE INDEX idx_tool_executions_user_id ON tool_executions(user_id);

-- User sessions table
CREATE TABLE user_sessions (
  id SERIAL PRIMARY KEY,
  session_id VARCHAR(255) UNIQUE NOT NULL,
  user_id VARCHAR(255),
  client_type VARCHAR(100),
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  last_activity TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```
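To tie the logging helper into the server, it can wrap whatever function actually dispatches a tool call. A minimal sketch, assuming a hypothetical `executeTool(name, args)` dispatch function in your server alongside the `logToolExecution` helper above:

```javascript
// Wrap tool execution with timing + database logging (sketch).
// `executeTool` stands in for your server's existing tool dispatch function.
async function executeToolWithLogging(toolName, args, userId = null) {
  const start = Date.now();
  try {
    const result = await executeTool(toolName, args);
    await logToolExecution(toolName, args, result, Date.now() - start, userId);
    return result;
  } catch (error) {
    // Log failures too, so getToolUsageStats reflects real traffic.
    await logToolExecution(toolName, args, { error: error.message }, Date.now() - start, userId);
    throw error;
  }
}
```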
ports: - "3000:3000" environment: - NODE_ENV=production - DATABASE_URL=postgresql://user:password@postgres:5432/mcpdb - REDIS_URL=redis://redis:6379 depends_on: - postgres - redis volumes: - ./logs:/app/logs restart: unless-stopped postgres: image: postgres:15-alpine environment: - POSTGRES_DB=mcpdb - POSTGRES_USER=user - POSTGRES_PASSWORD=password volumes: - postgres_data:/var/lib/postgresql/data restart: unless-stopped redis: image: redis:7-alpine command: redis-server --appendonly yes volumes: - redis_data:/data restart: unless-stopped prometheus: image: prom/prometheus:latest ports: - "9090:9090" volumes: - ./prometheus.yml:/etc/prometheus/prometheus.yml - prometheus_data:/prometheus command: - '--config.file=/etc/prometheus/prometheus.yml' - '--storage.tsdb.path=/prometheus' - '--web.console.libraries=/etc/prometheus/console_libraries' - '--web.console.templates=/etc/prometheus/consoles' restart: unless-stopped grafana: image: grafana/grafana:latest ports: - "3001:3000" environment: - GF_SECURITY_ADMIN_PASSWORD=admin volumes: - grafana_data:/var/lib/grafana restart: unless-stopped volumes: postgres_data: redis_data: prometheus_data: grafana_data: ``` #### Kubernetes deployment ```yaml # k8s-deployment.yml apiVersion: apps/v1 kind: Deployment metadata: name: mcp-server spec: replicas: 3 selector: matchLabels: app: mcp-server template: metadata: labels: app: mcp-server spec: containers: - name: mcp-server image: your-registry/mcp-server:latest ports: - containerPort: 3000 env: - name: NODE_ENV value: "production" - name: DATABASE_URL valueFrom: secretKeyRef: name: mcp-secrets key: database-url resources: requests: memory: "256Mi" cpu: "250m" limits: memory: "512Mi" cpu: "500m" livenessProbe: httpGet: path: /health port: 3000 initialDelaySeconds: 30 periodSeconds: 10 readinessProbe: httpGet: path: /health port: 3000 initialDelaySeconds: 5 periodSeconds: 5 --- apiVersion: v1 kind: Service metadata: name: mcp-server-service spec: selector: app: mcp-server ports: - protocol: TCP port: 80 targetPort: 3000 type: LoadBalancer ``` ## šŸ“Š Performance Tuning ### Node.js Optimization ```javascript // Use cluster mode for multiple processes const cluster = require('cluster'); const numCPUs = require('os').cpus().length; if (cluster.isMaster) { console.log(`Master ${process.pid} is running`); // Fork workers for (let i = 0; i < numCPUs; i++) { cluster.fork(); } cluster.on('exit', (worker, code, signal) => { console.log(`Worker ${worker.process.pid} died`); cluster.fork(); // Restart worker }); } else { // Start server in worker process require('./server.js'); } ``` ### Memory Management ```javascript // Add memory monitoring and cleanup const MAX_MEMORY_USAGE = 512 * 1024 * 1024; // 512MB setInterval(() => { const memUsage = process.memoryUsage(); if (memUsage.heapUsed > MAX_MEMORY_USAGE) { console.warn('High memory usage detected:', memUsage); // Force garbage collection if available if (global.gc) { global.gc(); } // Clear caches cache.flushAll(); } }, 60000); // Check every minute ``` ### Load Testing ```javascript // load-test.js - Advanced load testing const autocannon = require('autocannon'); const instance = autocannon({ url: 'http://localhost:3000', connections: 10, // default pipelining: 1, // default duration: 10, // default requests: [ { method: 'GET', path: '/health' }, { method: 'POST', path: '/mcp/tools/get_system_info', headers: { 'content-type': 'application/json' }, body: JSON.stringify({ arguments: {} }) } ] }, (err, result) => { if (err) { console.error(err); } else { 
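When running in cluster mode or behind the Kubernetes probes above, it also helps to shut down cleanly so in-flight requests finish during restarts. A minimal sketch, assuming `server` is the `http.Server` returned by `app.listen(...)` and `pool` is the database pool from earlier:

```javascript
// Graceful shutdown: stop accepting connections, then release resources.
function shutdown(signal) {
  console.log(`${signal} received, shutting down...`);
  server.close(async () => {
    try {
      if (typeof pool !== 'undefined') await pool.end(); // close DB connections
    } finally {
      process.exit(0);
    }
  });
  // Safety net: force-exit if connections refuse to drain.
  setTimeout(() => process.exit(1), 10000).unref();
}

process.on('SIGTERM', () => shutdown('SIGTERM')); // sent by Kubernetes / docker stop
process.on('SIGINT', () => shutdown('SIGINT'));   // Ctrl+C in development
```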
### Load Testing

```javascript
// load-test.js - Advanced load testing
const autocannon = require('autocannon');

const instance = autocannon({
  url: 'http://localhost:3000',
  connections: 10, // default
  pipelining: 1,   // default
  duration: 10,    // default
  requests: [
    {
      method: 'GET',
      path: '/health'
    },
    {
      method: 'POST',
      path: '/mcp/tools/get_system_info',
      headers: {
        'content-type': 'application/json'
      },
      body: JSON.stringify({ arguments: {} })
    }
  ]
}, (err, result) => {
  if (err) {
    console.error(err);
  } else {
    console.log(result);
  }
});

// Handle termination
process.once('SIGINT', () => {
  instance.stop();
});
```

## 🆘 Getting Help

### Debugging Checklist

1. **✅ Basic Setup**
   - [ ] Dependencies installed correctly
   - [ ] Environment variables set
   - [ ] Server starts without errors
   - [ ] Health endpoint responds

2. **✅ Network Connectivity**
   - [ ] Port is not blocked by firewall
   - [ ] Server is binding to correct interface
   - [ ] AI client can reach the server
   - [ ] No proxy interference

3. **✅ Tool Functionality**
   - [ ] Tools are registered correctly
   - [ ] Tool arguments are validated
   - [ ] File permissions are correct
   - [ ] Error handling is implemented

4. **✅ AI Integration**
   - [ ] Configuration files are valid
   - [ ] API keys are correct
   - [ ] Tool definitions match between systems
   - [ ] Transport protocol is working

### Log Files to Check

1. **Application logs**
   - Console output
   - Log files in `logs/` directory
   - PM2 logs (if using PM2)

2. **System logs**
   - Windows Event Viewer
   - Linux: `/var/log/messages`, `/var/log/syslog`
   - Docker logs: `docker logs <container_id>`

3. **AI Client logs**
   - Claude Desktop console
   - Browser developer tools
   - OpenAI API logs

### Community Resources

- **GitHub Issues**: Report bugs and feature requests
- **Documentation**: Check the latest documentation
- **Examples**: Review working examples in the repository
- **Stack Overflow**: Search for similar issues

### Creating a Support Request

When asking for help, include:

1. **Environment details:**

   ```
   OS: Windows 11 / macOS 13 / Ubuntu 22.04
   Node.js: v18.17.0
   Python: 3.11.0
   AI Client: Claude Desktop 0.5.0
   ```

2. **Error messages:**
   - Full error text
   - Stack traces
   - Log snippets

3. **Configuration:**
   - Relevant config files (with sensitive data removed)
   - Environment variables
   - Command line arguments

4. **Steps to reproduce:**
   - What you did
   - What you expected
   - What actually happened

---

*This troubleshooting guide is continuously updated. If you encounter an issue not covered here, please contribute by submitting a pull request with the solution.*
