
MCP Prompt Enhancer

by soniankur948

A Model Context Protocol (MCP) server for intelligent prompt preprocessing with adaptive context management.

Overview

The MCP Prompt Enhancer intelligently analyzes your codebase and enhances LLM prompts with relevant context to produce higher quality AI responses. It features:

  • Two-tier Context System: Initial full project analysis combined with task-focused context tracking
  • Adaptive Context Intelligence: Only provides relevant context based on the current task
  • Context Caching: Reuses analyzed project information instead of reprocessing
  • Task Detection: Automatically infers task type from prompts (page creation, debugging, refactoring, etc.)
  • Git Integration: Includes recent changes and commit history when relevant
  • Framework Detection: Auto-detects React, Vue, Next.js, etc. and includes relevant patterns

Installation

```shell
# Install globally
npm install -g mcp-prompt-enhancer

# Or install in your project
npm install --save-dev mcp-prompt-enhancer
```

Quick Start

The MCP Prompt Enhancer now uses STDIO for communication instead of a network server:

```shell
# Process a prompt enhancement request
echo '{"action":"enhance_prompt","params":{"prompt":"Fix errors in the app.js file","taskType":"debugging"}}' | node dist/index.js

# Refresh project context
echo '{"action":"set_project_context","params":{"forceRefresh":true}}' | node dist/index.js
```

The server analyzes the current directory by default and responds with JSON output.
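For the first command above, a successful response envelope looks roughly like the following (field values here are illustrative; the exact shape is documented in the API section):

```json
{
  "status": "success",
  "data": {
    "enhancedPrompt": "# Context Information\n\n...\n\n# Original Prompt\n\nFix errors in the app.js file",
    "taskType": "debugging",
    "contextAdded": true
  }
}
```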

Usage with Claude

Because the MCP Prompt Enhancer communicates over STDIO, integrating it with Claude requires no network dependencies, which keeps the integration process simple.

Using with Claude

To use the MCP Prompt Enhancer with Claude:

  1. Start the server in a background process:

```shell
node dist/index.js
```

  2. Send JSON requests to the server via STDIO:

```shell
echo '{"action":"enhance_prompt","params":{"prompt":"Create a new React component for user profile","taskType":"creation"}}' | node dist/index.js
```

  3. Parse the JSON response to extract the enhanced prompt.
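Step 3 can be sketched with a small helper; `extractEnhancedPrompt` is an illustrative name, not part of the package, and it assumes the `{ status, data, error }` response envelope documented in the API section:

```javascript
// Illustrative helper (not part of the package): pull the enhanced prompt
// out of the server's JSON response envelope ({ status, data, error }).
function extractEnhancedPrompt(raw) {
  const res = JSON.parse(raw);
  if (res.status !== 'success') {
    throw new Error(res.error || 'Prompt enhancement failed');
  }
  return res.data.enhancedPrompt;
}
```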

Claude Integration Example

Here's how to integrate with Claude using a wrapper script:

```javascript
// claude-mcp-enhancer.js
const { spawn } = require('child_process');
const path = require('path');

async function enhancePrompt(prompt, options = {}) {
  return new Promise((resolve, reject) => {
    const serverPath = path.join(__dirname, 'dist/index.js');
    const server = spawn('node', [serverPath]);
    let output = '';

    server.stdout.on('data', (data) => {
      output += data.toString();
    });

    server.stderr.on('data', (data) => {
      console.error(`Server error: ${data}`);
    });

    server.on('close', (code) => {
      if (code !== 0) {
        reject(new Error(`Server exited with code ${code}`));
        return;
      }
      try {
        // Find and parse JSON in the output
        const jsonStart = output.indexOf('{');
        if (jsonStart >= 0) {
          const jsonStr = output.substring(jsonStart);
          const result = JSON.parse(jsonStr);
          resolve(result);
        } else {
          reject(new Error('No valid JSON found in server output'));
        }
      } catch (error) {
        reject(error);
      }
    });

    // Prepare request
    const request = {
      action: 'enhance_prompt',
      params: {
        prompt,
        ...options
      }
    };

    // Send request to server
    server.stdin.write(JSON.stringify(request) + '\n');
    server.stdin.end();
  });
}

// Example usage:
// enhancePrompt("Fix the bug in the login form", { taskType: "debugging" })
//   .then(result => console.log(result.data.enhancedPrompt))
//   .catch(error => console.error(error));

module.exports = { enhancePrompt };
```

This wrapper can be used to integrate the MCP Prompt Enhancer with Claude or any other system.

API

STDIO JSON Protocol

The server accepts JSON requests via standard input and responds with JSON via standard output.

Request Format

```json
{
  "action": "action_name",
  "params": {
    // Action-specific parameters
  }
}
```

Response Format

```json
{
  "status": "success|error",
  "data": {
    // Response data (for success)
  },
  "error": "Error message (for error)"
}
```

Available Actions

enhance_prompt

Enhances a prompt with relevant project context.

Request Parameters:

```json
{
  "action": "enhance_prompt",
  "params": {
    "prompt": "Your original prompt here",
    "taskType": "debugging",         // Optional
    "focusFiles": ["path/to/file"],  // Optional
    "includeFullContext": false      // Optional
  }
}
```

Response (Success):

```json
{
  "status": "success",
  "data": {
    "enhancedPrompt": "# Context Information\n\n...\n\n# Original Prompt\n\nYour original prompt here",
    "taskType": "debugging",
    "focusArea": "backend",
    "contextAdded": true,
    "contextSize": 250
  }
}
```

set_project_context

Manually refreshes or sets the project context.

Request Parameters:

```json
{
  "action": "set_project_context",
  "params": {
    "projectPath": "/path/to/project",  // Optional
    "forceRefresh": true                // Optional
  }
}
```

Response (Success):

```json
{
  "status": "success",
  "data": {
    "message": "Project context updated for project-name",
    "projectName": "project-name",
    "frameworks": ["react", "express"],
    "timestamp": "2025-07-10T05:44:24.521Z"
  }
}
```

Configuration

Create an mcp-prompt-enhancer.config.js file in your project root:

```javascript
module.exports = {
  contextCacheTTL: 900,    // 15 minutes in seconds
  maxContextSize: 10000,   // Maximum tokens for context
  ignorePatterns: ['node_modules', 'dist', '.git', 'build', 'coverage', '*.log'],
  logLevel: 'info'         // 'debug', 'info', 'warn', 'error'
};
```

You can also use environment variables:

  • MCP_LOG_LEVEL: Set the logging level
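As a sketch of how the environment variable might interact with the config file, the resolution below assumes a precedence of environment variable, then config file, then a default of 'info' (this order is an assumption, not documented behavior):

```javascript
// Assumed precedence (not confirmed by the docs): MCP_LOG_LEVEL env var,
// then the config file's logLevel, then a default of 'info'.
const LOG_LEVELS = ['debug', 'info', 'warn', 'error'];

function resolveLogLevel(env, config) {
  const fromEnv = (env.MCP_LOG_LEVEL || '').toLowerCase();
  if (LOG_LEVELS.includes(fromEnv)) return fromEnv;
  if (config && LOG_LEVELS.includes(config.logLevel)) return config.logLevel;
  return 'info';
}
```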

How It Works

  1. Project Analysis: Scans your project structure, detects frameworks, and extracts key patterns
  2. Task Detection: Analyzes prompts to determine the type of work being done
  3. Context Selection: Intelligently selects relevant files and information based on the task
  4. Prompt Enhancement: Combines the original prompt with targeted context
  5. Continuous Learning: Updates its understanding as you work on different parts of the codebase
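Step 2 (task detection) can be illustrated with a simple keyword heuristic. This is a hypothetical sketch; the package's actual classifier may work differently:

```javascript
// Hypothetical sketch of task detection via keyword matching; the real
// implementation may use a different strategy.
const TASK_KEYWORDS = {
  debugging:   ['fix', 'bug', 'error', 'crash', 'broken'],
  refactoring: ['refactor', 'clean up', 'simplify', 'rename'],
  creation:    ['create', 'add', 'new', 'build', 'implement']
};

function detectTaskType(prompt) {
  const text = prompt.toLowerCase();
  for (const [taskType, keywords] of Object.entries(TASK_KEYWORDS)) {
    if (keywords.some((kw) => text.includes(kw))) return taskType;
  }
  return 'general'; // fall back when no keyword matches
}
```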

See the architecture diagram for a detailed visual representation of how the MCP Prompt Enhancer works with Cline.

Development

```shell
# Clone the repository
git clone https://github.com/yourusername/mcp-prompt-enhancer.git
cd mcp-prompt-enhancer

# Install dependencies
npm install

# Build the project
npm run build

# Run tests
npm test

# Start in development mode
npm run dev
```

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.
