
Coding Prompt Engineer MCP Server

by hireshBrem

rewrite_coding_prompt

Improve AI IDE responses by restructuring coding prompts with language-specific context and clear requirements before submission.

Instructions

Rewrites user's coding prompts before passing to AI IDE (e.g. Cursor AI) to get the best results from AI IDE.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | The raw user's prompt that needs rewriting | (none) |
| language | Yes | The programming language of the code | (none) |
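As a sketch of how a client might supply these parameters, here is a hypothetical `tools/call` payload. The field names follow the schema above; the surrounding request shape is assumed from the MCP convention, and the prompt text is illustrative only.

```typescript
// Hypothetical example: building a tools/call payload for rewrite_coding_prompt.
// Both arguments are required; there are no defaults in the schema.
const args = {
  prompt: "make my api endpoint faster", // raw prompt to be rewritten
  language: "typescript",                // programming language of the code
};

const request = {
  method: "tools/call",
  params: { name: "rewrite_coding_prompt", arguments: args },
};

console.log(JSON.stringify(request.params.arguments));
// {"prompt":"make my api endpoint faster","language":"typescript"}
```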

Implementation Reference

  • index.ts:68-101 (handler)
    Core implementation of the rewrite_coding_prompt tool: uses Anthropic Claude model to engineer an optimal prompt for Cursor AI based on user input.
    async function rewriteCodingPrompt(prompt: string, language: string = "typescript") {
  // Check if API key is available; fail fast with a clear error if not
  if (!process.env.ANTHROPIC_API_KEY) {
    throw new Error("ANTHROPIC_API_KEY environment variable is not set.");
  }
      
      // Initialize Anthropic model
      const model = new ChatAnthropic({
        apiKey: process.env.ANTHROPIC_API_KEY,
        temperature: 0.2, // Low temperature for more consistent, structured output
        modelName: "claude-3-7-sonnet-20250219", // Using a fast, cost-effective model
      });
      
      // Create the system prompt for Claude
      const systemPromptText = `You are an expert prompt engineer specializing in creating optimal prompts for code-related AI tasks.
    Your job is to take a user's raw Cursor AI's prompt and transform it into a well-structured, detailed prompt that will get the best results from Cursor AI.
    
    Your output should ONLY be the edited prompt that will get the best results from Cursor AI or any IDE with no additional commentary, explanations, or metadata.`;
    
      // Create message objects
      const systemMessage = new SystemMessage(systemPromptText);
      const userMessage = new HumanMessage(`Here is my raw prompt: \n\n${prompt}\n\nPlease format this into an optimal prompt for Cursor AI. The programming language is ${language}.`);
      
      // Call the model with the messages
      const response = await model.invoke([systemMessage, userMessage]);
      
      // Ensure we have a valid response
      if (!response || typeof response.content !== 'string') {
        throw new Error('Invalid response from Claude API');
      }
    
      // Return the formatted prompt
      return response.content;
    }
  • JSON schema defining the input parameters for the rewrite_coding_prompt tool.
    inputSchema: {
      type: "object",
      properties: {
        prompt: {
          type: "string",
          description: "The raw user's prompt that needs rewriting"
        },
        language: {
          type: "string",
          description: "The programming language of the code"
        }
      },
      required: ["prompt", "language"],
      title: "rewrite_coding_promptArguments"
    }
  • index.ts:21-39 (registration)
    Tool object definition and registration including name, description, and schema.
    const REWRITE_CODING_PROMPT_TOOL: Tool = {
      name: "rewrite_coding_prompt",
      description: "Rewrites user's coding prompts before passing to AI IDE (e.g. Cursor AI) to get the best results from AI IDE.",
      inputSchema: {
        type: "object",
        properties: {
          prompt: {
            type: "string",
            description: "The raw user's prompt that needs rewriting"
          },
          language: {
            type: "string",
            description: "The programming language of the code"
          }
        },
        required: ["prompt", "language"],
        title: "rewrite_coding_promptArguments"
      }
    };
  • index.ts:104-106 (registration)
    Registers the list tools handler that exposes the rewrite_coding_prompt tool.
    server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [REWRITE_CODING_PROMPT_TOOL],  
    }));
  • index.ts:117-127 (registration)
    Dispatch case in CallToolRequestSchema handler that invokes the rewriteCodingPrompt function.
    case "rewrite_coding_prompt": {
      if (!isPromptFormatArgs(args)) {
        throw new Error("Invalid arguments for rewrite_coding_prompt");
      }
      const { prompt, language } = args;
      const rewrittenPrompt = await rewriteCodingPrompt(prompt, language);
      return {
        content: [{ type: "text", text: rewrittenPrompt }],
        isError: false,
      };
    }
  • Type guard helper function for validating arguments to rewrite_coding_prompt.
    function isPromptFormatArgs(args: unknown): args is { 
      prompt: string; 
      language: string;
    } {
      return (
        typeof args === "object" &&
        args !== null &&
        "prompt" in args &&
        typeof (args as { prompt: string }).prompt === "string" &&
        "language" in args &&
        typeof (args as { language: string }).language === "string"
      );
    }
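To illustrate how the guard behaves, here is a standalone sketch that restates the function above and runs it against a valid argument object, one with a missing field, and a non-object value. The sample inputs are hypothetical.

```typescript
// Standalone copy of the type guard above, exercised with sample inputs.
function isPromptFormatArgs(args: unknown): args is {
  prompt: string;
  language: string;
} {
  return (
    typeof args === "object" &&
    args !== null &&
    "prompt" in args &&
    typeof (args as { prompt: unknown }).prompt === "string" &&
    "language" in args &&
    typeof (args as { language: unknown }).language === "string"
  );
}

console.log(isPromptFormatArgs({ prompt: "fix this loop", language: "python" })); // true
console.log(isPromptFormatArgs({ prompt: "fix this loop" })); // false (language missing)
console.log(isPromptFormatArgs(null)); // false (not an object)
```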
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool 'rewrites' prompts but doesn't explain how the rewriting works (e.g., formatting changes, clarity improvements, or specific optimizations), what the output looks like, or any constraints like rate limits or error conditions. This leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose and goal. It is front-loaded with the main action and avoids unnecessary detail. It could be slightly more informative by explicitly mentioning the parameters or output, but overall it is concise and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (rewriting prompts for AI IDEs), the description is insufficient. With no annotations and no output schema, it fails to explain key aspects like the rewriting process, output format, or any behavioral traits. The description alone doesn't provide enough context for an AI agent to understand how to effectively use or interpret results from this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with clear documentation for both parameters ('prompt' and 'language'). The description doesn't add any additional meaning or context beyond what the schema provides, such as examples or formatting tips. With high schema coverage, the baseline score of 3 is appropriate, as the schema handles the parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Rewrites user's coding prompts before passing to AI IDE (e.g. Cursor AI) to get the best results from AI IDE.' It specifies the verb ('rewrites'), resource ('user's coding prompts'), and goal ('to get the best results from AI IDE'). However, without sibling tools, it cannot demonstrate differentiation from alternatives, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus other methods or tools. It states the tool's function but offers no context about prerequisites, alternatives, or specific scenarios where it's most effective. This lack of usage instructions limits its practical utility for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
