MCP Prompt Optimizer

by grandinh

optimize_prompt

Analyze and optimize AI prompts to improve clarity, detect risks, and add domain-specific requirements for better AI interaction quality.

Instructions

Analyze and optimize a user prompt using the OTA (Optimize-Then-Answer) Framework. Returns clarity score, domain classification, risk flags, targeted questions (if needed), and an enhanced prompt ready for AI processing.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | The user prompt to optimize | (none) |
| context | No | Optional additional context about the request | (none) |
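For illustration, the arguments for a call to this tool might look like the sketch below. The prompt text and the `validateArgs` helper are hypothetical; only the shape (`prompt` required, `context` optional) comes from the schema above.

```typescript
// Hypothetical arguments for an optimize_prompt call.
// Only `prompt` is required; `context` is optional.
const args = {
  prompt: "Write a migration plan for our database",
  context: "Postgres 14 to 16, zero-downtime preferred",
};

// Minimal client-side check mirroring the schema's `required: ['prompt']`.
function validateArgs(a: Record<string, unknown>): boolean {
  return (
    typeof a.prompt === "string" &&
    a.prompt.length > 0 &&
    (a.context === undefined || typeof a.context === "string")
  );
}
```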

Implementation Reference

  • The handler function that executes the 'optimize_prompt' tool logic. It extracts the prompt and optional context from the request arguments, calls PromptOptimizer.optimize, formats the response (status header, clarification questions if needed, domain, risk flags, and the optimized prompt), and returns it in the MCP content format.
    if (request.params.name === 'optimize_prompt') {
      const { prompt, context } = request.params.arguments as { prompt: string; context?: string };
    
      try {
        const result = await optimizer.optimize(prompt, context);
    
        let response = `${result.optimization_header}\n\n`;
    
        if (result.needs_clarification && result.questions.length > 0) {
          response += `**⚠️ Clarification Needed** (Clarity: ${(result.clarity_score * 100).toFixed(0)}%)\n\n`;
          response += `**Please answer these questions before I proceed:**\n\n`;
          result.questions.forEach((q, i) => {
            response += `${i + 1}. ${q}\n`;
          });
          response += `\n---\n\n`;
        } else {
          response += `**✓ Ready to Process** (Clarity: ${(result.clarity_score * 100).toFixed(0)}%)\n\n`;
        }
    
        response += `**Domain:** ${result.domain}\n`;
        response += `**Risk Flags:** ${result.risk_flags.length > 0 ? result.risk_flags.join(', ') : 'None'}\n\n`;
    
        if (!result.needs_clarification) {
          response += `**Optimized Prompt:**\n\`\`\`\n${result.optimized_prompt}\n\`\`\`\n\n`;
          response += `Use this enhanced prompt for the AI request to ensure comprehensive, domain-appropriate output.`;
        }
    
        return {
          content: [
            {
              type: 'text',
              text: response,
            },
          ],
        };
      } catch (error) {
        return {
          content: [
            {
              type: 'text',
              text: `Error optimizing prompt: ${error}`,
            },
          ],
          isError: true,
        };
      }
    }
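The handler renders the 0..1 clarity score as a whole-number percentage via `(clarity_score * 100).toFixed(0)` and branches the status line on the clarification flag. A standalone sketch of that formatting (the function names here are illustrative, not from the source):

```typescript
// Render a 0..1 clarity score as a whole-number percentage,
// the same way the handler does with toFixed(0).
function formatClarity(score: number): string {
  return `${(score * 100).toFixed(0)}%`;
}

// The status line branches on whether clarification is needed.
function statusLine(needsClarification: boolean, score: number): string {
  return needsClarification
    ? `**⚠️ Clarification Needed** (Clarity: ${formatClarity(score)})`
    : `**✓ Ready to Process** (Clarity: ${formatClarity(score)})`;
}
```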
  • Tool schema definition including name, description, and input schema specifying required 'prompt' and optional 'context' parameters.
    const OPTIMIZE_TOOL: Tool = {
      name: 'optimize_prompt',
      description:
        'Analyze and optimize a user prompt using the OTA (Optimize-Then-Answer) Framework. ' +
        'Returns clarity score, domain classification, risk flags, targeted questions (if needed), ' +
        'and an enhanced prompt ready for AI processing.',
      inputSchema: {
        type: 'object',
        properties: {
          prompt: {
            type: 'string',
            description: 'The user prompt to optimize',
          },
          context: {
            type: 'string',
            description: 'Optional additional context about the request',
          },
        },
        required: ['prompt'],
      },
    };
  • src/index.ts:419-423 (registration)
    Registers the 'optimize_prompt' tool by returning it in the ListToolsRequest handler response.
    server.setRequestHandler(ListToolsRequestSchema, async () => {
      return {
        tools: [OPTIMIZE_TOOL],
      };
    });
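The registration above pairs a request schema with a handler on the server object. A minimal stand-in for that registry pattern is sketched below; the shapes are simplified and synchronous for illustration, whereas the real MCP SDK handlers are async and typed against request schemas.

```typescript
// Simplified stand-in for the MCP SDK's request-handler registry.
type Handler = (req: unknown) => unknown;

class MiniServer {
  private handlers = new Map<string, Handler>();

  // Analogue of server.setRequestHandler(ListToolsRequestSchema, ...)
  setRequestHandler(method: string, handler: Handler): void {
    this.handlers.set(method, handler);
  }

  // Dispatch an incoming request to its registered handler.
  handle(method: string, req: unknown): unknown {
    const handler = this.handlers.get(method);
    if (!handler) throw new Error(`No handler registered for ${method}`);
    return handler(req);
  }
}

const server = new MiniServer();
server.setRequestHandler("tools/list", () => ({
  tools: [{ name: "optimize_prompt" }],
}));
```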
  • Core helper method in the PromptOptimizer class that orchestrates prompt analysis, domain detection, clarity scoring, risk assessment, question generation, and creation of the optimized prompt.
    async optimize(prompt: string, context?: string): Promise<PromptAnalysis> {
      if (!this.framework) {
        await this.loadFramework();
      }
    
      const analysis = this.analyzePrompt(prompt);
      const domain = analysis.domain!;
      const clarityScore = analysis.clarity_score!;
      const risks = analysis.risk_flags!;
    
      const questions = this.generateQuestions(prompt, domain, clarityScore);
      const needsClarification = clarityScore < 0.6 || risks.some(r => ['policy', 'safety'].includes(r));
    
      const optimizedPrompt = this.createOptimizedPrompt(prompt, domain, clarityScore, risks);
    
      const header = `[OPTIMIZED] Domain: ${domain} | Clarity: ${(clarityScore * 100).toFixed(0)}% | ` +
        `Risks: ${risks.length > 0 ? risks.join(', ') : 'none'}`;
    
      return {
        domain,
        clarity_score: clarityScore,
        risk_flags: risks,
        questions,
        optimized_prompt: optimizedPrompt,
        optimization_header: header,
        assumptions: [],
        needs_clarification: needsClarification,
      };
    }
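The clarification gate in `optimize` combines a clarity threshold with risk flags: clarification is required when the score falls below 0.6 or when a 'policy' or 'safety' flag is present. Extracted as a pure function for illustration (the function name is mine, but the thresholds mirror the source):

```typescript
// Clarification gate from optimize(): low clarity or a high-stakes
// risk flag forces a round of clarifying questions.
function needsClarification(clarityScore: number, riskFlags: string[]): boolean {
  return clarityScore < 0.6 || riskFlags.some(r => ["policy", "safety"].includes(r));
}
```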
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that the tool returns 'clarity score, domain classification, risk flags, targeted questions (if needed), and an enhanced prompt,' which gives some insight into outputs. However, it doesn't disclose critical behavioral traits like whether this is a read-only operation, potential side effects, performance characteristics, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured in a single sentence that efficiently conveys the core functionality and outputs. Every part of the sentence serves a purpose, with no redundant information. It could be slightly improved by front-loading the purpose more explicitly, but it's already quite efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (analyzing and optimizing prompts), no annotations, and no output schema, the description is somewhat incomplete. It lists output components but doesn't explain their format or significance. For a tool that returns multiple structured outputs, more detail on what each component means would be helpful for an AI agent to interpret results effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the input schema already documents both parameters thoroughly. The description doesn't add any meaningful semantic context beyond what's in the schema (e.g., it doesn't explain how 'context' influences optimization or provide examples). This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Analyze and optimize a user prompt using the OTA Framework.' It specifies the verb (analyze and optimize) and resource (user prompt), and mentions the framework used. However, with no sibling tools, there's no explicit differentiation needed, so it doesn't achieve the highest score for sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, constraints, or scenarios where this optimization is particularly beneficial. The lack of sibling tools means there's no need to differentiate, but general usage context is still missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
