Prompt Optimizer

optimize_prompt

Analyze and improve LLM prompts by scoring clarity, specificity, structure, and completeness. Returns optimized rewrites with explanations of changes for better AI responses.

Instructions

Analyze and improve an LLM prompt. Scores clarity, specificity, structure, and completeness. Returns an optimized rewrite with a summary of what changed and why. Powered by Claude.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `prompt` | Yes | The LLM prompt to analyze and/or improve | |
| `model` | No | Target model (e.g. 'gpt-4o', 'claude-3-5-sonnet') | `gpt-4o` |
| `task` | No | What this prompt is trying to accomplish | |
| `mode` | No | 'both' returns analysis + improved prompt; 'analyze' scores only; 'improve' rewrites only | `both` |
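Only `prompt` is required; `model` and `mode` fall back to their schema defaults when omitted. A minimal sketch of how the resolved arguments look (plain TypeScript mirroring the defaults, not the server's actual code — the real server resolves these via its zod schema):

```typescript
type Mode = "improve" | "analyze" | "both";

interface OptimizePromptArgs {
  prompt: string; // required
  model?: string; // defaults to "gpt-4o"
  task?: string;  // optional context about the prompt's goal
  mode?: Mode;    // defaults to "both"
}

// Apply the schema defaults: keys present in `args` win over the defaults.
// (Note: an explicit `model: undefined` would override the default here,
// unlike zod's `.default()`, which also replaces undefined.)
function withDefaults(args: OptimizePromptArgs) {
  return { model: "gpt-4o", mode: "both" as Mode, ...args };
}
```

So a call that supplies only `prompt` is analyzed and rewritten for `gpt-4o`.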

Implementation Reference

  • The `optimize_prompt` tool registration and handler. The handler delegates to a helper function `callToolApi` to call an external Agent Toolbelt API, then formats the result as Markdown.
    server.registerTool(
      "optimize_prompt",
      {
        title: "Prompt Optimizer",
        description:
          "Analyze and improve an LLM prompt. Scores clarity, specificity, structure, and completeness. " +
          "Returns an optimized rewrite with a summary of what changed and why. Powered by Claude.",
        inputSchema: {
          prompt: z.string().describe("The LLM prompt to analyze and/or improve"),
          model: z.string().default("gpt-4o").describe("Target model (e.g. 'gpt-4o', 'claude-3-5-sonnet')"),
          task: z.string().optional().describe("What this prompt is trying to accomplish"),
          mode: z
            .enum(["improve", "analyze", "both"])
            .default("both")
            .describe("'both' returns analysis + improved prompt; 'analyze' scores only; 'improve' rewrites only"),
        },
      },
      async ({ prompt, model, task, mode }) => {
        const result = await callToolApi("prompt-optimizer", { prompt, model, task, mode });
        const data = result as any;
        const r = data.result;
    
        const lines: string[] = [`**Prompt Optimizer** (targeting: ${r.model})`];
    
        if (r.scores) {
          lines.push(
            "",
            "**Scores:**",
            `  Clarity:      ${r.scores.clarity}/10`,
            `  Specificity:  ${r.scores.specificity}/10`,
            `  Structure:    ${r.scores.structure}/10`,
            `  Completeness: ${r.scores.completeness}/10`,
            `  Overall:      ${r.scores.overall}/10`
          );
        }
    
        if (r.issues?.length) {
          lines.push("", "**Issues found:**", ...r.issues.map((i: string) => `  - ${i}`));
        }
    
        if (r.suggestions?.length) {
          lines.push("", "**Suggestions:**", ...r.suggestions.map((s: string) => `  - ${s}`));
        }
    
        if (r.improvedPrompt) {
          lines.push("", "**Improved prompt:**", "```", r.improvedPrompt, "```");
        }
    
        if (r.changesSummary?.length) {
          lines.push("", "**Changes made:**", ...r.changesSummary.map((c: string) => `  - ${c}`));
        }
    
        lines.push(
          "",
          `**Token stats:** original: ${r.tokenStats.original}${r.tokenStats.improved ? ` → improved: ${r.tokenStats.improved} (${r.tokenStats.delta > 0 ? "+" : ""}${r.tokenStats.delta})` : ""}`
        );
    
        return { content: [{ type: "text" as const, text: lines.join("\n") }] };
      }
    );
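The `callToolApi` helper is referenced but not shown on this page. A plausible shape, assuming a simple JSON-over-HTTP POST — the base URL, headers, and response envelope below are placeholders, not the real Agent Toolbelt API:

```typescript
// Hypothetical sketch: the endpoint and response shape are assumptions.
const API_BASE = "https://example.invalid/api/tools"; // placeholder, not the real endpoint

// Request construction is split from transport so it can be inspected and tested
// without a network round-trip.
function buildToolRequest(tool: string, args: Record<string, unknown>) {
  return {
    url: `${API_BASE}/${tool}`,
    init: {
      method: "POST" as const,
      headers: { "content-type": "application/json" },
      body: JSON.stringify(args),
    },
  };
}

async function callToolApi(tool: string, args: Record<string, unknown>): Promise<unknown> {
  const { url, init } = buildToolRequest(tool, args);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`Tool API '${tool}' failed: ${res.status}`);
  return res.json(); // the handler above reads `.result` off this envelope
}
```

The handler's `callToolApi("prompt-optimizer", …)` call would then POST the arguments and hand back the parsed JSON for formatting.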

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the underlying engine ('Powered by Claude') and explains the return format (rewrite + summary), but omits safety characteristics (read-only vs. destructive), rate limits, or error behaviors that annotations would typically cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, zero waste. Front-loaded with the core action ('Analyze and improve'), followed by evaluation criteria, output specification, and implementation note. Every sentence earns its place with no redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description appropriately compensates by detailing the return value ('optimized rewrite with a summary'). For a 4-parameter tool with simple types and complete schema coverage, this is sufficient, though it could briefly mention error handling or input validation limits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds context about what gets analyzed (the four scoring dimensions) and what outputs to expect, but doesn't add parameter-specific semantics beyond what's already clearly documented in the schema property descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides specific verbs ('Analyze and improve') and a clear resource ('LLM prompt'), distinguishing it from text analysis siblings like 'extract_from_text' or 'compare_documents'. It specifically mentions scoring dimensions (clarity, specificity, structure, completeness) that uniquely identify this tool's function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by explaining the output ('Returns an optimized rewrite'), but lacks explicit when-to-use guidance or alternative comparisons. While the 'mode' parameter schema explains the three operating modes, the description itself doesn't state prerequisites or when to prefer this over siblings like 'count_tokens' or 'pack_context_window'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

    curl -X GET 'https://glama.ai/api/mcp/v1/servers/marras0914/agent-toolbelt'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.