
enhance_prompt

Transform vague or unclear prompts into specific, actionable instructions by adding detail, context, and clarity for better AI responses.

Instructions

be specific|more detail|clarify|elaborate|vague|enhance|improve prompt - Transform vague requests into clear, actionable prompts

Input Schema

Name               Required  Description                                 Default
prompt             Yes       Original prompt to enhance                  —
context            No        Additional context or project information   —
enhancement_type   No        Type of enhancement                         all
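A hypothetical arguments object for a call to this tool (field names come from the schema above; the values are purely illustrative):

```typescript
// Illustrative enhance_prompt arguments; only `prompt` is required.
const args = {
  prompt: "fix my code",                    // required
  context: "TypeScript MCP server project", // optional
  enhancement_type: "clarity"               // optional; the handler defaults to "all"
};

console.log(Object.keys(args).length); // 3
```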

Implementation Reference

  • The main handler function that executes the enhance_prompt tool. It analyzes the input prompt for clarity, specificity, and context needs, then constructs an enhanced prompt with structured sections like Objective, Background, Requirements, and Quality Criteria.
    export async function enhancePrompt(args: { prompt: string; context?: string; enhancement_type?: string }): Promise<ToolResult> {
      const { prompt, context = '', enhancement_type = 'all' } = args;
      
      // Enhancement logic
      const enhancements: Record<string, string[]> = {
        clarity: [],
        specificity: [],
        context: [],
        structure: []
      };
      
      // Analyze original prompt
      const promptLength = prompt.length;
      const hasQuestion = prompt.includes('?');
      const hasSpecificTerms = /\b(implement|develop|modify|analyze|debug|refactor)\b/i.test(prompt);
      
      // Apply enhancements based on type
      if (enhancement_type === 'clarity' || enhancement_type === 'all') {
        if (promptLength < 20) {
          enhancements.clarity.push('Add more specific description');
        }
        if (!hasQuestion && !hasSpecificTerms) {
          enhancements.clarity.push('Convert to clear request or question format');
        }
        if (hasQuestion && promptLength > 100) {
          enhancements.clarity.push('Question is clear and detailed');
        }
      }
      
      if (enhancement_type === 'specificity' || enhancement_type === 'all') {
        if (!prompt.match(/\b(language|framework|library|version)\b/i)) {
          enhancements.specificity.push('Specify tech stack');
        }
        if (!prompt.match(/\b(input|output|result|format)\b/i)) {
          enhancements.specificity.push('Define expected input/output');
        }
      }
      
      if (enhancement_type === 'context' || enhancement_type === 'all') {
        if (!prompt.match(/\b(purpose|reason|background|situation)\b/i)) {
          enhancements.context.push('Add task purpose and background');
        }
        if (context) {
          enhancements.context.push('Integrate provided context');
        }
      }
      
      // Generate enhanced prompt
      let enhancedPrompt = prompt;
      
      // Build enhanced version
      const components = [];
      
      // Add objective
      components.push(`**Objective**: ${prompt}`);
      
      // Add context if provided
      if (context) {
        components.push(`**Background**: ${context}`);
      }
      
      // Add specific requirements based on analysis
      const requirements = [];
      if (enhancements.specificity.includes('Specify tech stack')) {
        requirements.push('- Please specify the language/framework to use');
      }
      if (enhancements.specificity.includes('Define expected input/output')) {
        requirements.push('- Please describe expected input and output formats');
      }
      
      if (requirements.length > 0) {
        components.push(`**Requirements**:\n${requirements.join('\n')}`);
      }
      
      // Add quality considerations
      const quality = [
        '- Include error handling',
        '- Testable structure',
        '- Extensible design',
        '- Performance considerations',
        '- Clear documentation',
        '- Follow best practices',
        '- Code readability',
        '- Maintainability'
      ];
      components.push(`**Quality Criteria**:\n${quality.join('\n')}`);
      
      enhancedPrompt = components.join('\n\n');
      
      const result = {
        action: 'enhance_prompt',
        original: prompt,
        enhanced: enhancedPrompt,
        enhancements,
        improvements: [
          enhancements.clarity.length > 0 ? `Clarity improved (${enhancements.clarity.length} items)` : null,
          enhancements.specificity.length > 0 ? `Specificity added (${enhancements.specificity.length} items)` : null,
          enhancements.context.length > 0 ? `Context enriched (${enhancements.context.length} items)` : null
        ].filter(Boolean),
        status: 'success'
      };
      
      return {
        content: [{ type: 'text', text: `Original: ${prompt}\n\nEnhanced:\n${enhancedPrompt}\n\nImprovements: ${result.improvements.join(', ')}` }]
      };
    }
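The analysis step in the handler above reduces to a few string heuristics. A standalone sketch of the clarity checks (variable names mirror the handler for readability, but this block is illustrative and not exported by the server):

```typescript
// Re-running the handler's clarity heuristics on a short prompt.
const prompt = "fix bug";

const promptLength = prompt.length;       // 7 characters, under the 20-char threshold
const hasQuestion = prompt.includes("?"); // false
const hasSpecificTerms = /\b(implement|develop|modify|analyze|debug|refactor)\b/i.test(prompt); // false

const clarity: string[] = [];
if (promptLength < 20) clarity.push("Add more specific description");
if (!hasQuestion && !hasSpecificTerms) clarity.push("Convert to clear request or question format");

console.log(clarity.length); // 2 — both clarity suggestions fire for this prompt
```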
  • ToolDefinition object defining the name, description, input schema (prompt required, optional context and enhancement_type), and annotations for the enhance_prompt tool.
    export const enhancePromptDefinition: ToolDefinition = {
      name: 'enhance_prompt',
      description: 'be specific|more detail|clarify|elaborate|vague|enhance|improve prompt - Transform vague requests into clear, actionable prompts',
      inputSchema: {
        type: 'object',
        properties: {
          prompt: { type: 'string', description: 'Original prompt to enhance' },
          context: { type: 'string', description: 'Additional context or project information' },
          enhancement_type: {
            type: 'string',
            enum: ['clarity', 'specificity', 'context', 'all'],
            description: 'Type of enhancement (default: all)'
          }
        },
        required: ['prompt']
      },
      annotations: {
        title: 'Enhance Prompt',
        audience: ['user', 'assistant']
      }
    };
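The enhancement_type enum is enforced only at the schema level. A hypothetical runtime guard mirroring that enum (not part of the server, shown only to make the accepted values concrete):

```typescript
// Hypothetical type guard matching the enum in enhancePromptDefinition's inputSchema.
const ENHANCEMENT_TYPES = ["clarity", "specificity", "context", "all"] as const;
type EnhancementType = (typeof ENHANCEMENT_TYPES)[number];

function isEnhancementType(value: unknown): value is EnhancementType {
  return typeof value === "string" &&
    (ENHANCEMENT_TYPES as readonly string[]).includes(value);
}

console.log(isEnhancementType("clarity")); // true
console.log(isEnhancementType("tone"));    // false
```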
  • src/index.ts:104-160 (registration)
    The enhancePromptDefinition is included in the tools array (specifically at line 151), which is used by the MCP server for listing available tools.
    const tools: ToolDefinition[] = [
      // Time Utility Tools
      getCurrentTimeDefinition,
    
      // Semantic Code Analysis Tools (Serena-inspired)
      findSymbolDefinition,
      findReferencesDefinition,
    
      // Sequential Thinking Tools
      createThinkingChainDefinition,
      analyzeProblemDefinition,
      stepByStepAnalysisDefinition,
      breakDownProblemDefinition,
      thinkAloudProcessDefinition,
      formatAsPlanDefinition,
    
      // Browser Development Tools
      monitorConsoleLogsDefinition,
      inspectNetworkRequestsDefinition,
    
      // Memory Management Tools
      saveMemoryDefinition,
      recallMemoryDefinition,
      listMemoriesDefinition,
      deleteMemoryDefinition,
      searchMemoriesDefinition,
      updateMemoryDefinition,
      autoSaveContextDefinition,
      restoreSessionContextDefinition,
      prioritizeMemoryDefinition,
      startSessionDefinition,
    
      // Convention Tools
      getCodingGuideDefinition,
      applyQualityRulesDefinition,
      validateCodeQualityDefinition,
      analyzeComplexityDefinition,
      checkCouplingCohesionDefinition,
      suggestImprovementsDefinition,
    
      // Planning Tools
      generatePrdDefinition,
      createUserStoriesDefinition,
      analyzeRequirementsDefinition,
      featureRoadmapDefinition,
    
      // Prompt Enhancement Tools
      enhancePromptDefinition,
      analyzePromptDefinition,
      enhancePromptGeminiDefinition,
    
      // Reasoning Tools
      applyReasoningFrameworkDefinition,
    
      // UI Preview Tools
      previewUiAsciiDefinition
    ];
  • src/index.ts:603-700 (registration)
    The switch statement in executeToolCall dispatches 'enhance_prompt' calls to the enhancePrompt handler function (lines 682-683). This is the runtime registration for tool execution.
    async function executeToolCall(name: string, args: unknown): Promise<CallToolResult> {
      switch (name) {
        // Time Utility Tools
        case 'get_current_time':
          return await getCurrentTime(args as any) as CallToolResult;
    
        // Semantic Code Analysis Tools
        case 'find_symbol':
          return await findSymbol(args as any) as CallToolResult;
        case 'find_references':
          return await findReferences(args as any) as CallToolResult;
    
        // Sequential Thinking Tools
        case 'create_thinking_chain':
          return await createThinkingChain(args as any) as CallToolResult;
        case 'analyze_problem':
          return await analyzeProblem(args as any) as CallToolResult;
        case 'step_by_step_analysis':
          return await stepByStepAnalysis(args as any) as CallToolResult;
        case 'break_down_problem':
          return await breakDownProblem(args as any) as CallToolResult;
        case 'think_aloud_process':
          return await thinkAloudProcess(args as any) as CallToolResult;
        case 'format_as_plan':
          return await formatAsPlan(args as any) as CallToolResult;
    
        // Browser Development Tools
        case 'monitor_console_logs':
          return await monitorConsoleLogs(args as any) as CallToolResult;
        case 'inspect_network_requests':
          return await inspectNetworkRequests(args as any) as CallToolResult;
    
        // Memory Management Tools
        case 'save_memory':
          return await saveMemory(args as any) as CallToolResult;
        case 'recall_memory':
          return await recallMemory(args as any) as CallToolResult;
        case 'list_memories':
          return await listMemories(args as any) as CallToolResult;
        case 'delete_memory':
          return await deleteMemory(args as any) as CallToolResult;
        case 'search_memories':
          return await searchMemoriesHandler(args as any) as CallToolResult;
        case 'update_memory':
          return await updateMemory(args as any) as CallToolResult;
        case 'auto_save_context':
          return await autoSaveContext(args as any) as CallToolResult;
        case 'restore_session_context':
          return await restoreSessionContext(args as any) as CallToolResult;
        case 'prioritize_memory':
          return await prioritizeMemory(args as any) as CallToolResult;
        case 'start_session':
          return await startSession(args as any) as CallToolResult;
    
        // Convention Tools
        case 'get_coding_guide':
          return await getCodingGuide(args as any) as CallToolResult;
        case 'apply_quality_rules':
          return await applyQualityRules(args as any) as CallToolResult;
        case 'validate_code_quality':
          return await validateCodeQuality(args as any) as CallToolResult;
        case 'analyze_complexity':
          return await analyzeComplexity(args as any) as CallToolResult;
        case 'check_coupling_cohesion':
          return await checkCouplingCohesion(args as any) as CallToolResult;
        case 'suggest_improvements':
          return await suggestImprovements(args as any) as CallToolResult;
    
        // Planning Tools
        case 'generate_prd':
          return await generatePrd(args as any) as CallToolResult;
        case 'create_user_stories':
          return await createUserStories(args as any) as CallToolResult;
        case 'analyze_requirements':
          return await analyzeRequirements(args as any) as CallToolResult;
        case 'feature_roadmap':
          return await featureRoadmap(args as any) as CallToolResult;
    
        // Prompt Enhancement Tools
        case 'enhance_prompt':
          return await enhancePrompt(args as any) as CallToolResult;
        case 'analyze_prompt':
          return await analyzePrompt(args as any) as CallToolResult;
        case 'enhance_prompt_gemini':
          return await enhancePromptGemini(args as any) as CallToolResult;
    
        // Reasoning Tools
        case 'apply_reasoning_framework':
          return await applyReasoningFramework(args as any) as CallToolResult;
    
        // UI Preview Tools
        case 'preview_ui_ascii':
          return await previewUiAscii(args as any) as CallToolResult;
    
        default:
          throw new McpError(ErrorCode.MethodNotFound, `Unknown tool: ${name}`);
      }
    }
  • src/index.ts:74-75 (registration)
    Import statement bringing in the enhancePrompt handler and its definition from the implementation file.
    import { enhancePrompt, enhancePromptDefinition } from './tools/prompt/enhancePrompt.js';
    import { enhancePromptGemini, enhancePromptGeminiDefinition } from './tools/prompt/enhancePromptGemini.js';
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided beyond the title, so the description carries the full burden. It describes the tool's behavior as transforming prompts, which is clear, but lacks details on output format, potential side effects, or limitations. This is adequate given the absence of annotations but misses opportunities to add richer context like response structure or constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, using a single sentence with no wasted words. It efficiently communicates the core purpose and key enhancement types, making it easy to scan and understand quickly. Every element earns its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema, no annotations beyond title), the description is complete enough for basic use but lacks depth. It covers the purpose and enhancement types, but without output schema or rich annotations, it should ideally explain more about the transformation process or result format to fully compensate for missing structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (prompt, context, enhancement_type) with descriptions and enums. The description adds minimal value beyond this by mentioning enhancement types in a list, but does not explain parameter interactions or provide additional semantics. Baseline 3 is appropriate as the schema handles most of the documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Transform vague requests into clear, actionable prompts') and distinguishes it from siblings like 'analyze_prompt' or 'enhance_prompt_gemini' by focusing on transformation rather than analysis or alternative enhancement methods. The title annotation 'Enhance Prompt' reinforces this, but the description adds valuable specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through phrases like 'Transform vague requests' and lists enhancement types (clarity, specificity, context, all), providing clear guidance on when to use it for prompt improvement. However, it does not explicitly state when not to use it or name alternatives like 'enhance_prompt_gemini', which slightly limits differentiation from siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
