load_prompt

Retrieve prompt templates for ADR generation and analysis on-demand, reducing token usage by loading only when needed.

Instructions

Load a specific prompt or prompt section on-demand. Part of the CE-MCP lazy-loading system, which reduces token usage by ~96% by loading prompts only when needed. Use this to retrieve prompt templates for ADR generation, analysis, deployment, and other operations.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| promptName | Yes | Name of the prompt to load. One of: "adr-suggestion", "deployment-analysis", "environment-analysis", "research-question", "rule-generation", "analysis", "research-integration", "validated-pattern", "security" | |
| section | No | Specific section within the prompt to load. If not provided, loads the entire prompt. Available sections depend on the prompt. | |
| estimateOnly | No | If true, returns only a token estimate without loading the full prompt content | |
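For illustration, a sketch of the argument object a client might send for this schema (the values shown are examples, not defaults):

```typescript
// Example arguments for a load_prompt call. Values are illustrative;
// only promptName is required by the schema.
const loadPromptArgs = {
  promptName: 'adr-suggestion', // must be one of the enum values
  section: 'tech_debt',         // optional: omit to load the whole prompt
  estimateOnly: false,          // optional: true returns only a token estimate
};
```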

Implementation Reference

  • Registration of the 'load_prompt' MCP tool in the central TOOL_CATALOG, including metadata, description, input schema, and CE-MCP support.
    TOOL_CATALOG.set('load_prompt', {
      name: 'load_prompt',
      shortDescription: 'Load prompts on-demand (CE-MCP)',
      fullDescription:
        'Loads prompts on-demand instead of eagerly loading all prompts at startup. Part of CE-MCP lazy loading system that reduces token usage by ~96%.',
      category: 'utility',
      complexity: 'simple',
      tokenCost: { min: 100, max: 500 },
      hasCEMCPDirective: true,
      relatedTools: ['search_tools', 'analyze_project_ecosystem'],
      keywords: ['prompt', 'load', 'lazy', 'ce-mcp', 'token', 'optimization'],
      requiresAI: false,
      inputSchema: {
        type: 'object',
        properties: {
          promptName: {
            type: 'string',
            description: 'Name of the prompt to load',
            enum: [
              'adr-suggestion',
              'deployment-analysis',
              'environment-analysis',
              'research-question',
              'rule-generation',
              'analysis',
              'research-integration',
              'validated-pattern',
              'security',
            ],
          },
          section: {
            type: 'string',
            description: 'Specific section within the prompt to load',
          },
          estimateOnly: {
            type: 'boolean',
            description: 'Return only token estimate without loading content',
          },
        },
        required: ['promptName'],
      },
    });
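The `tokenCost` metadata lets a caller budget before invoking the tool. A minimal sketch of such a check; the `fitsBudget` helper and the trimmed `ToolEntry` type are illustrative, not part of the server:

```typescript
// Trimmed stand-in for a TOOL_CATALOG entry (only the fields this sketch uses)
type ToolEntry = { name: string; tokenCost: { min: number; max: number } };

const TOOL_CATALOG = new Map<string, ToolEntry>();
TOOL_CATALOG.set('load_prompt', {
  name: 'load_prompt',
  tokenCost: { min: 100, max: 500 },
});

// Hypothetical helper: compare the tool's worst-case token cost against the
// caller's remaining budget before deciding to invoke it
function fitsBudget(toolName: string, remainingTokens: number): boolean {
  const entry = TOOL_CATALOG.get(toolName);
  return entry !== undefined && entry.tokenCost.max <= remainingTokens;
}
```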
  • Primary handler logic for loading prompts on-demand with caching, section extraction, and metadata validation using PROMPT_CATALOG. This implements the core functionality of the load_prompt tool.
    async loadPrompt(promptName: string, section?: string): Promise<string> {
      const metadata = PROMPT_CATALOG[promptName];
      if (!metadata) {
        throw new Error(`Unknown prompt: ${promptName}`);
      }
    
      // Check cache
      const cacheKey = promptName;
      const cached = this.cache.get(cacheKey);
    
      if (cached && Date.now() < cached.expiry) {
        if (section) {
          const sectionContent = cached.sections.get(section);
          if (sectionContent) {
            return sectionContent;
          }
          // Mirror the fresh-load path below: a missing section is an error,
          // not a silent fallback to the full prompt
          throw new Error(`Section '${section}' not found in prompt '${promptName}'`);
        }
        return cached.content;
      }
    
      // Load prompt dynamically
      const content = await this.loadPromptFile(metadata.file);
    
      // Parse sections if needed
      const sections = this.parseSections(content, metadata.sections);
    
      // Cache the result
      this.cache.set(cacheKey, {
        content,
        sections,
        expiry: Date.now() + this.cacheTTL * 1000,
      });
    
      if (section) {
        const sectionContent = sections.get(section);
        if (sectionContent) {
          return sectionContent;
        }
        throw new Error(`Section '${section}' not found in prompt '${promptName}'`);
      }
    
      return content;
    }
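The handler delegates section splitting to `parseSections`, whose implementation is not shown. A minimal sketch, assuming sections are delimited by `// === name ===` comment markers (the delimiter format is an assumption, not confirmed by the source):

```typescript
// Hypothetical parseSections: split a prompt file into named sections.
// Assumes each section starts with a '// === name ===' marker line.
function parseSections(content: string, sectionNames: string[]): Map<string, string> {
  const sections = new Map<string, string>();
  for (const name of sectionNames) {
    const marker = `// === ${name} ===`;
    const start = content.indexOf(marker);
    if (start === -1) continue; // section absent from this file
    const bodyStart = start + marker.length;
    // The section body runs until the next marker or end of file
    const next = content.indexOf('// ===', bodyStart);
    const body = next === -1 ? content.slice(bodyStart) : content.slice(bodyStart, next);
    sections.set(name, body.trim());
  }
  return sections;
}
```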
  • PROMPT_CATALOG constant providing metadata for all available prompts, including file paths, token estimates, categories, sections, and load-on-demand flags. Essential helper for prompt loading.
    export const PROMPT_CATALOG: PromptCatalog = {
      // ADR-related prompts
      'adr-suggestion': {
        file: 'adr-suggestion-prompts.ts',
        tokens: 1830,
        category: 'adr',
        sections: [
          'implicit_decisions',
          'tech_debt',
          'security_decisions',
          'cross_cutting',
          'recommendation_template',
        ],
        loadOnDemand: true,
      },
    
      // Deployment analysis prompts
      'deployment-analysis': {
        file: 'deployment-analysis-prompts.ts',
        tokens: 3150,
        category: 'deployment',
        sections: [
          'readiness_check',
          'validation_criteria',
          'rollback_plan',
          'infrastructure_review',
          'security_scan',
        ],
        loadOnDemand: true,
      },
    
      // Environment analysis prompts
      'environment-analysis': {
        file: 'environment-analysis-prompts.ts',
        tokens: 3050,
        category: 'analysis',
        sections: ['dependency_scan', 'config_validation', 'compliance_check', 'resource_assessment'],
        loadOnDemand: true,
      },
    
      // Research question prompts
      'research-question': {
        file: 'research-question-prompts.ts',
        tokens: 3120,
        category: 'research',
        sections: ['question_generation', 'research_plan', 'source_evaluation', 'synthesis_template'],
        loadOnDemand: true,
      },
    
      // Rule generation prompts
      'rule-generation': {
        file: 'rule-generation-prompts.ts',
        tokens: 2850,
        category: 'rules',
        sections: ['rule_template', 'validation_rules', 'enforcement_policy', 'exception_handling'],
        loadOnDemand: true,
      },
    
      // General analysis prompts
      analysis: {
        file: 'analysis-prompts.ts',
        tokens: 2310,
        category: 'analysis',
        sections: ['project_analysis', 'code_review', 'architecture_assessment', 'quality_metrics'],
        loadOnDemand: true,
      },
    
      // Research integration prompts
      'research-integration': {
        file: 'research-integration-prompts.ts',
        tokens: 1785,
        category: 'research',
        sections: ['integration_strategy', 'synthesis_plan', 'recommendation_format'],
        loadOnDemand: true,
      },
    
      // Validated pattern prompts
      'validated-pattern': {
        file: 'validated-pattern-prompts.ts',
        tokens: 1565,
        category: 'deployment',
        sections: ['pattern_detection', 'validation_criteria', 'deployment_guidance'],
        loadOnDemand: true,
      },
    
      // Security prompts
      security: {
        file: 'security-prompts.ts',
        tokens: 1270,
        category: 'security',
        sections: ['vulnerability_scan', 'masking_rules', 'compliance_check'],
        loadOnDemand: true,
      },
    
      // Main index (orchestration)
      index: {
        file: 'index.ts',
        tokens: 3700,
        category: 'analysis',
        sections: ['orchestration', 'execution_flow', 'error_handling'],
        dependencies: ['adr-suggestion', 'deployment-analysis', 'analysis'],
        loadOnDemand: false, // Core orchestration loaded at startup
      },
    };
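With the catalog's `tokens` field, the `estimateOnly` path can answer without touching the file system. A sketch against a two-entry stand-in catalog; the return shape is an assumption:

```typescript
// Stand-in for two entries of PROMPT_CATALOG, trimmed for illustration
const CATALOG: Record<string, { tokens: number; sections: string[] }> = {
  security: {
    tokens: 1270,
    sections: ['vulnerability_scan', 'masking_rules', 'compliance_check'],
  },
  'adr-suggestion': {
    tokens: 1830,
    sections: [
      'implicit_decisions',
      'tech_debt',
      'security_decisions',
      'cross_cutting',
      'recommendation_template',
    ],
  },
};

// Hypothetical estimateOnly path: answer from catalog metadata alone,
// never reading the prompt file (the return shape is an assumption)
function estimatePrompt(name: string): { tokens: number; sections: string[] } {
  const metadata = CATALOG[name];
  if (!metadata) {
    throw new Error(`Unknown prompt: ${name}`);
  }
  return { tokens: metadata.tokens, sections: metadata.sections };
}
```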
  • Input schema definition for the load_prompt tool, specifying parameters like promptName (with enum), section, and estimateOnly.
    inputSchema: {
      type: 'object',
      properties: {
        promptName: {
          type: 'string',
          description: 'Name of the prompt to load',
          enum: [
            'adr-suggestion',
            'deployment-analysis',
            'environment-analysis',
            'research-question',
            'rule-generation',
            'analysis',
            'research-integration',
            'validated-pattern',
            'security',
          ],
        },
        section: {
          type: 'string',
          description: 'Specific section within the prompt to load',
        },
        estimateOnly: {
          type: 'boolean',
          description: 'Return only token estimate without loading content',
        },
      },
      required: ['promptName'],
    },
  • Internal sandbox operation handler opLoadPrompt used by CE-MCP directives for lazy prompt loading within orchestrated workflows.
    private async opLoadPrompt(
      args: Record<string, unknown> | undefined,
      context: SandboxContext
    ): Promise<string> {
      const promptName = args?.['name'] as string;
      const section = args?.['section'] as string | undefined;
    
      if (!promptName) {
        throw new Error('loadPrompt requires "name" argument');
      }
    
      // Check prompt cache with LRU tracking
      const cacheKey = `prompt:${promptName}:${section || 'full'}`;
      const cached = this.promptCache.get(cacheKey);
      if (cached && Date.now() < cached.expiry) {
        cached.lastAccess = Date.now();
        this.cacheHits++;
        return cached.result;
      }
      this.cacheMisses++;
      if (cached) {
        this.promptCache.delete(cacheKey);
      }
    
      // Load prompt from file system
      // In full implementation, would use prompt catalog
      const promptPath = join(context.projectPath, 'src', 'prompts', `${promptName}.ts`);
    
      try {
        const content = await readFile(promptPath, 'utf-8');
    
        // Evict old entries if cache is full
        this.evictOldestEntries(this.promptCache, MAX_PROMPT_CACHE_SIZE);
    
        // Cache the result with LRU tracking
        this.promptCache.set(cacheKey, {
          result: content,
          expiry: Date.now() + this.config.prompts.cacheTTL * 1000,
          lastAccess: Date.now(),
        });
    
        return content;
      } catch {
        // Return placeholder if prompt not found
        return `[Prompt: ${promptName}${section ? `:${section}` : ''}]`;
      }
    }
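`opLoadPrompt` relies on an `evictOldestEntries` helper that is not shown. A plausible sketch of LRU eviction over the same entry shape; the implementation is an assumption:

```typescript
// Cache entry shape mirroring the fields used in opLoadPrompt
interface CacheEntry {
  result: string;
  expiry: number;
  lastAccess: number;
}

// Hypothetical evictOldestEntries: when the cache is at capacity, drop the
// least-recently-accessed entries until there is room for one more
function evictOldestEntries(cache: Map<string, CacheEntry>, maxSize: number): void {
  if (cache.size < maxSize) return;
  // Sort keys by lastAccess ascending so the stalest entries come first
  const byAge = [...cache.entries()].sort((a, b) => a[1].lastAccess - b[1].lastAccess);
  while (cache.size >= maxSize && byAge.length > 0) {
    const [oldestKey] = byAge.shift()!;
    cache.delete(oldestKey);
  }
}
```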
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about being part of a 'lazy loading system that reduces token usage by ~96%', which explains the performance rationale. However, it doesn't disclose other behavioral traits like whether this is a read-only operation, what happens on failure, or if there are rate limits. The description doesn't contradict any annotations since none exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences that efficiently convey purpose and context. The first sentence states the core functionality, while the second provides system context and usage examples. There's no wasted verbiage, though it could be slightly more structured with explicit usage guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters, 100% schema coverage, and no output schema, the description provides adequate but minimal context. It explains the tool's role in a lazy loading system and gives examples of prompt types, but doesn't describe what the tool returns, error conditions, or how it integrates with the broader workflow. Given the lack of annotations and output schema, more behavioral context would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing full documentation of all three parameters. The description doesn't add any parameter-specific semantics beyond what's in the schema: it mentions loading a 'prompt or prompt section', which aligns with the 'promptName' and 'section' parameters, but provides no additional details about parameter usage, constraints, or interactions. With high schema coverage, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Load a specific prompt or prompt section on-demand' with the verb 'load' and resource 'prompt/prompt section'. It distinguishes from siblings by mentioning it's part of a 'lazy loading system' for prompts, but doesn't explicitly differentiate from specific sibling tools like 'generate_adr_bootstrap' or 'create_research_template' that might also involve prompts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context: it's for 'retrieving prompt templates for ADR generation, analysis, deployment, and other operations' and part of a system that 'reduces token usage by ~96%'. However, it doesn't explicitly state when to use this versus alternatives like 'generate_adr_bootstrap' or 'create_research_template', nor does it provide exclusions or prerequisites for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
