
load_prompt

Retrieve prompt templates for ADR generation and analysis on-demand, reducing token usage by loading only when needed.

Instructions

Load a specific prompt, or a single section of a prompt, on demand. Part of the CE-MCP lazy-loading system, which reduces token usage by ~96% by loading prompts only when they are needed. Use this to retrieve prompt templates for ADR generation, analysis, deployment, and other operations.

Input Schema

promptName (required)
    Name of the prompt to load (e.g., "adr-suggestion", "deployment-analysis", "environment-analysis", "research-question", "rule-generation", "analysis", "security").
section (optional)
    Specific section within the prompt to load. If omitted, the entire prompt is loaded. Available sections depend on the prompt.
estimateOnly (optional)
    If true, returns only a token estimate without loading the full prompt content.
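The parameter combinations above can be sketched as typed argument objects. The `LoadPromptArgs` type itself is illustrative (derived from the schema, not part of the server's public API); the property names and example prompt/section names come from the schema and catalog on this page.

```typescript
// Argument shape implied by the input schema. The type name is hypothetical;
// the field names are taken from the schema above.
type LoadPromptArgs = {
  promptName: string; // required; must be one of the catalog prompt names
  section?: string; // optional; load just one named section
  estimateOnly?: boolean; // optional; return a token estimate only
};

// Load the entire deployment-analysis prompt:
const full: LoadPromptArgs = { promptName: 'deployment-analysis' };

// Load only its rollback_plan section:
const partial: LoadPromptArgs = {
  promptName: 'deployment-analysis',
  section: 'rollback_plan',
};

// Check the token cost before committing to a full load:
const estimate: LoadPromptArgs = {
  promptName: 'deployment-analysis',
  estimateOnly: true,
};
```

The exact request envelope (e.g., a `tools/call` message) depends on your MCP client; only the `arguments` payload is shown here.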

Implementation Reference

  • Registration of the 'load_prompt' MCP tool in the central TOOL_CATALOG, including metadata, description, input schema, and CE-MCP support.
    TOOL_CATALOG.set('load_prompt', {
      name: 'load_prompt',
      shortDescription: 'Load prompts on-demand (CE-MCP)',
      fullDescription:
        'Loads prompts on-demand instead of eagerly loading all prompts at startup. Part of CE-MCP lazy loading system that reduces token usage by ~96%.',
      category: 'utility',
      complexity: 'simple',
      tokenCost: { min: 100, max: 500 },
      hasCEMCPDirective: true,
      relatedTools: ['search_tools', 'analyze_project_ecosystem'],
      keywords: ['prompt', 'load', 'lazy', 'ce-mcp', 'token', 'optimization'],
      requiresAI: false,
      inputSchema: {
        type: 'object',
        properties: {
          promptName: {
            type: 'string',
            description: 'Name of the prompt to load',
            enum: [
              'adr-suggestion',
              'deployment-analysis',
              'environment-analysis',
              'research-question',
              'rule-generation',
              'analysis',
              'research-integration',
              'validated-pattern',
              'security',
            ],
          },
          section: {
            type: 'string',
            description: 'Specific section within the prompt to load',
          },
          estimateOnly: {
            type: 'boolean',
            description: 'Return only token estimate without loading content',
          },
        },
        required: ['promptName'],
      },
    });
  • Primary handler logic for loading prompts on-demand with caching, section extraction, and metadata validation using PROMPT_CATALOG. This implements the core functionality of the load_prompt tool.
    async loadPrompt(promptName: string, section?: string): Promise<string> {
      const metadata = PROMPT_CATALOG[promptName];
      if (!metadata) {
        throw new Error(`Unknown prompt: ${promptName}`);
      }

      // Check cache
      const cacheKey = promptName;
      const cached = this.cache.get(cacheKey);
      if (cached && Date.now() < cached.expiry) {
        if (section) {
          const sectionContent = cached.sections.get(section);
          if (sectionContent) {
            return sectionContent;
          }
        }
        return cached.content;
      }

      // Load prompt dynamically
      const content = await this.loadPromptFile(metadata.file);

      // Parse sections if needed
      const sections = this.parseSections(content, metadata.sections);

      // Cache the result
      this.cache.set(cacheKey, {
        content,
        sections,
        expiry: Date.now() + this.cacheTTL * 1000,
      });

      if (section) {
        const sectionContent = sections.get(section);
        if (sectionContent) {
          return sectionContent;
        }
        throw new Error(`Section '${section}' not found in prompt '${promptName}'`);
      }

      return content;
    }
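The handler above delegates section splitting to `parseSections`, whose body is not shown here. A minimal sketch, assuming each section in a prompt file begins with a `## <section_name>` marker line — the delimiter format is an assumption, not confirmed by the source:

```typescript
// Hypothetical sketch of parseSections: split prompt content into named
// sections. Assumes sections start with a "## <section_name>" marker line;
// the real delimiter format may differ.
function parseSections(content: string, sectionNames: string[]): Map<string, string> {
  const sections = new Map<string, string>();
  let current: string | null = null;
  let buffer: string[] = [];

  // Store the accumulated lines under the current section name.
  const flush = () => {
    if (current !== null) sections.set(current, buffer.join('\n').trim());
    buffer = [];
  };

  for (const line of content.split('\n')) {
    const match = line.match(/^##\s+(\S+)/);
    if (match && sectionNames.includes(match[1])) {
      flush();
      current = match[1];
    } else if (current !== null) {
      buffer.push(line);
    }
  }
  flush();
  return sections;
}
```

Returning a `Map` keyed by section name matches how the handler above resolves `sections.get(section)` on both the cached and freshly loaded paths.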
  • PROMPT_CATALOG constant providing metadata for all available prompts, including file paths, token estimates, categories, sections, and load-on-demand flags. Essential helper for prompt loading.
    export const PROMPT_CATALOG: PromptCatalog = {
      // ADR-related prompts
      'adr-suggestion': {
        file: 'adr-suggestion-prompts.ts',
        tokens: 1830,
        category: 'adr',
        sections: [
          'implicit_decisions',
          'tech_debt',
          'security_decisions',
          'cross_cutting',
          'recommendation_template',
        ],
        loadOnDemand: true,
      },
      // Deployment analysis prompts
      'deployment-analysis': {
        file: 'deployment-analysis-prompts.ts',
        tokens: 3150,
        category: 'deployment',
        sections: [
          'readiness_check',
          'validation_criteria',
          'rollback_plan',
          'infrastructure_review',
          'security_scan',
        ],
        loadOnDemand: true,
      },
      // Environment analysis prompts
      'environment-analysis': {
        file: 'environment-analysis-prompts.ts',
        tokens: 3050,
        category: 'analysis',
        sections: ['dependency_scan', 'config_validation', 'compliance_check', 'resource_assessment'],
        loadOnDemand: true,
      },
      // Research question prompts
      'research-question': {
        file: 'research-question-prompts.ts',
        tokens: 3120,
        category: 'research',
        sections: ['question_generation', 'research_plan', 'source_evaluation', 'synthesis_template'],
        loadOnDemand: true,
      },
      // Rule generation prompts
      'rule-generation': {
        file: 'rule-generation-prompts.ts',
        tokens: 2850,
        category: 'rules',
        sections: ['rule_template', 'validation_rules', 'enforcement_policy', 'exception_handling'],
        loadOnDemand: true,
      },
      // General analysis prompts
      analysis: {
        file: 'analysis-prompts.ts',
        tokens: 2310,
        category: 'analysis',
        sections: ['project_analysis', 'code_review', 'architecture_assessment', 'quality_metrics'],
        loadOnDemand: true,
      },
      // Research integration prompts
      'research-integration': {
        file: 'research-integration-prompts.ts',
        tokens: 1785,
        category: 'research',
        sections: ['integration_strategy', 'synthesis_plan', 'recommendation_format'],
        loadOnDemand: true,
      },
      // Validated pattern prompts
      'validated-pattern': {
        file: 'validated-pattern-prompts.ts',
        tokens: 1565,
        category: 'deployment',
        sections: ['pattern_detection', 'validation_criteria', 'deployment_guidance'],
        loadOnDemand: true,
      },
      // Security prompts
      security: {
        file: 'security-prompts.ts',
        tokens: 1270,
        category: 'security',
        sections: ['vulnerability_scan', 'masking_rules', 'compliance_check'],
        loadOnDemand: true,
      },
      // Main index (orchestration)
      index: {
        file: 'index.ts',
        tokens: 3700,
        category: 'analysis',
        sections: ['orchestration', 'execution_flow', 'error_handling'],
        dependencies: ['adr-suggestion', 'deployment-analysis', 'analysis'],
        loadOnDemand: false, // Core orchestration loaded at startup
      },
    };
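The `estimateOnly` path can be served from this catalog metadata alone, without touching the file system. A sketch — the `estimateTokens` helper is hypothetical, but the per-prompt token figures are taken from the catalog above:

```typescript
// Per-prompt token estimates, copied from the PROMPT_CATALOG entries above
// (the index entry is excluded because it is loaded eagerly at startup).
const PROMPT_TOKENS: Record<string, number> = {
  'adr-suggestion': 1830,
  'deployment-analysis': 3150,
  'environment-analysis': 3050,
  'research-question': 3120,
  'rule-generation': 2850,
  analysis: 2310,
  'research-integration': 1785,
  'validated-pattern': 1565,
  security: 1270,
};

// Hypothetical helper for the estimateOnly path: answer from metadata only.
function estimateTokens(promptName: string): number {
  const tokens = PROMPT_TOKENS[promptName];
  if (tokens === undefined) throw new Error(`Unknown prompt: ${promptName}`);
  return tokens;
}

// Eager loading pays for every on-demand prompt up front (20930 tokens with
// the figures above); lazy loading pays only for the prompt requested.
const eagerCost = Object.values(PROMPT_TOKENS).reduce((a, b) => a + b, 0);
const lazyCost = estimateTokens('security'); // 1270
```

With these catalog figures, requesting only the security prompt costs 1270 tokens versus 20930 for eager loading; the ~96% figure quoted elsewhere on this page presumably reflects the full prompt set in the server.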
  • Input schema definition for the load_prompt tool, specifying parameters like promptName (with enum), section, and estimateOnly.
    inputSchema: {
      type: 'object',
      properties: {
        promptName: {
          type: 'string',
          description: 'Name of the prompt to load',
          enum: [
            'adr-suggestion',
            'deployment-analysis',
            'environment-analysis',
            'research-question',
            'rule-generation',
            'analysis',
            'research-integration',
            'validated-pattern',
            'security',
          ],
        },
        section: {
          type: 'string',
          description: 'Specific section within the prompt to load',
        },
        estimateOnly: {
          type: 'boolean',
          description: 'Return only token estimate without loading content',
        },
      },
      required: ['promptName'],
    },
  • Internal sandbox operation handler opLoadPrompt used by CE-MCP directives for lazy prompt loading within orchestrated workflows.
    private async opLoadPrompt(
      args: Record<string, unknown> | undefined,
      context: SandboxContext
    ): Promise<string> {
      const promptName = args?.['name'] as string;
      const section = args?.['section'] as string | undefined;

      if (!promptName) {
        throw new Error('loadPrompt requires "name" argument');
      }

      // Check prompt cache with LRU tracking
      const cacheKey = `prompt:${promptName}:${section || 'full'}`;
      const cached = this.promptCache.get(cacheKey);
      if (cached && Date.now() < cached.expiry) {
        cached.lastAccess = Date.now();
        this.cacheHits++;
        return cached.result;
      }
      this.cacheMisses++;
      if (cached) {
        this.promptCache.delete(cacheKey);
      }

      // Load prompt from file system
      // In full implementation, would use prompt catalog
      const promptPath = join(context.projectPath, 'src', 'prompts', `${promptName}.ts`);
      try {
        const content = await readFile(promptPath, 'utf-8');

        // Evict old entries if cache is full
        this.evictOldestEntries(this.promptCache, MAX_PROMPT_CACHE_SIZE);

        // Cache the result with LRU tracking
        this.promptCache.set(cacheKey, {
          result: content,
          expiry: Date.now() + this.config.prompts.cacheTTL * 1000,
          lastAccess: Date.now(),
        });

        return content;
      } catch {
        // Return placeholder if prompt not found
        return `[Prompt: ${promptName}${section ? `:${section}` : ''}]`;
      }
    }
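The sandbox handler above calls `evictOldestEntries` before inserting into a full cache, but that method is not shown. A minimal LRU-eviction sketch, assuming cache entries carry the `lastAccess` timestamp seen in the handler — the method body here is an assumption, not the server's actual implementation:

```typescript
// Entry shape matching what opLoadPrompt stores in the prompt cache.
interface CacheEntry {
  result: string;
  expiry: number;
  lastAccess: number;
}

// Hypothetical sketch of evictOldestEntries: while the cache is at or over
// its size limit, drop the least-recently-accessed entry so a new insert fits.
function evictOldestEntries(cache: Map<string, CacheEntry>, maxSize: number): void {
  while (cache.size >= maxSize) {
    let oldestKey: string | undefined;
    let oldestAccess = Infinity;
    for (const [key, entry] of cache) {
      if (entry.lastAccess < oldestAccess) {
        oldestAccess = entry.lastAccess;
        oldestKey = key;
      }
    }
    if (oldestKey === undefined) break;
    cache.delete(oldestKey);
  }
}
```

A linear scan per eviction is fine for a small, bounded prompt cache; a larger cache would typically use insertion-order tricks with `Map` or a doubly linked list instead.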

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/tosin2013/mcp-adr-analysis-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.