# audit_prompts

Analyzes prompts found in a codebase and categorizes them as user-facing (needs a security badge) or internal (audit only), identifying which prompts require registration.

## Instructions

Analyze a list of prompts found in a codebase and categorize them as user-facing (needs a badge) or internal (audit only). This tool helps users who already have prompts in their codebase understand which ones should be registered with secure badges and which are internal-only.
**HOW TO USE:**

1. First, search the codebase for prompts using patterns like:
   - Files matching: `public/PROMPT_*.txt`, `**/prompt*.ts`
   - Code patterns: `'You are a'`, `systemPrompt`, `SYSTEM_PROMPT`, `role: 'system'`
2. Extract the prompt text and file location for each found prompt.
3. Call this tool with the `prompts` array.
4. Present the audit results to the user, showing:
   - User-facing prompts that should get security badges
   - Internal prompts that are safe but should be audited
   - Prompts needing manual review
5. Ask the user which prompts they want to register for badges.
6. Use `register_secure_prompt` for each selected prompt.
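Steps 1–2 can be sketched as a small scanner. This is a minimal illustration, not part of the tool: the helper name `findPrompts`, the single `You are a` pattern, and the sample source string are all assumptions for the example.

```typescript
// Minimal sketch of steps 1-2: scan source text for "You are a" style
// prompts and record each match with its line number. The helper name
// findPrompts and the sample below are illustrative; a real search
// would apply all of the patterns listed above.
interface FoundPrompt {
  filePath: string;
  lineNumber: number;
  promptText: string;
}

function findPrompts(filePath: string, source: string): FoundPrompt[] {
  const pattern = /["'`](You are (?:a |an )?[^"'`]{10,})["'`]/g;
  const found: FoundPrompt[] = [];
  for (const match of source.matchAll(pattern)) {
    // Line number = count of lines up to and including the match start.
    const lineNumber = source.slice(0, match.index ?? 0).split("\n").length;
    found.push({ filePath, lineNumber, promptText: match[1] });
  }
  return found;
}

const sample = `const greeting = "hello";
const systemPrompt = "You are a helpful coding assistant.";`;
const results = findPrompts("src/agent.ts", sample);
// results[0] -> { filePath: "src/agent.ts", lineNumber: 2, promptText: "You are a helpful coding assistant." }
```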
## Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| prompts | Yes | Array of prompts found in the codebase | |
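A minimal example of valid arguments; the file paths, line numbers, and prompt text below are made up for illustration.

```typescript
// Illustrative arguments for audit_prompts. All values are invented
// for the example. surroundingCode is optional but improves
// classification accuracy when provided.
const args = {
  prompts: [
    {
      filePath: "public/PROMPT_support.txt",
      lineNumber: 1,
      promptText: "You are a friendly support assistant...",
    },
    {
      filePath: "src/api/summarize.ts",
      lineNumber: 42,
      promptText: "You are a summarization engine...",
      surroundingCode: "const response = await openai.chat.completions.create(...)",
    },
  ],
};
```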
## Implementation Reference
- **src/index.ts:221-298 (handler):** The main handler function `auditPrompts` takes an array of prompts found in the codebase and categorizes each as user-facing (recommend registering with a badge), internal (audit only), or needing review, based on file paths, surrounding code patterns, and confidence scores. Returns a structured `AuditResult`.

```typescript
async function auditPrompts(args: {
  prompts: Array<{
    filePath: string;
    lineNumber: number;
    promptText: string;
    surroundingCode?: string;
  }>;
}): Promise<AuditResult> {
  const candidates: PromptCandidate[] = [];

  for (const prompt of args.prompts) {
    const surrounding = prompt.surroundingCode || '';
    const filePath = prompt.filePath;

    // Determine context based on file path and surrounding code
    let context: 'user_facing' | 'internal' | 'unknown' = 'unknown';
    let confidence = 50;

    // Check file path patterns
    const isPublicFile = /public\/|PROMPT_.*\.txt/i.test(filePath);
    const isApiFile = /api\/|\.server\.|route\.ts/i.test(filePath);
    const isComponentFile = /components?\/|\.tsx$/i.test(filePath);

    // Check surrounding code patterns
    const hasUserFacingIndicators = USER_FACING_PATTERNS.some(p => p.test(surrounding) || p.test(filePath));
    const hasInternalIndicators = INTERNAL_PATTERNS.some(p => p.test(surrounding) || p.test(filePath));

    // Determine context
    if (isPublicFile || hasUserFacingIndicators) {
      context = 'user_facing';
      confidence = isPublicFile ? 95 : 75;
    } else if (isApiFile || hasInternalIndicators) {
      context = 'internal';
      confidence = isApiFile ? 90 : 70;
    } else if (isComponentFile) {
      // Components could be either - need review
      context = hasUserFacingIndicators ? 'user_facing' : 'unknown';
      confidence = 60;
    }

    // Determine suggested action
    let suggestedAction: 'register_badge' | 'audit_only' | 'review' = 'review';
    if (context === 'user_facing' && confidence >= 70) {
      suggestedAction = 'register_badge';
    } else if (context === 'internal' && confidence >= 70) {
      suggestedAction = 'audit_only';
    }

    // Create preview (first 100 chars)
    const preview = prompt.promptText.substring(0, 100) + (prompt.promptText.length > 100 ? '...' : '');

    candidates.push({
      filePath: prompt.filePath,
      lineNumber: prompt.lineNumber,
      promptText: prompt.promptText,
      context,
      confidence,
      suggestedAction,
      preview,
    });
  }

  // Calculate summary stats
  const userFacing = candidates.filter(c => c.context === 'user_facing').length;
  const internal = candidates.filter(c => c.context === 'internal').length;
  const needsReview = candidates.filter(c => c.context === 'unknown' || c.confidence < 70).length;

  const summary = `Found ${candidates.length} prompts: ${userFacing} user-facing (recommend badges), ` +
    `${internal} internal (audit only), ${needsReview} need manual review.`;

  return {
    totalFound: candidates.length,
    userFacing,
    internal,
    needsReview,
    prompts: candidates,
    summary,
  };
}
```
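As a standalone illustration of the file-path heuristics in the handler, a simplified sketch (the function name `classifyByPath` and the sample paths are invented; the surrounding-code pattern checks are omitted):

```typescript
// Simplified sketch of the file-path classification used by the
// handler. classifyByPath is an illustrative name; only the path
// regexes are reproduced, not the surrounding-code indicator checks.
type Context = 'user_facing' | 'internal' | 'unknown';

function classifyByPath(filePath: string): { context: Context; confidence: number } {
  if (/public\/|PROMPT_.*\.txt/i.test(filePath)) {
    return { context: 'user_facing', confidence: 95 }; // public assets are shown to users
  }
  if (/api\/|\.server\.|route\.ts/i.test(filePath)) {
    return { context: 'internal', confidence: 90 }; // backend/API code
  }
  if (/components?\/|\.tsx$/i.test(filePath)) {
    return { context: 'unknown', confidence: 60 }; // components could be either
  }
  return { context: 'unknown', confidence: 50 }; // default: needs review
}
```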
- **src/index.ts:473-504 (schema):** Input schema for the `audit_prompts` tool: an object with a `prompts` array whose items have `filePath`, `lineNumber`, `promptText`, and optional `surroundingCode`.

```typescript
inputSchema: {
  type: "object",
  properties: {
    prompts: {
      type: "array",
      description: "Array of prompts found in the codebase",
      items: {
        type: "object",
        properties: {
          filePath: {
            type: "string",
            description: "Path to the file containing the prompt",
          },
          lineNumber: {
            type: "number",
            description: "Line number where the prompt starts",
          },
          promptText: {
            type: "string",
            description: "The full prompt text",
          },
          surroundingCode: {
            type: "string",
            description: "Optional: Code around the prompt (helps determine if user-facing)",
          },
        },
        required: ["filePath", "lineNumber", "promptText"],
      },
    },
  },
  required: ["prompts"],
},
```
- **src/index.ts:455-505 (registration):** Registration of the `audit_prompts` tool in the MCP server's list of tools, including the name, a detailed usage description, and the input schema.

```typescript
{
  name: "audit_prompts",
  description:
    "Analyze a list of prompts found in a codebase and categorize them as user-facing (needs badge) " +
    "or internal (audit only). This tool helps users who already have prompts in their codebase " +
    "understand which ones should be registered with secure badges vs which are internal-only.\n\n" +
    "HOW TO USE:\n" +
    "1. First, search the codebase for prompts using patterns like:\n" +
    "   - Files matching: public/PROMPT_*.txt, **/prompt*.ts\n" +
    "   - Code patterns: 'You are a', 'systemPrompt', 'SYSTEM_PROMPT', role: 'system'\n" +
    "2. Extract the prompt text and file location for each found prompt\n" +
    "3. Call this tool with the prompts array\n" +
    "4. Present the audit results to the user, showing:\n" +
    "   - User-facing prompts that should get security badges\n" +
    "   - Internal prompts that are safe but should be audited\n" +
    "   - Prompts needing manual review\n" +
    "5. Ask the user which prompts they want to register for badges\n" +
    "6. Use register_secure_prompt for each selected prompt",
  inputSchema: {
    type: "object",
    properties: {
      prompts: {
        type: "array",
        description: "Array of prompts found in the codebase",
        items: {
          type: "object",
          properties: {
            filePath: {
              type: "string",
              description: "Path to the file containing the prompt",
            },
            lineNumber: {
              type: "number",
              description: "Line number where the prompt starts",
            },
            promptText: {
              type: "string",
              description: "The full prompt text",
            },
            surroundingCode: {
              type: "string",
              description: "Optional: Code around the prompt (helps determine if user-facing)",
            },
          },
          required: ["filePath", "lineNumber", "promptText"],
        },
      },
    },
    required: ["prompts"],
  },
},
```
- **src/index.ts:573-647 (handler):** The MCP tool-call handler case for `audit_prompts`: validates input, calls `auditPrompts`, appends guidance on next steps, and formats the response.

```typescript
case "audit_prompts": {
  const typedArgs = args as {
    prompts: Array<{
      filePath: string;
      lineNumber: number;
      promptText: string;
      surroundingCode?: string;
    }>;
  };

  if (!typedArgs.prompts || !Array.isArray(typedArgs.prompts)) {
    throw new McpError(ErrorCode.InvalidParams, "prompts array is required");
  }

  if (typedArgs.prompts.length === 0) {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            totalFound: 0,
            userFacing: 0,
            internal: 0,
            needsReview: 0,
            prompts: [],
            summary: "No prompts provided for analysis.",
          }, null, 2),
        },
      ],
    };
  }

  const result = await auditPrompts(typedArgs);

  // Add guidance for the AI agent
  const guidance = `
## Prompt Audit Results

${result.summary}

### Recommended Actions:

**User-Facing Prompts (${result.userFacing}):**
These prompts are likely displayed to users or have copy buttons. Consider registering them with secure badges:
${result.prompts.filter(p => p.context === 'user_facing').map(p =>
  `- ${p.filePath}:${p.lineNumber} - "${p.preview}" (${p.confidence}% confidence)`
).join('\n') || '- None found'}

**Internal Prompts (${result.internal}):**
These appear to be backend/API prompts. They should be secure but don't need public badges:
${result.prompts.filter(p => p.context === 'internal').map(p =>
  `- ${p.filePath}:${p.lineNumber} - "${p.preview}"`
).join('\n') || '- None found'}

**Needs Review (${result.needsReview}):**
These prompts need manual review to determine if they're user-facing:
${result.prompts.filter(p => p.context === 'unknown' || p.confidence < 70).map(p =>
  `- ${p.filePath}:${p.lineNumber} - "${p.preview}" (${p.confidence}% confidence)`
).join('\n') || '- None found'}

### Next Steps:
1. Ask the user: "Would you like to register any of these prompts with security badges?"
2. For user-facing prompts, use register_secure_prompt
3. Show them the badge options (full badge, compact link, icon button)
`;

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify({ ...result, guidance }, null, 2),
      },
    ],
  };
}
```
- **src/index.ts:182-197 (helper):** `PROMPT_PATTERNS`, a regex array for identifying potential prompts in code. Note that these patterns back the search instructions rather than being called by the handler directly; categorization itself uses `USER_FACING_PATTERNS` (src/index.ts:200-207) and `INTERNAL_PATTERNS` (src/index.ts:210-215).

```typescript
const PROMPT_PATTERNS = [
  // System prompts
  /system\s*:\s*[`"']([^`"']{50,})[`"']/gi,
  /systemPrompt\s*[:=]\s*[`"']([^`"']{50,})[`"']/gi,
  /SYSTEM_PROMPT\s*=\s*[`"']([^`"']{50,})[`"']/gi,
  // Role definitions
  /["'`]You are (?:a |an )?[^"'`]{30,}["'`]/gi,
  /role\s*:\s*["'`]system["'`][\s\S]{0,50}content\s*:\s*["'`]([^"'`]{50,})["'`]/gi,
  // Prompt variables
  /(?:const|let|var)\s+\w*[Pp]rompt\w*\s*=\s*[`"']([^`"']{50,})[`"']/gi,
  // Template literals with instructions
  /`[^`]*(?:instructions?|guidelines?|rules?)[^`]*`/gi,
];
```
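As a quick check of how one of these patterns fires, a sketch; the sample source string below is invented for the example.

```typescript
// Apply the systemPrompt pattern from PROMPT_PATTERNS to a sample
// source string. The sample code is illustrative. Note the {50,}
// quantifier: prompts shorter than 50 characters are not matched.
const systemPromptPattern = /systemPrompt\s*[:=]\s*[`"']([^`"']{50,})[`"']/gi;

const code = `const systemPrompt = "You are a meticulous code reviewer. Always explain findings clearly.";`;

const matches = [...code.matchAll(systemPromptPattern)];
// matches[0][1] holds the captured prompt body.
```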