# audit_prompts
Analyze prompts found in your codebase, flagging user-facing ones that require security badges and internal ones that only need audit review.
## Instructions
Analyze a list of prompts found in a codebase and categorize them as user-facing (needs badge) or internal (audit only). This tool helps users who already have prompts in their codebase understand which ones should be registered with secure badges vs which are internal-only.
HOW TO USE:

1. First, search the codebase for prompts using patterns like the following (see the sketch after this list):
   - Files matching: `public/PROMPT_*.txt`, `**/prompt*.ts`
   - Code patterns: `'You are a'`, `systemPrompt`, `SYSTEM_PROMPT`, `role: 'system'`
2. Extract the prompt text and file location for each found prompt.
3. Call this tool with the prompts array.
4. Present the audit results to the user, showing:
   - User-facing prompts that should get security badges
   - Internal prompts that are safe but should be audited
   - Prompts needing manual review
5. Ask the user which prompts they want to register for badges.
6. Use `register_secure_prompt` for each selected prompt.
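
For step 1, here is a minimal sketch of what that scan could look like in Node.js/TypeScript. The file extensions, directory filters, and hint patterns are illustrative assumptions, not part of the tool itself:

```typescript
// Hypothetical step-1 scan: walk a source tree and flag files that contain
// prompt-like strings. Extensions and hint patterns are assumptions.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const PROMPT_HINTS = [/systemPrompt\s*[:=]/, /SYSTEM_PROMPT\s*=/, /["'`]You are (a |an )?/];

function findPromptFiles(dir: string, hits: string[] = []): string[] {
  for (const entry of readdirSync(dir)) {
    if (entry === "node_modules" || entry.startsWith(".")) continue; // skip vendored/hidden dirs
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) {
      findPromptFiles(full, hits); // recurse into subdirectories
    } else if (/\.(ts|tsx|txt)$/.test(entry)) {
      const text = readFileSync(full, "utf8");
      if (PROMPT_HINTS.some((p) => p.test(text))) hits.push(full);
    }
  }
  return hits;
}

console.log(findPromptFiles("src"));
```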
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompts | Yes | Array of prompts found in the codebase | |
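
An illustrative arguments object for this schema (the path, line number, and prompt text are made up for the example):

```typescript
// Hypothetical input for audit_prompts; every value here is invented.
const args = {
  prompts: [
    {
      filePath: "public/PROMPT_writer.txt",
      lineNumber: 1,
      promptText: "You are a helpful writing assistant. Follow the user's style guide...",
      // Optional, but helps the classifier decide user-facing vs internal:
      surroundingCode: "<CodeBlock>{promptText}</CodeBlock> // rendered with a copy button",
    },
  ],
};
```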
## Implementation Reference
- `src/index.ts:221-298` (handler): The core handler function. It takes a list of detected prompts from the codebase, analyzes their context (user-facing vs internal) using patterns and file paths, assigns confidence scores and suggested actions, computes statistics, and returns a structured AuditResult with recommendations for registration.

```typescript
async function auditPrompts(args: {
  prompts: Array<{
    filePath: string;
    lineNumber: number;
    promptText: string;
    surroundingCode?: string;
  }>;
}): Promise<AuditResult> {
  const candidates: PromptCandidate[] = [];

  for (const prompt of args.prompts) {
    const surrounding = prompt.surroundingCode || '';
    const filePath = prompt.filePath;

    // Determine context based on file path and surrounding code
    let context: 'user_facing' | 'internal' | 'unknown' = 'unknown';
    let confidence = 50;

    // Check file path patterns
    const isPublicFile = /public\/|PROMPT_.*\.txt/i.test(filePath);
    const isApiFile = /api\/|\.server\.|route\.ts/i.test(filePath);
    const isComponentFile = /components?\/|\.tsx$/i.test(filePath);

    // Check surrounding code patterns
    const hasUserFacingIndicators = USER_FACING_PATTERNS.some(
      p => p.test(surrounding) || p.test(filePath)
    );
    const hasInternalIndicators = INTERNAL_PATTERNS.some(
      p => p.test(surrounding) || p.test(filePath)
    );

    // Determine context
    if (isPublicFile || hasUserFacingIndicators) {
      context = 'user_facing';
      confidence = isPublicFile ? 95 : 75;
    } else if (isApiFile || hasInternalIndicators) {
      context = 'internal';
      confidence = isApiFile ? 90 : 70;
    } else if (isComponentFile) {
      // Components could be either - need review
      context = hasUserFacingIndicators ? 'user_facing' : 'unknown';
      confidence = 60;
    }

    // Determine suggested action
    let suggestedAction: 'register_badge' | 'audit_only' | 'review' = 'review';
    if (context === 'user_facing' && confidence >= 70) {
      suggestedAction = 'register_badge';
    } else if (context === 'internal' && confidence >= 70) {
      suggestedAction = 'audit_only';
    }

    // Create preview (first 100 chars)
    const preview =
      prompt.promptText.substring(0, 100) +
      (prompt.promptText.length > 100 ? '...' : '');

    candidates.push({
      filePath: prompt.filePath,
      lineNumber: prompt.lineNumber,
      promptText: prompt.promptText,
      context,
      confidence,
      suggestedAction,
      preview,
    });
  }

  // Calculate summary stats
  const userFacing = candidates.filter(c => c.context === 'user_facing').length;
  const internal = candidates.filter(c => c.context === 'internal').length;
  const needsReview = candidates.filter(c => c.context === 'unknown' || c.confidence < 70).length;

  const summary = `Found ${candidates.length} prompts: ${userFacing} user-facing (recommend badges), ${internal} internal (audit only), ${needsReview} need manual review.`;

  return {
    totalFound: candidates.length,
    userFacing,
    internal,
    needsReview,
    prompts: candidates,
    summary,
  };
}
```
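
For the hypothetical input shown under Input Schema, the `public/` path matches `isPublicFile`, so the prompt is classified user-facing with confidence 95 and `suggestedAction: 'register_badge'`. The returned structure would look roughly like this (a sketch, not verbatim output):

```typescript
// Approximate AuditResult for the single example prompt above.
const result = {
  totalFound: 1,
  userFacing: 1,
  internal: 0,
  needsReview: 0,
  prompts: [
    {
      filePath: "public/PROMPT_writer.txt",
      lineNumber: 1,
      promptText: "You are a helpful writing assistant. Follow the user's style guide...",
      context: "user_facing",
      confidence: 95,
      suggestedAction: "register_badge",
      // preview equals the full text here because it is under 100 characters
      preview: "You are a helpful writing assistant. Follow the user's style guide...",
    },
  ],
  summary: "Found 1 prompts: 1 user-facing (recommend badges), 0 internal (audit only), 0 need manual review.",
};
```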
- `src/index.ts:473-504` (schema): JSON schema defining the input parameters for the audit_prompts tool: requires a `prompts` array whose items carry `filePath`, `lineNumber`, `promptText`, and an optional `surroundingCode`.

```typescript
inputSchema: {
  type: "object",
  properties: {
    prompts: {
      type: "array",
      description: "Array of prompts found in the codebase",
      items: {
        type: "object",
        properties: {
          filePath: {
            type: "string",
            description: "Path to the file containing the prompt",
          },
          lineNumber: {
            type: "number",
            description: "Line number where the prompt starts",
          },
          promptText: {
            type: "string",
            description: "The full prompt text",
          },
          surroundingCode: {
            type: "string",
            description: "Optional: Code around the prompt (helps determine if user-facing)",
          },
        },
        required: ["filePath", "lineNumber", "promptText"],
      },
    },
  },
  required: ["prompts"],
},
```
- `src/index.ts:455-505` (registration): Registers the audit_prompts tool in the MCP server's listTools response, including the name, a detailed usage description for codebase scanning, and the input schema.

```typescript
{
  name: "audit_prompts",
  description:
    "Analyze a list of prompts found in a codebase and categorize them as user-facing (needs badge) " +
    "or internal (audit only). This tool helps users who already have prompts in their codebase " +
    "understand which ones should be registered with secure badges vs which are internal-only.\n\n" +
    "HOW TO USE:\n" +
    "1. First, search the codebase for prompts using patterns like:\n" +
    "   - Files matching: public/PROMPT_*.txt, **/prompt*.ts\n" +
    "   - Code patterns: 'You are a', 'systemPrompt', 'SYSTEM_PROMPT', role: 'system'\n" +
    "2. Extract the prompt text and file location for each found prompt\n" +
    "3. Call this tool with the prompts array\n" +
    "4. Present the audit results to the user, showing:\n" +
    "   - User-facing prompts that should get security badges\n" +
    "   - Internal prompts that are safe but should be audited\n" +
    "   - Prompts needing manual review\n" +
    "5. Ask the user which prompts they want to register for badges\n" +
    "6. Use register_secure_prompt for each selected prompt",
  inputSchema: {
    type: "object",
    properties: {
      prompts: {
        type: "array",
        description: "Array of prompts found in the codebase",
        items: {
          type: "object",
          properties: {
            filePath: {
              type: "string",
              description: "Path to the file containing the prompt",
            },
            lineNumber: {
              type: "number",
              description: "Line number where the prompt starts",
            },
            promptText: {
              type: "string",
              description: "The full prompt text",
            },
            surroundingCode: {
              type: "string",
              description: "Optional: Code around the prompt (helps determine if user-facing)",
            },
          },
          required: ["filePath", "lineNumber", "promptText"],
        },
      },
    },
    required: ["prompts"],
  },
},
```
- `src/index.ts:573-647` (request handler): Handles an incoming CallToolRequest for `audit_prompts` in the MCP server: validates the arguments, calls the auditPrompts handler, enhances the result with guidance text for the agent, and returns the formatted response.

```typescript
case "audit_prompts": {
  const typedArgs = args as {
    prompts: Array<{
      filePath: string;
      lineNumber: number;
      promptText: string;
      surroundingCode?: string;
    }>;
  };

  if (!typedArgs.prompts || !Array.isArray(typedArgs.prompts)) {
    throw new McpError(ErrorCode.InvalidParams, "prompts array is required");
  }

  if (typedArgs.prompts.length === 0) {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            totalFound: 0,
            userFacing: 0,
            internal: 0,
            needsReview: 0,
            prompts: [],
            summary: "No prompts provided for analysis.",
          }, null, 2),
        },
      ],
    };
  }

  const result = await auditPrompts(typedArgs);

  // Add guidance for the AI agent
  const guidance = `
## Prompt Audit Results

${result.summary}

### Recommended Actions:

**User-Facing Prompts (${result.userFacing}):**
These prompts are likely displayed to users or have copy buttons. Consider registering them with secure badges:
${result.prompts.filter(p => p.context === 'user_facing').map(p =>
  `- ${p.filePath}:${p.lineNumber} - "${p.preview}" (${p.confidence}% confidence)`
).join('\n') || '- None found'}

**Internal Prompts (${result.internal}):**
These appear to be backend/API prompts. They should be secure but don't need public badges:
${result.prompts.filter(p => p.context === 'internal').map(p =>
  `- ${p.filePath}:${p.lineNumber} - "${p.preview}"`
).join('\n') || '- None found'}

**Needs Review (${result.needsReview}):**
These prompts need manual review to determine if they're user-facing:
${result.prompts.filter(p => p.context === 'unknown' || p.confidence < 70).map(p =>
  `- ${p.filePath}:${p.lineNumber} - "${p.preview}" (${p.confidence}% confidence)`
).join('\n') || '- None found'}

### Next Steps:
1. Ask the user: "Would you like to register any of these prompts with security badges?"
2. For user-facing prompts, use register_secure_prompt
3. Show them the badge options (full badge, compact link, icon button)
`;

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify({ ...result, guidance }, null, 2),
      },
    ],
  };
}
```
- `src/index.ts:182-215` (helper): Regular-expression pattern arrays used by the handler to detect prompt-like strings and to classify them as user-facing or internal based on surrounding code and file paths.

```typescript
const PROMPT_PATTERNS = [
  // System prompts
  /system\s*:\s*[`"']([^`"']{50,})[`"']/gi,
  /systemPrompt\s*[:=]\s*[`"']([^`"']{50,})[`"']/gi,
  /SYSTEM_PROMPT\s*=\s*[`"']([^`"']{50,})[`"']/gi,
  // Role definitions
  /["'`]You are (?:a |an )?[^"'`]{30,}["'`]/gi,
  /role\s*:\s*["'`]system["'`][\s\S]{0,50}content\s*:\s*["'`]([^"'`]{50,})["'`]/gi,
  // Prompt variables
  /(?:const|let|var)\s+\w*[Pp]rompt\w*\s*=\s*[`"']([^`"']{50,})[`"']/gi,
  // Template literals with instructions
  /`[^`]*(?:instructions?|guidelines?|rules?)[^`]*`/gi,
];

// Patterns suggesting user-facing (copy button, displayed to user)
const USER_FACING_PATTERNS = [
  /copy.*button|copyable|clipboard/i,
  /onClick.*copy|handleCopy/i,
  /data-prompt|promptText.*prop/i,
  /user.*can.*copy|copy.*to.*clipboard/i,
  /<pre>|<code>|CodeBlock/i,
  /PROMPT_.*\.txt|public\/.*prompt/i,
];

// Patterns suggesting internal-only
const INTERNAL_PATTERNS = [
  /api\/|server|backend/i,
  /process\.env|getServerSide/i,
  /internal|private|system/i,
  /\.server\.|route\.ts|api\//i,
];
```
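
A quick illustration of how these arrays feed the classifier (the test strings below are hypothetical):

```typescript
// A copy-button/CodeBlock hint in the surrounding code marks a prompt user-facing.
USER_FACING_PATTERNS.some((p) => p.test("<CodeBlock>{prompt}</CodeBlock> copy to clipboard")); // true

// An API route path marks a prompt internal.
INTERNAL_PATTERNS.some((p) => p.test("app/api/chat/route.ts")); // true
```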