
analyze_content_security

Detect sensitive information in code, documentation, logs, and configurations using AI-powered analysis with optional pattern learning and security expertise enhancement.

Instructions

Analyze content for sensitive information using AI-powered detection with optional memory integration for security pattern learning

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| content | Yes | Content to analyze for sensitive information | |
| contentType | No | Type of content being analyzed (`code`, `documentation`, `configuration`, `logs`, or `general`) | general |
| userDefinedPatterns | No | User-defined sensitive patterns to detect | |
| enableMemoryIntegration | No | Enable memory entity storage for security pattern learning and institutional knowledge building | |
| knowledgeEnhancement | No | Enable Generated Knowledge Prompting for security and privacy expertise | |
| enhancedMode | No | Enable advanced prompting features | |
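A typical argument payload for this tool might look like the following sketch. Only the parameter names and allowed values come from the schema; the concrete content and pattern strings are illustrative:

```typescript
// Illustrative argument object for the analyze_content_security tool.
// How it is sent depends on your MCP client library; this only shows the shape.
const args = {
  content: 'const apiKey = "sk-test-1234";',        // content to scan
  contentType: 'code' as const,                     // code | documentation | configuration | logs | general
  userDefinedPatterns: ['internal-project-\\d+'],   // optional extra patterns to flag
  enableMemoryIntegration: false,                   // skip persistence for a one-off scan
  knowledgeEnhancement: true,                       // enable GKP security expertise
  enhancedMode: true,                               // enable advanced prompting features
};

console.log(JSON.stringify(args));
```

All fields except `content` are optional and fall back to server-side defaults when omitted.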

Implementation Reference

  • The core handler function that implements the analyze_content_security tool. Performs AI-powered security analysis on provided content, integrates tree-sitter parsing, generated knowledge prompting (GKP), memory storage for security patterns, and returns comprehensive analysis results.
    export async function analyzeContentSecurity(args: {
      content: string;
      contentType?: 'code' | 'documentation' | 'configuration' | 'logs' | 'general';
      userDefinedPatterns?: string[];
      knowledgeEnhancement?: boolean; // Enable GKP for security and privacy knowledge
      enhancedMode?: boolean; // Enable advanced prompting features
      enableMemoryIntegration?: boolean; // Enable memory entity storage
      enableTreeSitterAnalysis?: boolean; // Enable tree-sitter for enhanced code analysis
    }): Promise<any> {
      const {
        content,
        contentType = 'general',
        userDefinedPatterns,
        knowledgeEnhancement = getKnowledgeEnhancementDefault(), // Environment-aware default
        enhancedMode = getEnhancedModeDefault(), // Environment-aware default
        enableMemoryIntegration = getMemoryIntegrationDefault(), // Environment-aware default
        enableTreeSitterAnalysis = true, // Default to tree-sitter enabled
      } = args;

      try {
        const { analyzeSensitiveContent } = await import('../utils/content-masking.js');

        if (!content || content.trim().length === 0) {
          throw new McpAdrError('Content is required for security analysis', 'INVALID_INPUT');
        }

        // Initialize memory manager if enabled
        let securityMemoryManager: SecurityMemoryManager | null = null;
        if (enableMemoryIntegration) {
          securityMemoryManager = new SecurityMemoryManager();
          await securityMemoryManager.initialize();
        }

        // Perform tree-sitter analysis for enhanced security detection
        const treeSitterFindings: any[] = [];
        let treeSitterContext = '';

        if (enableTreeSitterAnalysis && contentType === 'code') {
          try {
            const analyzer = new TreeSitterAnalyzer();

            // Create a temporary file to analyze the content
            const { writeFileSync, unlinkSync } = await import('fs');
            const { join } = await import('path');
            const { tmpdir } = await import('os');

            // Determine file extension based on content patterns
            let extension = '.txt';
            if (
              content.includes('import ') ||
              content.includes('export ') ||
              content.includes('function ')
            ) {
              extension =
                content.includes('interface ') || content.includes(': string') ? '.ts' : '.js';
            } else if (content.includes('def ') || content.includes('import ')) {
              extension = '.py';
            } else if (content.includes('apiVersion:') || content.includes('kind:')) {
              extension = '.yaml';
            } else if (content.includes('resource ') || content.includes('provider ')) {
              extension = '.tf';
            }

            const tempFile = join(tmpdir(), `content-analysis-${Date.now()}${extension}`);
            writeFileSync(tempFile, content);

            try {
              const analysis = await analyzer.analyzeFile(tempFile);

              // Extract security-relevant findings
              if (analysis.hasSecrets && analysis.secrets.length > 0) {
                analysis.secrets.forEach(secret => {
                  treeSitterFindings.push({
                    type: 'secret',
                    category: secret.type,
                    content: secret.value,
                    confidence: secret.confidence,
                    severity:
                      secret.confidence > 0.8 ? 'high' : secret.confidence > 0.6 ? 'medium' : 'low',
                    location: secret.location,
                    context: secret.context,
                    source: 'tree-sitter',
                  });
                });
              }

              // Security issues
              if (analysis.securityIssues && analysis.securityIssues.length > 0) {
                analysis.securityIssues.forEach(issue => {
                  treeSitterFindings.push({
                    type: 'security_issue',
                    category: issue.type,
                    content: issue.message,
                    confidence: 0.9,
                    severity: issue.severity,
                    location: issue.location,
                    context: issue.suggestion,
                    source: 'tree-sitter',
                  });
                });
              }

              // Dangerous imports
              if (analysis.imports) {
                analysis.imports.forEach(imp => {
                  if (imp.isDangerous) {
                    treeSitterFindings.push({
                      type: 'dangerous_import',
                      category: 'import',
                      content: imp.module,
                      confidence: 0.8,
                      severity: 'medium',
                      location: imp.location,
                      context: imp.reason || 'Potentially dangerous import detected',
                      source: 'tree-sitter',
                    });
                  }
                });
              }

              if (treeSitterFindings.length > 0) {
                treeSitterContext = `\n## 🔍 Tree-sitter Enhanced Analysis\n\n**Detected ${treeSitterFindings.length} security findings:**\n${treeSitterFindings.map(f => `- **${f.type}**: ${f.content} (${f.severity} confidence)`).join('\n')}\n\n---\n`;
              }
            } finally {
              // Clean up temp file
              try {
                unlinkSync(tempFile);
              } catch {
                // Ignore cleanup errors
              }
            }
          } catch (error) {
            console.warn('Tree-sitter analysis failed, continuing with standard analysis:', error);
          }
        }

        let enhancedPrompt = '';
        let knowledgeContext = '';

        // Generate security and privacy knowledge if enabled
        if (enhancedMode && knowledgeEnhancement) {
          try {
            const { generateArchitecturalKnowledge } = await import(
              '../utils/knowledge-generation.js'
            );
            const knowledgeResult = await generateArchitecturalKnowledge(
              {
                projectPath: process.cwd(),
                technologies: [],
                patterns: [],
                projectType: 'security-content-analysis',
              },
              {
                domains: ['security-patterns'],
                depth: 'intermediate',
                cacheEnabled: true,
              }
            );

            knowledgeContext = `\n## Security & Privacy Knowledge Enhancement\n\n${knowledgeResult.prompt}\n\n---\n`;
          } catch (error) {
            console.error(
              '[WARNING] GKP knowledge generation failed for content security analysis:',
              error
            );
            knowledgeContext = '<!-- Security knowledge generation unavailable -->\n';
          }
        }

        const result = await analyzeSensitiveContent(content, contentType, userDefinedPatterns);
        enhancedPrompt = knowledgeContext + result.analysisPrompt;

        // Execute the security analysis with AI if enabled, otherwise return prompt
        const { executePromptWithFallback, formatMCPResponse } = await import(
          '../utils/prompt-execution.js'
        );
        const executionResult = await executePromptWithFallback(enhancedPrompt, result.instructions, {
          temperature: 0.1,
          maxTokens: 4000,
          systemPrompt: `You are a cybersecurity expert specializing in sensitive information detection.
    Analyze the provided content to identify potential security risks, secrets, and sensitive data.
    Leverage the provided cybersecurity and data privacy knowledge to create comprehensive, industry-standard analysis.
    Provide detailed findings with confidence scores and practical remediation recommendations.
    Consider regulatory compliance requirements, data classification standards, and modern security practices.
    Focus on actionable security insights that can prevent data exposure and ensure compliance.`,
          responseFormat: 'text',
        });

        if (executionResult.isAIGenerated) {
          // Memory integration: store security patterns and analysis results
          let memoryIntegrationInfo = '';
          if (securityMemoryManager) {
            try {
              // Extract patterns from AI analysis (simplified parsing)
              const detectedPatterns = parseDetectedPatterns(
                executionResult.content,
                userDefinedPatterns
              );

              const maskingResults = {
                strategy: 'analysis-only',
                securityScore: calculateSecurityScore(detectedPatterns, content),
                successRate: 1.0,
                preservedContext: 1.0, // Analysis doesn't mask, so context is preserved
                complianceLevel: 'analysis-complete',
              };

              // Store security pattern
              const patternId = await securityMemoryManager.storeSecurityPattern(
                contentType,
                detectedPatterns,
                maskingResults,
                {
                  contentLength: content.length,
                  method: 'ai-powered-analysis',
                  userDefinedPatterns: userDefinedPatterns?.length || 0,
                }
              );

              // Track evolution
              const evolution = await securityMemoryManager.trackMaskingEvolution(
                undefined,
                maskingResults
              );

              // Get institutional insights
              const institutionalAnalysis =
                await securityMemoryManager.analyzeInstitutionalSecurity();

              memoryIntegrationInfo = `
    ## 🧠 Security Memory Integration

    - **Pattern Stored**: ✅ Security analysis saved (ID: ${patternId.substring(0, 8)}...)
    - **Content Type**: ${contentType}
    - **Patterns Detected**: ${detectedPatterns.length}
    - **Security Score**: ${Math.round(maskingResults.securityScore * 100)}%
    ${
      evolution.improvements.length > 0
        ? `### Security Improvements
    ${evolution.improvements.map(improvement => `- ${improvement}`).join('\n')}
    `
        : ''
    }
    ${
      evolution.recommendations.length > 0
        ? `### Evolution Recommendations
    ${evolution.recommendations.map(rec => `- ${rec}`).join('\n')}
    `
        : ''
    }
    ${
      institutionalAnalysis.commonPatterns.length > 0
        ? `### Institutional Security Patterns
    ${institutionalAnalysis.commonPatterns
      .slice(0, 3)
      .map(pattern => `- **${pattern.type}**: ${pattern.frequency} occurrences`)
      .join('\n')}
    `
        : ''
    }
    ${
      institutionalAnalysis.complianceStatus
        ? `### Compliance Status
    - **GDPR**: ${institutionalAnalysis.complianceStatus.gdpr}
    - **HIPAA**: ${institutionalAnalysis.complianceStatus.hipaa}
    - **PCI**: ${institutionalAnalysis.complianceStatus.pci}
    `
        : ''
    }
    ${
      institutionalAnalysis.recommendations.length > 0
        ? `### Security Recommendations
    ${institutionalAnalysis.recommendations
      .slice(0, 3)
      .map(rec => `- ${rec}`)
      .join('\n')}
    `
        : ''
    }
    `;
            } catch (memoryError) {
              memoryIntegrationInfo = `
    ## 🧠 Security Memory Integration Status

    - **Status**: ⚠️ Memory integration failed - analysis completed without persistence
    - **Error**: ${memoryError instanceof Error ? memoryError.message : 'Unknown error'}
    `;
            }
          }

          // AI execution successful - return actual security analysis results
          return formatMCPResponse({
            ...executionResult,
            content: `# Content Security Analysis Results (GKP Enhanced)

    ## Enhancement Features
    - **Generated Knowledge Prompting**: ${knowledgeEnhancement ? '✅ Enabled' : '❌ Disabled'}
    - **Enhanced Mode**: ${enhancedMode ? '✅ Enabled' : '❌ Disabled'}
    - **Memory Integration**: ${enableMemoryIntegration ? '✅ Enabled' : '❌ Disabled'}
    - **Tree-sitter Analysis**: ${enableTreeSitterAnalysis ? '✅ Enabled' : '❌ Disabled'}
    - **Knowledge Domains**: Cybersecurity, data privacy, regulatory compliance, secret management

    ## Analysis Information
    - **Content Type**: ${contentType}
    ${
      knowledgeContext
        ? `## Applied Security Knowledge

    ${knowledgeContext}
    `
        : ''
    }
    ${treeSitterContext}
    - **Content Length**: ${content.length} characters
    - **User-Defined Patterns**: ${userDefinedPatterns?.length || 0} patterns

    ## AI Security Analysis

    ${executionResult.content}
    ${memoryIntegrationInfo}

    ## Next Steps

    Based on the security analysis:

    1. **Review Identified Issues**: Examine each flagged item for actual sensitivity
    2. **Apply Recommended Masking**: Use suggested masking strategies for sensitive content
    3. **Update Security Policies**: Incorporate findings into security guidelines
    4. **Implement Monitoring**: Set up detection for similar patterns in the future
    5. **Train Team**: Share findings to improve security awareness

    ## Remediation Commands

    To apply masking to identified sensitive content, use the \`generate_content_masking\` tool with the detected items.
    `,
          });
        } else {
          // Fallback to prompt-only mode
          return {
            content: [
              {
                type: 'text',
                text: `# Sensitive Content Analysis (GKP Enhanced)\n\n## Enhancement Status\n- **Generated Knowledge Prompting**: ${knowledgeEnhancement ? '\u2705 Applied' : '\u274c Disabled'}\n- **Enhanced Mode**: ${enhancedMode ? '\u2705 Applied' : '\u274c Disabled'}\n\n${knowledgeContext ? `## Security Knowledge Context\n\n${knowledgeContext}\n` : ''}\n\n${result.instructions}\n\n## Enhanced AI Analysis Prompt\n\n${enhancedPrompt}`,
              },
            ],
          };
        }
      } catch (error) {
        throw new McpAdrError(
          `Failed to analyze content security: ${error instanceof Error ? error.message : String(error)}`,
          'ANALYSIS_ERROR'
        );
      }
    }
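The handler's temp-file extension heuristic can be isolated as a small pure function. The sketch below mirrors the branch order shown in the handler; note that because the first branch also matches `import `, Python code that uses only `import` statements (no `def`) falls through to the JavaScript branch:

```typescript
// Standalone sketch of the extension-guessing heuristic used before
// tree-sitter analysis. The function name is illustrative; the branch
// logic mirrors the handler above.
function guessExtension(content: string): string {
  if (
    content.includes('import ') ||
    content.includes('export ') ||
    content.includes('function ')
  ) {
    // TypeScript markers take precedence over plain JavaScript
    return content.includes('interface ') || content.includes(': string') ? '.ts' : '.js';
  } else if (content.includes('def ') || content.includes('import ')) {
    return '.py';
  } else if (content.includes('apiVersion:') || content.includes('kind:')) {
    return '.yaml'; // Kubernetes-style manifests
  } else if (content.includes('resource ') || content.includes('provider ')) {
    return '.tf'; // Terraform configuration
  }
  return '.txt'; // fallback: treat as plain text
}

console.log(guessExtension('def main():'));        // → .py
console.log(guessExtension('apiVersion: v1'));     // → .yaml
```

Choosing a plausible extension matters because tree-sitter selects its grammar from the temp file's suffix; a wrong guess degrades the analysis rather than breaking it.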
  • Input schema/type definition for the analyze_content_security tool arguments, defining the expected parameters including content, type, patterns, and feature flags.
    export interface AnalyzeContentSecurityArgs {
      content: string;
      contentType?: 'code' | 'documentation' | 'configuration' | 'logs' | 'general';
      userDefinedPatterns?: string[];
      knowledgeEnhancement?: boolean;
      enhancedMode?: boolean;
      enableMemoryIntegration?: boolean;
      enableTreeSitterAnalysis?: boolean;
    }
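A caller-side runtime guard for this shape might look like the following sketch; the function name and checks are illustrative and not part of the server, but the field names and types come from the interface above:

```typescript
const CONTENT_TYPES = ['code', 'documentation', 'configuration', 'logs', 'general'];

// Hypothetical client-side guard matching AnalyzeContentSecurityArgs.
function isAnalyzeContentSecurityArgs(value: unknown): boolean {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  // content is the only required field and must be non-empty
  if (typeof v['content'] !== 'string' || v['content'].trim().length === 0) return false;
  if (v['contentType'] !== undefined && !CONTENT_TYPES.includes(v['contentType'] as string)) {
    return false;
  }
  if (
    v['userDefinedPatterns'] !== undefined &&
    !(
      Array.isArray(v['userDefinedPatterns']) &&
      v['userDefinedPatterns'].every(p => typeof p === 'string')
    )
  ) {
    return false;
  }
  // the four optional feature flags share one boolean check
  const flags = [
    'knowledgeEnhancement',
    'enhancedMode',
    'enableMemoryIntegration',
    'enableTreeSitterAnalysis',
  ];
  return flags.every(flag => v[flag] === undefined || typeof v[flag] === 'boolean');
}

console.log(isAnalyzeContentSecurityArgs({ content: 'const x = 1;' })); // → true
```

Validating before the call surfaces schema mistakes locally instead of as an `INVALID_INPUT` error from the server.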
