
analyze_content_security

Detect sensitive information in code, documentation, logs, and configurations using AI-powered analysis with optional pattern learning and security expertise enhancement.

Instructions

Analyze content for sensitive information using AI-powered detection with optional memory integration for security pattern learning

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| content | Yes | Content to analyze for sensitive information | |
| contentType | No | Type of content being analyzed | general |
| userDefinedPatterns | No | User-defined sensitive patterns to detect | |
| enableMemoryIntegration | No | Enable memory entity storage for security pattern learning and institutional knowledge building | |
| knowledgeEnhancement | No | Enable Generated Knowledge Prompting for security and privacy expertise | |
| enhancedMode | No | Enable advanced prompting features | |
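As a rough illustration of how these arguments fit together, the sketch below assembles and validates an argument object. The field names come from the schema above; the `withDefaults` helper and its behavior are assumptions for illustration, not part of the server.

```typescript
// Sketch only: assembling arguments for analyze_content_security.
// Field names come from the schema above; the helper itself is hypothetical.
type ContentType = 'code' | 'documentation' | 'configuration' | 'logs' | 'general';

interface AnalyzeArgs {
  content: string;                  // required
  contentType?: ContentType;        // defaults to 'general'
  userDefinedPatterns?: string[];
  enableMemoryIntegration?: boolean;
  knowledgeEnhancement?: boolean;
  enhancedMode?: boolean;
}

// Mirror the handler's validation and the documented default for contentType.
function withDefaults(args: AnalyzeArgs) {
  if (!args.content || args.content.trim().length === 0) {
    throw new Error('Content is required for security analysis');
  }
  return { contentType: 'general' as ContentType, ...args };
}

const prepared = withDefaults({ content: "const apiKey = 'sk-test';" });
console.log(prepared.contentType); // 'general'
```

An explicitly supplied `contentType` overrides the default, since the spread of `args` comes after it.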

Implementation Reference

  • The core handler function that implements the analyze_content_security tool. Performs AI-powered security analysis on provided content, integrates tree-sitter parsing, generated knowledge prompting (GKP), memory storage for security patterns, and returns comprehensive analysis results.
    export async function analyzeContentSecurity(args: {
      content: string;
      contentType?: 'code' | 'documentation' | 'configuration' | 'logs' | 'general';
      userDefinedPatterns?: string[];
      knowledgeEnhancement?: boolean; // Enable GKP for security and privacy knowledge
      enhancedMode?: boolean; // Enable advanced prompting features
      enableMemoryIntegration?: boolean; // Enable memory entity storage
      enableTreeSitterAnalysis?: boolean; // Enable tree-sitter for enhanced code analysis
    }): Promise<any> {
      const {
        content,
        contentType = 'general',
        userDefinedPatterns,
        knowledgeEnhancement = getKnowledgeEnhancementDefault(), // Environment-aware default
        enhancedMode = getEnhancedModeDefault(), // Environment-aware default
        enableMemoryIntegration = getMemoryIntegrationDefault(), // Environment-aware default
        enableTreeSitterAnalysis = true, // Default to tree-sitter enabled
      } = args;
    
      try {
        const { analyzeSensitiveContent } = await import('../utils/content-masking.js');
    
        if (!content || content.trim().length === 0) {
          throw new McpAdrError('Content is required for security analysis', 'INVALID_INPUT');
        }
    
        // Initialize memory manager if enabled
        let securityMemoryManager: SecurityMemoryManager | null = null;
        if (enableMemoryIntegration) {
          securityMemoryManager = new SecurityMemoryManager();
          await securityMemoryManager.initialize();
        }
    
        // Perform tree-sitter analysis for enhanced security detection
        const treeSitterFindings: any[] = [];
        let treeSitterContext = '';
        if (enableTreeSitterAnalysis && contentType === 'code') {
          try {
            const analyzer = new TreeSitterAnalyzer();
    
            // Create a temporary file to analyze the content
            const { writeFileSync, unlinkSync } = await import('fs');
            const { join } = await import('path');
            const { tmpdir } = await import('os');
    
            // Determine file extension based on content patterns
            let extension = '.txt';
            if (
              content.includes('import ') ||
              content.includes('export ') ||
              content.includes('function ')
            ) {
              extension =
                content.includes('interface ') || content.includes(': string') ? '.ts' : '.js';
            } else if (content.includes('def ')) { // 'import ' alone is already matched by the JS/TS branch above
              extension = '.py';
            } else if (content.includes('apiVersion:') || content.includes('kind:')) {
              extension = '.yaml';
            } else if (content.includes('resource ') || content.includes('provider ')) {
              extension = '.tf';
            }
    
            const tempFile = join(tmpdir(), `content-analysis-${Date.now()}${extension}`);
            writeFileSync(tempFile, content);
    
            try {
              const analysis = await analyzer.analyzeFile(tempFile);
    
              // Extract security-relevant findings
              if (analysis.hasSecrets && analysis.secrets.length > 0) {
                analysis.secrets.forEach(secret => {
                  treeSitterFindings.push({
                    type: 'secret',
                    category: secret.type,
                    content: secret.value,
                    confidence: secret.confidence,
                    severity:
                      secret.confidence > 0.8 ? 'high' : secret.confidence > 0.6 ? 'medium' : 'low',
                    location: secret.location,
                    context: secret.context,
                    source: 'tree-sitter',
                  });
                });
              }
    
              // Security issues
              if (analysis.securityIssues && analysis.securityIssues.length > 0) {
                analysis.securityIssues.forEach(issue => {
                  treeSitterFindings.push({
                    type: 'security_issue',
                    category: issue.type,
                    content: issue.message,
                    confidence: 0.9,
                    severity: issue.severity,
                    location: issue.location,
                    context: issue.suggestion,
                    source: 'tree-sitter',
                  });
                });
              }
    
              // Dangerous imports
              if (analysis.imports) {
                analysis.imports.forEach(imp => {
                  if (imp.isDangerous) {
                    treeSitterFindings.push({
                      type: 'dangerous_import',
                      category: 'import',
                      content: imp.module,
                      confidence: 0.8,
                      severity: 'medium',
                      location: imp.location,
                      context: imp.reason || 'Potentially dangerous import detected',
                      source: 'tree-sitter',
                    });
                  }
                });
              }
    
              if (treeSitterFindings.length > 0) {
                treeSitterContext = `\n## 🔍 Tree-sitter Enhanced Analysis\n\n**Detected ${treeSitterFindings.length} security findings:**\n${treeSitterFindings.map(f => `- **${f.type}**: ${f.content} (${f.severity} severity)`).join('\n')}\n\n---\n`;
              }
            } finally {
              // Clean up temp file
              try {
                unlinkSync(tempFile);
              } catch {
                // Ignore cleanup errors
              }
            }
          } catch (error) {
            console.warn('Tree-sitter analysis failed, continuing with standard analysis:', error);
          }
        }
    
        let enhancedPrompt = '';
        let knowledgeContext = '';
    
        // Generate security and privacy knowledge if enabled
        if (enhancedMode && knowledgeEnhancement) {
          try {
            const { generateArchitecturalKnowledge } = await import('../utils/knowledge-generation.js');
            const knowledgeResult = await generateArchitecturalKnowledge(
              {
                projectPath: process.cwd(),
                technologies: [],
                patterns: [],
                projectType: 'security-content-analysis',
              },
              {
                domains: ['security-patterns'],
                depth: 'intermediate',
                cacheEnabled: true,
              }
            );
    
            knowledgeContext = `\n## Security & Privacy Knowledge Enhancement\n\n${knowledgeResult.prompt}\n\n---\n`;
          } catch (error) {
            console.error(
              '[WARNING] GKP knowledge generation failed for content security analysis:',
              error
            );
            knowledgeContext = '<!-- Security knowledge generation unavailable -->\n';
          }
        }
    
        const result = await analyzeSensitiveContent(content, contentType, userDefinedPatterns);
        enhancedPrompt = knowledgeContext + result.analysisPrompt;
    
        // Execute the security analysis with AI if enabled, otherwise return prompt
        const { executePromptWithFallback, formatMCPResponse } =
          await import('../utils/prompt-execution.js');
        const executionResult = await executePromptWithFallback(enhancedPrompt, result.instructions, {
          temperature: 0.1,
          maxTokens: 4000,
          systemPrompt: `You are a cybersecurity expert specializing in sensitive information detection.
    Analyze the provided content to identify potential security risks, secrets, and sensitive data.
    Leverage the provided cybersecurity and data privacy knowledge to create comprehensive, industry-standard analysis.
    Provide detailed findings with confidence scores and practical remediation recommendations.
    Consider regulatory compliance requirements, data classification standards, and modern security practices.
    Focus on actionable security insights that can prevent data exposure and ensure compliance.`,
          responseFormat: 'text',
        });
    
        if (executionResult.isAIGenerated) {
          // Memory integration: store security patterns and analysis results
          let memoryIntegrationInfo = '';
          if (securityMemoryManager) {
            try {
              // Extract patterns from AI analysis (simplified parsing)
              const detectedPatterns = parseDetectedPatterns(
                executionResult.content,
                userDefinedPatterns
              );
              const maskingResults = {
                strategy: 'analysis-only',
                securityScore: calculateSecurityScore(detectedPatterns, content),
                successRate: 1.0,
                preservedContext: 1.0, // Analysis doesn't mask, so context is preserved
                complianceLevel: 'analysis-complete',
              };
    
              // Store security pattern
              const patternId = await securityMemoryManager.storeSecurityPattern(
                contentType,
                detectedPatterns,
                maskingResults,
                {
                  contentLength: content.length,
                  method: 'ai-powered-analysis',
                  userDefinedPatterns: userDefinedPatterns?.length || 0,
                }
              );
    
              // Track evolution
              const evolution = await securityMemoryManager.trackMaskingEvolution(
                undefined,
                maskingResults
              );
    
              // Get institutional insights
              const institutionalAnalysis = await securityMemoryManager.analyzeInstitutionalSecurity();
    
              memoryIntegrationInfo = `
    
    ## 🧠 Security Memory Integration
    
    - **Pattern Stored**: ✅ Security analysis saved (ID: ${patternId.substring(0, 8)}...)
    - **Content Type**: ${contentType}
    - **Patterns Detected**: ${detectedPatterns.length}
    - **Security Score**: ${Math.round(maskingResults.securityScore * 100)}%
    
    ${
      evolution.improvements.length > 0
        ? `### Security Improvements
    ${evolution.improvements.map(improvement => `- ${improvement}`).join('\n')}
    `
        : ''
    }
    
    ${
      evolution.recommendations.length > 0
        ? `### Evolution Recommendations
    ${evolution.recommendations.map(rec => `- ${rec}`).join('\n')}
    `
        : ''
    }
    
    ${
      institutionalAnalysis.commonPatterns.length > 0
        ? `### Institutional Security Patterns
    ${institutionalAnalysis.commonPatterns
      .slice(0, 3)
      .map(pattern => `- **${pattern.type}**: ${pattern.frequency} occurrences`)
      .join('\n')}
    `
        : ''
    }
    
    ${
      institutionalAnalysis.complianceStatus
        ? `### Compliance Status
    - **GDPR**: ${institutionalAnalysis.complianceStatus.gdpr}
    - **HIPAA**: ${institutionalAnalysis.complianceStatus.hipaa}
    - **PCI**: ${institutionalAnalysis.complianceStatus.pci}
    `
        : ''
    }
    
    ${
      institutionalAnalysis.recommendations.length > 0
        ? `### Security Recommendations
    ${institutionalAnalysis.recommendations
      .slice(0, 3)
      .map(rec => `- ${rec}`)
      .join('\n')}
    `
        : ''
    }
    `;
            } catch (memoryError) {
              memoryIntegrationInfo = `
    
    ## 🧠 Security Memory Integration Status
    
    - **Status**: ⚠️ Memory integration failed - analysis completed without persistence
    - **Error**: ${memoryError instanceof Error ? memoryError.message : 'Unknown error'}
    `;
            }
          }
    
          // AI execution successful - return actual security analysis results
          return formatMCPResponse({
            ...executionResult,
            content: `# Content Security Analysis Results (GKP Enhanced)
    
    ## Enhancement Features
    - **Generated Knowledge Prompting**: ${knowledgeEnhancement ? '✅ Enabled' : '❌ Disabled'}
    - **Enhanced Mode**: ${enhancedMode ? '✅ Enabled' : '❌ Disabled'}
    - **Memory Integration**: ${enableMemoryIntegration ? '✅ Enabled' : '❌ Disabled'}
    - **Tree-sitter Analysis**: ${enableTreeSitterAnalysis ? '✅ Enabled' : '❌ Disabled'}
    - **Knowledge Domains**: Cybersecurity, data privacy, regulatory compliance, secret management
    
    ## Analysis Information
    - **Content Type**: ${contentType}
    
    ${
      knowledgeContext
        ? `## Applied Security Knowledge
    
    ${knowledgeContext}
    `
        : ''
    }
    ${treeSitterContext}
    - **Content Length**: ${content.length} characters
    - **User-Defined Patterns**: ${userDefinedPatterns?.length || 0} patterns
    
    ## AI Security Analysis
    
    ${executionResult.content}
    
    ${memoryIntegrationInfo}
    
    ## Next Steps
    
    Based on the security analysis:
    
    1. **Review Identified Issues**: Examine each flagged item for actual sensitivity
    2. **Apply Recommended Masking**: Use suggested masking strategies for sensitive content
    3. **Update Security Policies**: Incorporate findings into security guidelines
    4. **Implement Monitoring**: Set up detection for similar patterns in the future
    5. **Train Team**: Share findings to improve security awareness
    
    ## Remediation Commands
    
    To apply masking to identified sensitive content, use the \`generate_content_masking\` tool with the detected items.
    `,
          });
        } else {
          // Fallback to prompt-only mode
          return {
            content: [
              {
                type: 'text',
                text: `# Sensitive Content Analysis (GKP Enhanced)\n\n## Enhancement Status\n- **Generated Knowledge Prompting**: ${knowledgeEnhancement ? '\u2705 Applied' : '\u274c Disabled'}\n- **Enhanced Mode**: ${enhancedMode ? '\u2705 Applied' : '\u274c Disabled'}\n\n${knowledgeContext ? `## Security Knowledge Context\n\n${knowledgeContext}\n` : ''}\n\n${result.instructions}\n\n## Enhanced AI Analysis Prompt\n\n${enhancedPrompt}`,
              },
            ],
          };
        }
      } catch (error) {
        throw new McpAdrError(
          `Failed to analyze content security: ${error instanceof Error ? error.message : String(error)}`,
          'ANALYSIS_ERROR'
        );
      }
    }
  • Input schema/type definition for the analyze_content_security tool arguments, defining the expected parameters including content, type, patterns, and feature flags.
    export interface AnalyzeContentSecurityArgs {
      content: string;
      contentType?: 'code' | 'documentation' | 'configuration' | 'logs' | 'general';
      userDefinedPatterns?: string[];
      knowledgeEnhancement?: boolean;
      enhancedMode?: boolean;
      enableMemoryIntegration?: boolean;
      enableTreeSitterAnalysis?: boolean;
    }
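Two pieces of decision logic embedded in the handler above can be factored into small pure functions: the file-extension heuristic used before tree-sitter parsing, and the confidence-to-severity thresholds applied to detected secrets. This is a sketch of that same logic; the function names are ours, not part of the server's API. Note the branch order: content containing `import ` is classified as JS/TS before the Python branch is ever reached.

```typescript
// Sketch of the content-type heuristic from the handler above.
// Mirrors the original branch order; function names are illustrative only.
function guessExtension(content: string): string {
  if (
    content.includes('import ') ||
    content.includes('export ') ||
    content.includes('function ')
  ) {
    // TypeScript markers take precedence over plain JavaScript
    return content.includes('interface ') || content.includes(': string') ? '.ts' : '.js';
  } else if (content.includes('def ')) {
    return '.py';
  } else if (content.includes('apiVersion:') || content.includes('kind:')) {
    return '.yaml'; // Kubernetes-style manifests
  } else if (content.includes('resource ') || content.includes('provider ')) {
    return '.tf'; // Terraform
  }
  return '.txt';
}

// The thresholds the handler uses to map secret confidence to severity.
function severityFromConfidence(confidence: number): 'high' | 'medium' | 'low' {
  return confidence > 0.8 ? 'high' : confidence > 0.6 ? 'medium' : 'low';
}

console.log(guessExtension('export function main() {}')); // '.js'
console.log(severityFromConfidence(0.9)); // 'high'
```

One consequence of the branch order is that Python code using only `import` statements (no `def`) is written to a `.js` temp file, which may degrade tree-sitter's secret detection for such snippets.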
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it mentions 'AI-powered detection' and 'optional memory integration', it fails to disclose critical behavioral traits: what permissions are required, whether the analysis is destructive or read-only, what happens to the analyzed content, rate limits, error conditions, or what the output looks like. For a security analysis tool with sensitive data implications, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise: one sentence that efficiently communicates the core functionality. It is front-loaded with the main purpose and includes the key optional feature. There is no wasted language, though a tool with six parameters could benefit from slightly more structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a security analysis tool with six parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain what constitutes 'sensitive information', what the analysis output includes, how confidence levels are reported, or what happens when patterns are detected. In the absence of an output schema, the description should cover return values, but it doesn't.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds minimal parameter semantics beyond what the schema already provides. With 100% schema description coverage, all six parameters are well-documented in the schema itself. The description mentions 'optional memory integration', which corresponds to the `enableMemoryIntegration` parameter, but this is already clear from the schema. No additional context about parameter interactions or practical usage is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Analyze content for sensitive information using AI-powered detection'. It specifies the verb ('analyze'), resource ('content'), and method ('AI-powered detection'). However, it doesn't explicitly differentiate from sibling tools like 'apply_basic_content_masking' or 'validate_content_masking', which appear related to content security but serve different functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'optional memory integration for security pattern learning' but doesn't explain when this feature should be enabled or disabled, nor does it reference any sibling tools for comparison. There's no discussion of prerequisites, limitations, or typical use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
