Glama

generate_content_masking

Create masking instructions for sensitive content detected in text, using strategies like full, partial, placeholder, or environment-based masking to protect confidential information.

Instructions

Generate masking instructions for detected sensitive content

Input Schema

Name             Required  Description                        Default
content          Yes       Content to mask
detectedItems    Yes       Detected sensitive items to mask
maskingStrategy  No        Strategy for masking content       full
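
For concreteness, here is a hypothetical input payload matching the schema above. Field values, names, and offsets are illustrative only; the item shape follows the DetectedItem structure shown in the implementation reference, and we assume 0-based, end-exclusive character offsets.

```typescript
// Hypothetical example input for generate_content_masking.
// Offsets are assumed to be 0-based with an exclusive endPosition.
const args = {
  content: 'const apiKey = "sk-live-1234";',
  detectedItems: [
    {
      type: "api_key",
      content: "sk-live-1234",
      startPosition: 16,
      endPosition: 28,
      severity: "high",
    },
  ],
  maskingStrategy: "partial" as const, // defaults to "full" when omitted
};

const item = args.detectedItems[0];
// The span sliced out of content equals the detected item's content.
console.log(args.content.slice(item.startPosition, item.endPosition)); // → "sk-live-1234"
```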

Implementation Reference

  • Main handler function implementing the generate_content_masking MCP tool. It validates the input, normalizes the detected sensitive items, generates masking instructions, and executes them via an AI prompt, falling back to returning the prompt itself when AI execution is unavailable.
    export async function generateContentMasking(args: {
      content: string;
      detectedItems: Array<{
        type: string;
        category?: string;
        content: string;
        startPosition: number;
        endPosition: number;
        confidence?: number;
        reasoning?: string;
        severity: string;
        suggestedMask?: string;
      }>;
      maskingStrategy?: 'full' | 'partial' | 'placeholder' | 'environment';
      knowledgeEnhancement?: boolean; // Enable GKP for security and privacy knowledge
      enhancedMode?: boolean; // Enable advanced prompting features
      enableMemoryIntegration?: boolean; // Enable memory entity storage
      contentType?: 'code' | 'documentation' | 'configuration' | 'logs' | 'general';
    }): Promise<any> {
      const {
        content,
        detectedItems,
        maskingStrategy = 'full',
        // enableMemoryIntegration and contentType can be used for future enhancements
      } = args;
    
      try {
        const { generateMaskingInstructions } = await import('../utils/content-masking.js');
    
        if (!content || content.trim().length === 0) {
          throw new McpAdrError('Content is required for masking', 'INVALID_INPUT');
        }
    
        if (!detectedItems || detectedItems.length === 0) {
          return {
            content: [
              {
                type: 'text',
                text: 'No sensitive items detected. Content does not require masking.',
              },
            ],
          };
        }
    
        // Convert to SensitiveItem format
        const sensitiveItems = detectedItems.map(item => ({
          type: item.type,
          category: item.category || 'unknown',
          content: item.content,
          startPosition: item.startPosition,
          endPosition: item.endPosition,
          confidence: item.confidence || 0.8,
          reasoning: item.reasoning || 'Detected by user input',
          severity: item.severity as 'low' | 'medium' | 'high' | 'critical',
          suggestedMask: item.suggestedMask || '[REDACTED]',
        }));
    
        const result = await generateMaskingInstructions(content, sensitiveItems, maskingStrategy);
    
        // Execute the content masking with AI if enabled, otherwise return prompt
        const { executePromptWithFallback, formatMCPResponse } =
          await import('../utils/prompt-execution.js');
        const executionResult = await executePromptWithFallback(
          result.maskingPrompt,
          result.instructions,
          {
            temperature: 0.1,
            maxTokens: 4000,
            systemPrompt: `You are a cybersecurity expert specializing in intelligent content masking.
    Apply appropriate masking to sensitive content while preserving functionality and readability.
    Focus on balancing security with usability, maintaining context where possible.
    Provide detailed explanations for masking decisions and security recommendations.`,
            responseFormat: 'text',
          }
        );
    
        if (executionResult.isAIGenerated) {
          // AI execution successful - return actual content masking results
          return formatMCPResponse({
            ...executionResult,
            content: `# Content Masking Results
    
    ## Masking Information
    - **Content Length**: ${content.length} characters
    - **Detected Items**: ${detectedItems.length} sensitive items
    - **Masking Strategy**: ${maskingStrategy}
    
    ## AI Content Masking Results
    
    ${executionResult.content}
    
    ## Next Steps
    
    Based on the masking results:
    
    1. **Review Masked Content**: Examine the masked content for accuracy and completeness
    2. **Validate Functionality**: Ensure masked content still functions as intended
    3. **Apply to Production**: Use the masked content in documentation or sharing
    4. **Update Security Policies**: Incorporate findings into security guidelines
    5. **Monitor for Similar Patterns**: Set up detection for similar sensitive content
    
    ## Security Benefits
    
    The applied masking provides:
    - **Data Protection**: Sensitive information is properly redacted
    - **Context Preservation**: Enough context remains for understanding
    - **Consistent Approach**: Uniform masking patterns across content
    - **Compliance Support**: Helps meet data protection requirements
    - **Usability Balance**: Security without sacrificing functionality
    `,
          });
        } else {
          // Fallback to prompt-only mode
          return {
            content: [
              {
                type: 'text',
                text: `# Content Masking Instructions\n\n${result.instructions}\n\n## AI Masking Prompt\n\n${result.maskingPrompt}`,
              },
            ],
          };
        }
      } catch (error) {
        throw new McpAdrError(
          `Failed to generate masking instructions: ${error instanceof Error ? error.message : String(error)}`,
          'MASKING_ERROR'
        );
      }
    }
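  • The map() call above fills in defaults before handing items to generateMaskingInstructions. Isolated as a standalone sketch (RawDetectedItem and normalizeItem are our names, not exports of the server), the defaulting logic looks like this. Note that because the source uses ||, a confidence of 0 would also be replaced by 0.8:

```typescript
interface RawDetectedItem {
  type: string;
  category?: string;
  content: string;
  startPosition: number;
  endPosition: number;
  confidence?: number;
  reasoning?: string;
  severity: string;
  suggestedMask?: string;
}

type Severity = "low" | "medium" | "high" | "critical";

// Mirrors the defaulting applied in generateContentMasking's map() call.
// The || operator (as in the source) treats 0, "" and undefined alike,
// so a confidence of 0 is replaced by the 0.8 default.
function normalizeItem(item: RawDetectedItem) {
  return {
    type: item.type,
    category: item.category || "unknown",
    content: item.content,
    startPosition: item.startPosition,
    endPosition: item.endPosition,
    confidence: item.confidence || 0.8,
    reasoning: item.reasoning || "Detected by user input",
    severity: item.severity as Severity,
    suggestedMask: item.suggestedMask || "[REDACTED]",
  };
}
```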
  • TypeScript interface definitions for GenerateContentMaskingArgs and supporting DetectedItem type, defining the input schema for the tool.
    export interface DetectedItem {
      type: string;
      category?: string;
      content: string;
      startPosition: number;
      endPosition: number;
      confidence?: number;
      reasoning?: string;
      severity: string;
      suggestedMask?: string;
    }
    
    export interface GenerateContentMaskingArgs {
      content: string;
      detectedItems: DetectedItem[];
      contentType?: 'code' | 'documentation' | 'configuration' | 'general';
    }
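  • A small usage sketch against these interfaces, reproducing the handler's two entry guards (validateArgs is a hypothetical helper, not part of the server):

```typescript
interface DetectedItem {
  type: string;
  category?: string;
  content: string;
  startPosition: number;
  endPosition: number;
  confidence?: number;
  reasoning?: string;
  severity: string;
  suggestedMask?: string;
}

interface GenerateContentMaskingArgs {
  content: string;
  detectedItems: DetectedItem[];
  contentType?: "code" | "documentation" | "configuration" | "general";
}

// Hypothetical pre-flight check mirroring the handler: empty or
// whitespace-only content is an error; an empty detectedItems list
// short-circuits to "no masking required".
function validateArgs(args: GenerateContentMaskingArgs): "mask" | "skip" {
  if (!args.content || args.content.trim().length === 0) {
    throw new Error("Content is required for masking");
  }
  if (!args.detectedItems || args.detectedItems.length === 0) {
    return "skip";
  }
  return "mask";
}
```

Note that the interface omits maskingStrategy even though the JSON schema accepts it; the handler destructures it from args directly.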
  • Tool registration entry in the server context generator's tool list, documenting the generate_content_masking tool.
    name: 'generate_content_masking',
    description: 'Generate content masking for detected sensitive information',
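  • Registration entries of this kind typically pair the name and description with the JSON input schema. A hypothetical sketch of what the full entry could look like (the actual server's registration structure may differ):

```typescript
// Hypothetical tool-list entry; field layout is illustrative, derived
// from the input schema documented above, not from the server's source.
const toolEntry = {
  name: "generate_content_masking",
  description: "Generate content masking for detected sensitive information",
  inputSchema: {
    type: "object",
    properties: {
      content: { type: "string", description: "Content to mask" },
      detectedItems: { type: "array", description: "Detected sensitive items to mask" },
      maskingStrategy: {
        type: "string",
        enum: ["full", "partial", "placeholder", "environment"],
        default: "full",
      },
    },
    required: ["content", "detectedItems"],
  },
};
```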
  • The handler imports and calls generateMaskingInstructions from ../utils/content-masking.js, which provides supporting logic for masking.
        const { generateMaskingInstructions } = await import('../utils/content-masking.js');
    
        // validation and SensitiveItem mapping elided (identical to the main handler shown above)
        const result = await generateMaskingInstructions(content, sensitiveItems, maskingStrategy);
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool generates 'instructions,' implying a read-only or advisory operation rather than direct mutation, but doesn't clarify if these instructions are executable, what format they take (e.g., text, structured data), or any side effects (e.g., logging, rate limits). For a tool with no annotation coverage, this leaves significant behavioral gaps, though it avoids contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence: 'Generate masking instructions for detected sensitive content.' It's front-loaded with the core purpose, uses efficient language, and avoids unnecessary words. Every part of the sentence contributes directly to understanding the tool's function, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters, no annotations, no output schema), the description is insufficiently complete. It doesn't explain what the output looks like (e.g., are the instructions in a specific format?), behavioral traits (e.g., is it idempotent?), or usage context. While the schema covers parameters, the lack of output schema and annotations means the description should do more to fill gaps, which it doesn't.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning all parameters are documented in the schema itself. The description doesn't add any semantic details beyond what the schema provides (e.g., it doesn't explain the relationship between 'content' and 'detectedItems' or elaborate on 'maskingStrategy' options). With high schema coverage, the baseline is 3, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Generate masking instructions for detected sensitive content.' It specifies the verb ('generate') and resource ('masking instructions'), and while it doesn't explicitly differentiate from siblings like 'apply_basic_content_masking' or 'validate_content_masking', the focus on 'instructions' rather than application or validation provides implicit distinction. However, it lacks explicit sibling comparison, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing detected sensitive items first), compare it to sibling tools like 'apply_basic_content_masking' (which might apply masking directly) or 'validate_content_masking' (which might check masking), or specify contexts where generating instructions is preferred over other actions. This leaves the agent with minimal usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/tosin2013/mcp-adr-analysis-server'
