
generate_content_masking

Mask sensitive content in text or code by generating custom masking instructions. Specify content, detected items, and masking strategy to ensure data protection and compliance.

Instructions

Generate masking instructions for detected sensitive content

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| content | Yes | Content to mask | — |
| detectedItems | Yes | Detected sensitive items to mask | — |
| maskingStrategy | No | Strategy for masking content | `full` |
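For illustration, a call to this tool might pass arguments shaped like the following. The secret value, offsets, confidence, and severity here are invented for the example and are not taken from the server's own tests:

```typescript
// Hypothetical example arguments for generate_content_masking.
// The content, positions, and severity values are illustrative only.
const exampleArgs = {
  content: 'const apiKey = "sk-demo-1234";',
  detectedItems: [
    {
      type: 'api_key',
      category: 'credentials',
      content: 'sk-demo-1234',
      startPosition: 16, // index of the first character of the secret
      endPosition: 28,   // exclusive end index
      confidence: 0.95,
      severity: 'high',
      suggestedMask: '[API_KEY_REDACTED]',
    },
  ],
  maskingStrategy: 'partial' as const,
};
```

Note that `startPosition`/`endPosition` are character offsets into `content`, so `exampleArgs.content.slice(16, 28)` yields the secret itself.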

Implementation Reference

  • The core handler function for the 'generate_content_masking' tool. It processes input content and detected sensitive items, generates masking prompts, executes them via AI, and returns formatted masked content with validation and next steps.
```typescript
export async function generateContentMasking(args: {
  content: string;
  detectedItems: Array<{
    type: string;
    category?: string;
    content: string;
    startPosition: number;
    endPosition: number;
    confidence?: number;
    reasoning?: string;
    severity: string;
    suggestedMask?: string;
  }>;
  maskingStrategy?: 'full' | 'partial' | 'placeholder' | 'environment';
  knowledgeEnhancement?: boolean; // Enable GKP for security and privacy knowledge
  enhancedMode?: boolean; // Enable advanced prompting features
  enableMemoryIntegration?: boolean; // Enable memory entity storage
  contentType?: 'code' | 'documentation' | 'configuration' | 'logs' | 'general';
}): Promise<any> {
  const {
    content,
    detectedItems,
    maskingStrategy = 'full',
    // enableMemoryIntegration and contentType can be used for future enhancements
  } = args;
  try {
    const { generateMaskingInstructions } = await import('../utils/content-masking.js');

    if (!content || content.trim().length === 0) {
      throw new McpAdrError('Content is required for masking', 'INVALID_INPUT');
    }

    if (!detectedItems || detectedItems.length === 0) {
      return {
        content: [
          {
            type: 'text',
            text: 'No sensitive items detected. Content does not require masking.',
          },
        ],
      };
    }

    // Convert to SensitiveItem format
    const sensitiveItems = detectedItems.map(item => ({
      type: item.type,
      category: item.category || 'unknown',
      content: item.content,
      startPosition: item.startPosition,
      endPosition: item.endPosition,
      confidence: item.confidence || 0.8,
      reasoning: item.reasoning || 'Detected by user input',
      severity: item.severity as 'low' | 'medium' | 'high' | 'critical',
      suggestedMask: item.suggestedMask || '[REDACTED]',
    }));

    const result = await generateMaskingInstructions(content, sensitiveItems, maskingStrategy);

    // Execute the content masking with AI if enabled, otherwise return prompt
    const { executePromptWithFallback, formatMCPResponse } = await import(
      '../utils/prompt-execution.js'
    );
    const executionResult = await executePromptWithFallback(
      result.maskingPrompt,
      result.instructions,
      {
        temperature: 0.1,
        maxTokens: 4000,
        systemPrompt: `You are a cybersecurity expert specializing in intelligent content masking.
Apply appropriate masking to sensitive content while preserving functionality and readability.
Focus on balancing security with usability, maintaining context where possible.
Provide detailed explanations for masking decisions and security recommendations.`,
        responseFormat: 'text',
      }
    );

    if (executionResult.isAIGenerated) {
      // AI execution successful - return actual content masking results
      return formatMCPResponse({
        ...executionResult,
        content: `# Content Masking Results

## Masking Information
- **Content Length**: ${content.length} characters
- **Detected Items**: ${detectedItems.length} sensitive items
- **Masking Strategy**: ${maskingStrategy}

## AI Content Masking Results

${executionResult.content}

## Next Steps

Based on the masking results:
1. **Review Masked Content**: Examine the masked content for accuracy and completeness
2. **Validate Functionality**: Ensure masked content still functions as intended
3. **Apply to Production**: Use the masked content in documentation or sharing
4. **Update Security Policies**: Incorporate findings into security guidelines
5. **Monitor for Similar Patterns**: Set up detection for similar sensitive content

## Security Benefits

The applied masking provides:
- **Data Protection**: Sensitive information is properly redacted
- **Context Preservation**: Enough context remains for understanding
- **Consistent Approach**: Uniform masking patterns across content
- **Compliance Support**: Helps meet data protection requirements
- **Usability Balance**: Security without sacrificing functionality
`,
      });
    } else {
      // Fallback to prompt-only mode
      return {
        content: [
          {
            type: 'text',
            text: `# Content Masking Instructions\n\n${result.instructions}\n\n## AI Masking Prompt\n\n${result.maskingPrompt}`,
          },
        ],
      };
    }
  } catch (error) {
    throw new McpAdrError(
      `Failed to generate masking instructions: ${error instanceof Error ? error.message : String(error)}`,
      'MASKING_ERROR'
    );
  }
}
```
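The handler delegates the actual rewriting to the AI prompt, so the source does not define what each `maskingStrategy` value produces. As a rough, non-authoritative sketch, the four strategies could be interpreted along these lines (the `applyStrategy` helper below is hypothetical and not part of this server):

```typescript
// Hypothetical interpretation of the four masking strategies.
// The real behavior is determined by the AI prompt, not by this helper.
type MaskingStrategy = 'full' | 'partial' | 'placeholder' | 'environment';

const maskers: Record<MaskingStrategy, (secret: string, name: string) => string> = {
  // Replace every character so nothing about the value leaks.
  full: secret => '*'.repeat(secret.length),
  // Keep a short prefix for recognizability, mask the rest.
  partial: secret => secret.slice(0, 3) + '*'.repeat(Math.max(secret.length - 3, 0)),
  // Replace the value with a descriptive token.
  placeholder: (_secret, name) => `[${name}_REDACTED]`,
  // Point the reader at an environment variable instead of the literal.
  environment: (_secret, name) => `\${${name}}`,
};

function applyStrategy(secret: string, strategy: MaskingStrategy, name = 'SECRET'): string {
  return maskers[strategy](secret, name);
}
```

Under this reading, `'full'` maximizes protection while `'partial'`, `'placeholder'`, and `'environment'` trade a little disclosure for readability or for a working configuration pattern.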
  • TypeScript interfaces defining the input schema for the generate_content_masking tool, including DetectedItem structure and GenerateContentMaskingArgs.
```typescript
export interface DetectedItem {
  type: string;
  category?: string;
  content: string;
  startPosition: number;
  endPosition: number;
  confidence?: number;
  reasoning?: string;
  severity: string;
  suggestedMask?: string;
}

export interface GenerateContentMaskingArgs {
  content: string;
  detectedItems: DetectedItem[];
  contentType?: 'code' | 'documentation' | 'configuration' | 'general';
}
```
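The positions in `DetectedItem` are character offsets into the content string. As a hedged sketch (assuming `endPosition` is exclusive, in line with `String.prototype.slice`; the interface below is trimmed to the fields involved), an item can be sanity-checked against the content it points at:

```typescript
// Trimmed to the position-related fields of DetectedItem for illustration.
interface SpanItem {
  content: string;
  startPosition: number;
  endPosition: number; // assumed exclusive, like String.prototype.slice
}

// Returns true when the item's recorded text matches the span it points at.
function itemMatchesContent(content: string, item: SpanItem): boolean {
  return content.slice(item.startPosition, item.endPosition) === item.content;
}

const doc = 'token = "abc123"';
const item: SpanItem = { content: 'abc123', startPosition: 9, endPosition: 15 };
```

A check like this can catch off-by-one positions before they corrupt surrounding text during masking.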
  • Uses the helper function `generateMaskingInstructions` from `src/utils/content-masking.ts` to generate the base masking prompt and instructions.
```typescript
const { generateMaskingInstructions } = await import('../utils/content-masking.js');

if (!content || content.trim().length === 0) {
  throw new McpAdrError('Content is required for masking', 'INVALID_INPUT');
}

if (!detectedItems || detectedItems.length === 0) {
  return {
    content: [
      {
        type: 'text',
        text: 'No sensitive items detected. Content does not require masking.',
      },
    ],
  };
}

// Convert to SensitiveItem format
const sensitiveItems = detectedItems.map(item => ({
  type: item.type,
  category: item.category || 'unknown',
  content: item.content,
  startPosition: item.startPosition,
  endPosition: item.endPosition,
  confidence: item.confidence || 0.8,
  reasoning: item.reasoning || 'Detected by user input',
  severity: item.severity as 'low' | 'medium' | 'high' | 'critical',
  suggestedMask: item.suggestedMask || '[REDACTED]',
}));

const result = await generateMaskingInstructions(content, sensitiveItems, maskingStrategy);
```
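The detected items carry `startPosition`/`endPosition` offsets into the original content. A common pitfall when applying span-based masks is that replacing an early span shifts every later offset. One way to avoid that, shown here as a sketch rather than code from this server, is to apply the masks from the last span to the first:

```typescript
// Minimal span shape for this sketch; mirrors the offset fields above.
interface Span {
  startPosition: number;
  endPosition: number; // assumed exclusive
  suggestedMask: string;
}

// Apply span-based masks from last to first so earlier replacements
// do not invalidate the offsets of later ones.
function maskByPositions(content: string, items: Span[]): string {
  const sorted = [...items].sort((a, b) => b.startPosition - a.startPosition);
  let result = content;
  for (const item of sorted) {
    result =
      result.slice(0, item.startPosition) +
      item.suggestedMask +
      result.slice(item.endPosition);
  }
  return result;
}
```

Sorting by descending `startPosition` means each replacement only touches text whose offsets have not yet been consumed, so overlapping shifts never occur.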

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/tosin2013/mcp-adr-analysis-server'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.