analyze_document_quality

Analyze documentation quality using AI-powered checks for duplicates, relevance, and completeness, helping improve content accuracy.

Instructions

Perform comprehensive quality analysis on documentation with AI insights

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| documentPath | Yes | Path to the document to analyze | (none) |
| includeAI | No | Include AI-powered analysis | true |
| analysisTypes | No | Types of analysis to perform | ["quality"] |
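For concreteness, a call to the tool might carry arguments like these (the path and the chosen analysis types are hypothetical; the defaults noted in comments come from the schema):

```typescript
// Hypothetical arguments for analyze_document_quality.
// documentPath is required; the other fields fall back to schema defaults.
const exampleArgs = {
  documentPath: "docs/getting-started.md",  // required
  includeAI: true,                          // default: true
  analysisTypes: ["quality", "duplicate"],  // default: ["quality"]
};
```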

Implementation Reference

  • The main execution handler for the analyze_document_quality tool. It checks document existence, performs AI analyses based on specified types (quality, duplicate, completeness), computes basic metrics, and compiles a quality report with scores and recommendations.
    tools.set('analyze_document_quality', async (args: any) => {
      try {
        logger.info(`Analyzing document quality: ${args.documentPath}`);
        
        // Check if document exists
        const documentExists = await checkDocumentExists(args.documentPath);
        if (!documentExists) {
          throw new Error(`Document not found: ${args.documentPath}`);
        }
        
        const analyses: any[] = [];
        const qualityReport: string[] = [];
        
        // Perform requested analysis types
        if (args.includeAI && aiService) {
          for (const analysisType of args.analysisTypes ?? ['quality']) {
            let analysis;
            
            switch (analysisType) {
              case 'quality':
                analysis = await aiService.analyzeQuality(args.documentPath);
                break;
              case 'duplicate':
                analysis = await aiService.detectDuplicates(args.documentPath);
                break;
              case 'completeness': {
                // Braces give the `const` its own block scope within the switch
                const recentWork = await getRecentWorkContext(args.documentPath, connectionService);
                analysis = await aiService.calculateRelevance(args.documentPath, recentWork);
                break;
              }
              default:
                continue;
            }
            
            analyses.push(analysis);
            qualityReport.push(`${analysisType.toUpperCase()} (${Math.round(analysis.score * 100)}%): ${analysis.insights.join(', ')}`);
          }
        }
        
        // Generate basic quality metrics
        const basicMetrics = await generateBasicQualityMetrics(args.documentPath);
        
        return {
          success: true,
          documentPath: args.documentPath,
          overallScore: analyses.length > 0 ? Math.round(analyses.reduce((sum, a) => sum + a.score, 0) / analyses.length * 100) : null,
          basicMetrics,
          qualityReport,
          recommendations: analyses.flatMap(a => a.suggestions),
          analyzedAt: localizationService.getCurrentDateTimeString()
        };
      } catch (error) {
        logger.error('Failed to analyze document quality:', error);
        throw error;
      }
    });
  • Registration of the analyze_document_quality tool in the enhanced tools array, including name, description, and input schema for MCP integration.
    {
      name: 'analyze_document_quality',
      description: 'Perform comprehensive quality analysis on documentation with AI insights',
      inputSchema: {
        type: 'object',
        properties: {
          documentPath: {
            type: 'string',
            description: 'Path to the document to analyze'
          },
          includeAI: {
            type: 'boolean',
            description: 'Include AI-powered analysis',
            default: true
          },
          analysisTypes: {
            type: 'array',
            items: {
              type: 'string',
              enum: ['quality', 'duplicate', 'relevance', 'completeness']
            },
            description: 'Types of analysis to perform',
            default: ['quality']
          }
        },
        required: ['documentPath']
      }
    },
  • Zod TypeScript schema definition for validating inputs to the analyze_document_quality tool.
    export const AnalyzeDocumentQualitySchema = z.object({
      documentPath: z.string(),
      includeAI: z.boolean().default(true),
      analysisTypes: z.array(z.enum(['quality', 'duplicate', 'relevance', 'completeness'])).default(['quality']),
    });
  • Helper function called by the handler to compute basic file metrics (size, lines, words, last modified) for the quality report.
    async function generateBasicQualityMetrics(documentPath: string): Promise<string[]> {
      try {
        const stats = await fs.stat(documentPath);
        const content = await fs.readFile(documentPath, 'utf-8');
        
        return [
          `File Size: ${Math.round(stats.size / 1024)} KB`,
          `Lines: ${content.split('\n').length}`,
          `Words: ${content.split(/\s+/).filter(Boolean).length}`,
          `Last Modified: ${stats.mtime.toISOString()}`
        ];
      } catch (error) {
        return [`Error reading file: ${error}`];
      }
    }
  • Helper function used by the handler to verify if the target document file exists before analysis.
    async function checkDocumentExists(documentPath: string): Promise<boolean> {
      try {
        await fs.access(documentPath);
        return true;
      } catch {
        return false;
      }
    }
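The handler's overallScore field is the rounded mean of the individual analysis scores, or null when no AI analyses ran. That aggregation can be pulled out into a standalone sketch (the score values here are hypothetical, not output from the real service):

```typescript
// Mirrors the overallScore computation in the handler above:
// average the 0..1 scores, scale to a percentage, round.
function overallScore(analyses: { score: number }[]): number | null {
  if (analyses.length === 0) return null;
  const mean = analyses.reduce((sum, a) => sum + a.score, 0) / analyses.length;
  return Math.round(mean * 100);
}
```

With scores of 0.8 and 0.9 this yields 85; an empty list yields null, matching the handler's ternary.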
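The Zod schema above fills in defaults during parsing. Setting the library aside, the observable effect can be approximated with a plain function (a sketch of the defaulting behavior only; it performs none of Zod's enum or type validation):

```typescript
type AnalysisType = "quality" | "duplicate" | "relevance" | "completeness";

interface QualityArgs {
  documentPath: string;
  includeAI: boolean;
  analysisTypes: AnalysisType[];
}

// Approximates what parsing with AnalyzeDocumentQualitySchema yields:
// missing optional fields are filled with their declared defaults.
function withDefaults(input: {
  documentPath: string;
  includeAI?: boolean;
  analysisTypes?: AnalysisType[];
}): QualityArgs {
  return {
    documentPath: input.documentPath,
    includeAI: input.includeAI ?? true,
    analysisTypes: input.analysisTypes ?? ["quality"],
  };
}
```

Calling it with only a documentPath produces the same shape the handler expects when optional arguments are omitted.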
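The metrics helper reads from disk, which makes it awkward to demonstrate inline, but the counting step can be isolated on an in-memory string. Filtering empty tokens after splitting on whitespace avoids overcounting when the content has leading or trailing whitespace:

```typescript
// Count lines and words the way the metrics helper does,
// but on a string instead of a contents of a file on disk.
function countMetrics(content: string): { lines: number; words: number } {
  return {
    lines: content.split("\n").length,
    words: content.split(/\s+/).filter(Boolean).length,
  };
}
```

For the string `"# Title\n\nOne two three.\n"` this reports 4 lines and 5 words (the `#` token counts as a word).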
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'perform comprehensive quality analysis' implies a read-only operation, it doesn't specify whether this tool modifies the document, requires specific permissions, has rate limits, or what the output format looks like. The mention of 'AI insights' hints at computational intensity but lacks concrete behavioral details needed for safe invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that gets straight to the point with no wasted words. It's front-loaded with the core action and resource, making it easy to parse. Every part of the sentence ('comprehensive quality analysis,' 'documentation,' 'AI insights') contributes to understanding without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a 3-parameter tool with no annotations and no output schema, the description is inadequate. It doesn't explain what the analysis returns, how results are structured, or any behavioral constraints. While the schema covers parameters, the overall context for using this tool—especially alongside siblings like 'docs_validate'—is missing, leaving significant gaps for an agent to operate effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds no meaningful parameter semantics beyond what's in the schema: it doesn't explain how 'documentPath' should be formatted, what 'AI-powered analysis' entails, or how values in 'analysisTypes' interact. With schema coverage this high, the baseline score of 3 is appropriate; the description neither compensates for gaps nor detracts.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Perform comprehensive quality analysis') and the resource ('documentation'), making the purpose understandable. It adds 'with AI insights' which provides additional context about the approach. However, it doesn't explicitly differentiate this from sibling tools like 'docs_validate' or 'generate_documentation_report', which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'docs_validate' and 'generate_documentation_report' that might handle similar tasks, there's no indication of when this specific analysis tool is preferred, what prerequisites exist, or any exclusions. This leaves the agent to guess based on tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
