analyze-quality

Analyzes code quality in a repository. Specify the repository path, optional include/exclude file patterns, a minimum severity level, and an issue limit to control which issues are identified and reported.

Input Schema

Name            Required  Description                         Default
repositoryPath  Yes       Path to the repository to analyze
includePaths    No        Patterns of files to include        ['**/*.*']
excludePaths    No        Patterns of files to exclude        ['**/node_modules/**', '**/dist/**', '**/build/**', '**/.git/**']
maxIssues       No        Maximum number of issues to report  1000
minSeverity     No        Minimum severity level to report    'warning'

Defaults are the values the analyzeCodeQuality handler applies when an optional parameter is omitted.
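
As a concrete illustration of the schema, a call to this tool might pass arguments like the following. The path and glob patterns are hypothetical examples, not values taken from the repository:

    // Illustrative arguments object conforming to the input schema above.
    const exampleArguments = {
      repositoryPath: "/path/to/repo",           // required
      includePaths: ["src/**/*.ts"],             // optional; handler default is ['**/*.*']
      excludePaths: ["**/node_modules/**"],      // optional; handler default excludes node_modules, dist, build, .git
      maxIssues: 100,                            // optional; handler default is 1000
      minSeverity: "warning"                     // optional; one of "error" | "warning" | "info"
    };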

Implementation Reference

  • Core handler function that performs the code quality analysis: it discovers files via glob, applies the registered rules to each file, integrates code metrics to flag complexity issues, filters by severity and the maximum issue count, computes per-file and per-rule summaries, and returns a structured QualityAnalysisResult (a sketch of the result and issue types follows this list)
    export async function analyzeCodeQuality(
      repositoryPath: string,
      options: {
        includePaths?: string[];
        excludePaths?: string[];
        maxIssues?: number;
        minSeverity?: 'error' | 'warning' | 'info';
      } = {}
    ): Promise<QualityAnalysisResult> {
      const {
        includePaths = ['**/*.*'],
        excludePaths = ['**/node_modules/**', '**/dist/**', '**/build/**', '**/.git/**'],
        maxIssues = 1000,
        minSeverity = 'warning'
      } = options;

      // Find files to analyze
      const files = await glob(includePaths, {
        cwd: repositoryPath,
        ignore: excludePaths,
        absolute: false,
        nodir: true
      });

      // Initialize result
      const result: QualityAnalysisResult = {
        issueCount: { errors: 0, warnings: 0, info: 0 },
        issues: [],
        summary: { byFile: {}, byRule: {} },
        metadata: { analyzedFiles: files.length, languageBreakdown: {} }
      };

      // Track language breakdown
      for (const file of files) {
        const ext = path.extname(file).toLowerCase();
        result.metadata.languageBreakdown[ext] = (result.metadata.languageBreakdown[ext] || 0) + 1;
      }

      try {
        // Get code metrics for complexity-based issues
        const metricsResult = await analyzeCodeMetrics(repositoryPath);

        // Analyze each file
        let issueCount = 0;
        for (const file of files) {
          if (issueCount >= maxIssues) break;

          try {
            const fullPath = path.join(repositoryPath, file);
            const ext = path.extname(file).toLowerCase();
            const content = await fs.readFile(fullPath, 'utf8');

            // Apply rules to this file
            let fileIssues = applyRules(content, file, ext);

            // Add complexity-based issues by integrating with metrics data
            const fileMetrics = metricsResult.files?.find(f => f.filePath === file);
            if (fileMetrics && fileMetrics.cyclomaticComplexity > 10) {
              fileIssues.push({
                type: 'complexity',
                severity: 'warning',
                file: file,
                message: `High cyclomatic complexity: ${fileMetrics.cyclomaticComplexity}`,
                rule: 'max-complexity'
              });
            }

            // Filter by severity
            fileIssues = fileIssues.filter(issue => {
              if (minSeverity === 'error') return issue.severity === 'error';
              if (minSeverity === 'warning') return issue.severity === 'error' || issue.severity === 'warning';
              return true;
            });

            // Update summary
            if (fileIssues.length > 0) {
              result.summary.byFile[file] = { errors: 0, warnings: 0, info: 0 };

              for (const issue of fileIssues) {
                // Update issue counts using the mapping function
                const severityKey = getSeverityKey(issue.severity);
                result.issueCount[severityKey]++;
                result.summary.byFile[file][severityKey]++;

                // Update rule summary
                if (!result.summary.byRule[issue.rule]) {
                  result.summary.byRule[issue.rule] = { errors: 0, warnings: 0, info: 0 };
                }
                result.summary.byRule[issue.rule][severityKey]++;
              }
            }

            // Add issues to result
            result.issues.push(...fileIssues);
            issueCount += fileIssues.length;
          } catch (error) {
            console.error(`Error analyzing file ${file}:`, error);
          }
        }
      } catch (error) {
        // Handle the error if code metrics analysis fails
        console.error('Error getting code metrics:', error);
        // Continue with just the regular quality analysis
      }

      // Sort issues by severity (errors first, then warnings, then info)
      result.issues.sort((a, b) => {
        const severityOrder = { error: 0, warning: 1, info: 2 };
        return severityOrder[a.severity] - severityOrder[b.severity];
      });

      // Ensure we don't exceed maxIssues
      if (result.issues.length > maxIssues) {
        result.issues = result.issues.slice(0, maxIssues);
      }

      return result;
    }
  • Zod input schema defining the parameters of the analyze-quality tool: the repository path, include/exclude file patterns, the maximum issue count, and the minimum severity filter
    {
      repositoryPath: z.string().describe("Path to the repository to analyze"),
      includePaths: z.array(z.string()).optional().describe("Patterns of files to include"),
      excludePaths: z.array(z.string()).optional().describe("Patterns of files to exclude"),
      maxIssues: z.number().optional().describe("Maximum number of issues to report"),
      minSeverity: z.enum(["error", "warning", "info"]).optional().describe("Minimum severity level to report")
    },
  • Registers the analyze-quality tool on the MCP server with its input schema and a thin wrapper handler that calls the core analyzeCodeQuality function and formats the result as MCP text content (a client-side invocation sketch follows this list)
    server.tool(
      "analyze-quality",
      {
        repositoryPath: z.string().describe("Path to the repository to analyze"),
        includePaths: z.array(z.string()).optional().describe("Patterns of files to include"),
        excludePaths: z.array(z.string()).optional().describe("Patterns of files to exclude"),
        maxIssues: z.number().optional().describe("Maximum number of issues to report"),
        minSeverity: z.enum(["error", "warning", "info"]).optional().describe("Minimum severity level to report")
      },
      async ({ repositoryPath, includePaths, excludePaths, maxIssues, minSeverity }) => {
        try {
          console.log(`Analyzing code quality in: ${repositoryPath}`);

          // Perform the analysis
          const qualityReport = await analyzeCodeQuality(repositoryPath, {
            includePaths,
            excludePaths,
            maxIssues,
            minSeverity
          });

          return {
            content: [{
              type: "text",
              text: JSON.stringify(qualityReport, null, 2)
            }]
          };
        } catch (error) {
          return {
            content: [{
              type: "text",
              text: `Error analyzing code quality: ${(error as Error).message}`
            }],
            isError: true
          };
        }
      }
    );
  • Rule registry defining the extensible set of quality rules (no-console, max-line-length, no-empty-catch, no-todo-comments) applied during analysis (a sketch of adding a custom rule follows this list)
    const ruleRegistry: QualityRule[] = [
      // JavaScript/TypeScript rules
      {
        id: 'no-console',
        name: 'No Console Statements',
        description: 'Avoid console statements in production code',
        languages: ['js', 'jsx', 'ts', 'tsx'],
        severity: 'warning',
        analyze: (content, filePath) => {
          const issues: QualityIssue[] = [];
          const lines = content.split('\n');

          lines.forEach((line, i) => {
            if (/console\.(log|warn|error|info|debug)\(/.test(line)) {
              issues.push({
                type: 'quality',
                severity: 'warning',
                file: filePath,
                line: i + 1,
                message: 'Console statement should be removed in production code',
                rule: 'no-console',
                context: line.trim()
              });
            }
          });

          return issues;
        }
      },
      {
        id: 'max-line-length',
        name: 'Maximum Line Length',
        description: 'Lines should not exceed 100 characters',
        languages: ['js', 'jsx', 'ts', 'tsx', 'py', 'java', 'go', 'rb'],
        severity: 'info',
        analyze: (content, filePath) => {
          const issues: QualityIssue[] = [];
          const lines = content.split('\n');

          lines.forEach((line, i) => {
            if (line.length > 100) {
              issues.push({
                type: 'style',
                severity: 'info',
                file: filePath,
                line: i + 1,
                message: 'Line exceeds 100 characters',
                rule: 'max-line-length'
              });
            }
          });

          return issues;
        }
      },
      {
        id: 'no-empty-catch',
        name: 'No Empty Catch Blocks',
        description: 'Catch blocks should not be empty',
        languages: ['js', 'jsx', 'ts', 'tsx', 'java'],
        severity: 'warning',
        analyze: (content, filePath) => {
          const issues: QualityIssue[] = [];
          const lines = content.split('\n');

          for (let i = 0; i < lines.length; i++) {
            if (/catch\s*\([^)]*\)\s*{/.test(lines[i])) {
              // Look for empty catch block
              let j = i + 1;
              let isEmpty = true;

              while (j < lines.length && !lines[j].includes('}')) {
                const trimmed = lines[j].trim();
                if (trimmed !== '' && !trimmed.startsWith('//')) {
                  isEmpty = false;
                  break;
                }
                j++;
              }

              if (isEmpty) {
                issues.push({
                  type: 'error-handling',
                  severity: 'warning',
                  file: filePath,
                  line: i + 1,
                  message: 'Empty catch block',
                  rule: 'no-empty-catch',
                  context: lines[i].trim()
                });
              }
            }
          }

          return issues;
        }
      },
      // Generic rules for all languages
      {
        id: 'no-todo-comments',
        name: 'No TODO Comments',
        description: 'TODO comments should be addressed',
        languages: ['*'],
        severity: 'info',
        analyze: (content, filePath) => {
          const issues: QualityIssue[] = [];
          const lines = content.split('\n');

          lines.forEach((line, i) => {
            if (/(?:\/\/|\/\*|#|<!--)\s*(?:TODO|FIXME|XXX)/.test(line)) {
              issues.push({
                type: 'documentation',
                severity: 'info',
                file: filePath,
                line: i + 1,
                message: 'TODO comment found',
                rule: 'no-todo-comments',
                context: line.trim()
              });
            }
          });

          return issues;
        }
      }
    ];
  • Helper function that selects applicable rules based on file extension and applies them to file content to generate QualityIssue[]
    function applyRules(content: string, filePath: string, ext: string): QualityIssue[] {
      const issues: QualityIssue[] = [];

      // Get applicable rules for this file type
      const applicableRules = ruleRegistry.filter(rule =>
        rule.languages.includes('*') || rule.languages.includes(ext.replace('.', ''))
      );

      // Apply each rule
      for (const rule of applicableRules) {
        const ruleIssues = rule.analyze(content, filePath);
        issues.push(...ruleIssues);
      }

      return issues;
    }
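
The QualityAnalysisResult, QualityIssue, and QualityRule types referenced above are not reproduced on this page. Based on how the handler and rule registry use them, they roughly take the shape sketched below; the field names are inferred from the code, not copied from the repository's type definitions:

    // Sketch of the shapes implied by the code above (inferred, not the repo's actual declarations).
    type Severity = 'error' | 'warning' | 'info';

    interface QualityIssue {
      type: string;          // e.g. 'quality', 'style', 'error-handling', 'documentation', 'complexity'
      severity: Severity;
      file: string;          // path relative to the repository root
      line?: number;         // omitted for file-level issues such as 'max-complexity'
      message: string;
      rule: string;          // rule id, e.g. 'no-console'
      context?: string;      // the offending line, trimmed
    }

    interface QualityRule {
      id: string;
      name: string;
      description: string;
      languages: string[];   // file extensions without the dot, or '*' for all languages
      severity: Severity;
      analyze: (content: string, filePath: string) => QualityIssue[];
    }

    interface QualityAnalysisResult {
      issueCount: { errors: number; warnings: number; info: number };
      issues: QualityIssue[];
      summary: {
        byFile: Record<string, { errors: number; warnings: number; info: number }>;
        byRule: Record<string, { errors: number; warnings: number; info: number }>;
      };
      metadata: {
        analyzedFiles: number;
        languageBreakdown: Record<string, number>;  // keyed by file extension, e.g. '.ts'
      };
    }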
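
From the client side, the registered tool is invoked like any other MCP tool. The following is a minimal sketch assuming the standard @modelcontextprotocol/sdk client over a stdio transport; the launch command, entry-point path, and client name are placeholders rather than values from this repository:

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Launch the server over stdio; command and args are placeholders for how this server is started.
    const transport = new StdioClientTransport({ command: "node", args: ["build/index.js"] });
    const client = new Client({ name: "quality-demo-client", version: "1.0.0" }, { capabilities: {} });
    await client.connect(transport);

    // Call the analyze-quality tool; the wrapper handler returns the JSON report as a text content block.
    const response = await client.callTool({
      name: "analyze-quality",
      arguments: { repositoryPath: "/path/to/repo", minSeverity: "error", maxIssues: 50 }
    });
    console.log(response.content);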
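
Because the registry is just an array of QualityRule objects, extending the analysis amounts to appending another entry that applyRules will pick up automatically. The rule below is a hypothetical example (not part of the repository) showing the shape a new rule would take, assuming it is defined in the same module as ruleRegistry:

    // Hypothetical extra rule following the same shape as the built-in rules above.
    const noDebuggerRule: QualityRule = {
      id: 'no-debugger',
      name: 'No Debugger Statements',
      description: 'debugger statements should not be committed',
      languages: ['js', 'jsx', 'ts', 'tsx'],
      severity: 'warning',
      analyze: (content, filePath) => {
        const issues: QualityIssue[] = [];
        content.split('\n').forEach((line, i) => {
          if (/\bdebugger\b/.test(line)) {
            issues.push({
              type: 'quality',
              severity: 'warning',
              file: filePath,
              line: i + 1,
              message: 'Debugger statement should be removed',
              rule: 'no-debugger',
              context: line.trim()
            });
          }
        });
        return issues;
      }
    };

    ruleRegistry.push(noDebuggerRule);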
