Glama

analyze-quality

Analyzes code quality in a repository by applying rule-based checks to detect potential problems. Results can be filtered by minimum severity level and by customizable file include/exclude patterns.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| repositoryPath | Yes | Path to the repository to analyze | – |
| includePaths | No | Patterns of files to include | `['**/*.*']` |
| excludePaths | No | Patterns of files to exclude | `['**/node_modules/**', '**/dist/**', '**/build/**', '**/.git/**']` |
| maxIssues | No | Maximum number of issues to report | `1000` |
| minSeverity | No | Minimum severity level to report (`error`, `warning`, or `info`) | `'warning'` |
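
As an illustration, a tool call might supply arguments like the following (the paths and patterns here are hypothetical):

```json
{
  "repositoryPath": "/path/to/repo",
  "includePaths": ["src/**/*.ts"],
  "excludePaths": ["**/*.test.ts"],
  "maxIssues": 100,
  "minSeverity": "warning"
}
```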

Implementation Reference

  • Registers the 'analyze-quality' tool on the MCP server, defining the input schema using Zod and providing the handler function that delegates to analyzeCodeQuality.
```typescript
server.tool(
  "analyze-quality",
  {
    repositoryPath: z.string().describe("Path to the repository to analyze"),
    includePaths: z.array(z.string()).optional().describe("Patterns of files to include"),
    excludePaths: z.array(z.string()).optional().describe("Patterns of files to exclude"),
    maxIssues: z.number().optional().describe("Maximum number of issues to report"),
    minSeverity: z.enum(["error", "warning", "info"]).optional().describe("Minimum severity level to report")
  },
  async ({ repositoryPath, includePaths, excludePaths, maxIssues, minSeverity }) => {
    try {
      console.log(`Analyzing code quality in: ${repositoryPath}`);

      // Perform the analysis
      const qualityReport = await analyzeCodeQuality(repositoryPath, {
        includePaths,
        excludePaths,
        maxIssues,
        minSeverity
      });

      return {
        content: [{ type: "text", text: JSON.stringify(qualityReport, null, 2) }]
      };
    } catch (error) {
      return {
        content: [{ type: "text", text: `Error analyzing code quality: ${(error as Error).message}` }],
        isError: true
      };
    }
  }
);
```
  • Core helper function implementing the code quality analysis logic: file discovery, rule-based issue detection across languages, complexity analysis integration, and reporting.
```typescript
export async function analyzeCodeQuality(
  repositoryPath: string,
  options: {
    includePaths?: string[];
    excludePaths?: string[];
    maxIssues?: number;
    minSeverity?: 'error' | 'warning' | 'info';
  } = {}
): Promise<QualityAnalysisResult> {
  const {
    includePaths = ['**/*.*'],
    excludePaths = ['**/node_modules/**', '**/dist/**', '**/build/**', '**/.git/**'],
    maxIssues = 1000,
    minSeverity = 'warning'
  } = options;

  // Find files to analyze
  const files = await glob(includePaths, {
    cwd: repositoryPath,
    ignore: excludePaths,
    absolute: false,
    nodir: true
  });

  // Initialize result
  const result: QualityAnalysisResult = {
    issueCount: { errors: 0, warnings: 0, info: 0 },
    issues: [],
    summary: { byFile: {}, byRule: {} },
    metadata: { analyzedFiles: files.length, languageBreakdown: {} }
  };

  // Track language breakdown
  for (const file of files) {
    const ext = path.extname(file).toLowerCase();
    result.metadata.languageBreakdown[ext] = (result.metadata.languageBreakdown[ext] || 0) + 1;
  }

  try {
    // Get code metrics for complexity-based issues
    const metricsResult = await analyzeCodeMetrics(repositoryPath);

    // Analyze each file
    let issueCount = 0;
    for (const file of files) {
      if (issueCount >= maxIssues) break;

      try {
        const fullPath = path.join(repositoryPath, file);
        const ext = path.extname(file).toLowerCase();
        const content = await fs.readFile(fullPath, 'utf8');

        // Apply rules to this file
        let fileIssues = applyRules(content, file, ext);

        // Add complexity-based issues by integrating with metrics data
        const fileMetrics = metricsResult.files?.find(f => f.filePath === file);
        if (fileMetrics && fileMetrics.cyclomaticComplexity > 10) {
          fileIssues.push({
            type: 'complexity',
            severity: 'warning',
            file: file,
            message: `High cyclomatic complexity: ${fileMetrics.cyclomaticComplexity}`,
            rule: 'max-complexity'
          });
        }

        // Filter by severity
        fileIssues = fileIssues.filter(issue => {
          if (minSeverity === 'error') return issue.severity === 'error';
          if (minSeverity === 'warning') return issue.severity === 'error' || issue.severity === 'warning';
          return true;
        });

        // Update summary
        if (fileIssues.length > 0) {
          result.summary.byFile[file] = { errors: 0, warnings: 0, info: 0 };
          for (const issue of fileIssues) {
            // Update issue counts using the mapping function
            const severityKey = getSeverityKey(issue.severity);
            result.issueCount[severityKey]++;
            result.summary.byFile[file][severityKey]++;

            // Update rule summary
            if (!result.summary.byRule[issue.rule]) {
              result.summary.byRule[issue.rule] = { errors: 0, warnings: 0, info: 0 };
            }
            result.summary.byRule[issue.rule][severityKey]++;
          }
        }

        // Add issues to result
        result.issues.push(...fileIssues);
        issueCount += fileIssues.length;
      } catch (error) {
        console.error(`Error analyzing file ${file}:`, error);
      }
    }
  } catch (error) {
    // Handle the error if code metrics analysis fails
    console.error('Error getting code metrics:', error);
    // Continue with just the regular quality analysis
  }

  // Sort issues by severity (errors first, then warnings, then info)
  result.issues.sort((a, b) => {
    const severityOrder = { error: 0, warning: 1, info: 2 };
    return severityOrder[a.severity] - severityOrder[b.severity];
  });

  // Ensure we don't exceed maxIssues
  if (result.issues.length > maxIssues) {
    result.issues = result.issues.slice(0, maxIssues);
  }

  return result;
}
```
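
The function above relies on a `getSeverityKey` helper that is not shown on this page. A minimal sketch, assuming it simply maps a severity to the plural key used by the count objects (`errors`, `warnings`, `info`):

```typescript
// Hypothetical sketch of getSeverityKey: maps an issue severity to the
// corresponding key in issueCount / summary objects. Note that 'info'
// maps to itself, while the other two severities pluralize.
type Severity = 'error' | 'warning' | 'info';
type SeverityKey = 'errors' | 'warnings' | 'info';

function getSeverityKey(severity: Severity): SeverityKey {
  const mapping: Record<Severity, SeverityKey> = {
    error: 'errors',
    warning: 'warnings',
    info: 'info'
  };
  return mapping[severity];
}
```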
  • Registry of quality rules applied during analysis, including rules for console statements, line length, empty catch blocks, and TODO comments, with support for multiple languages.
```typescript
const ruleRegistry: QualityRule[] = [
  // JavaScript/TypeScript rules
  {
    id: 'no-console',
    name: 'No Console Statements',
    description: 'Avoid console statements in production code',
    languages: ['js', 'jsx', 'ts', 'tsx'],
    severity: 'warning',
    analyze: (content, filePath) => {
      const issues: QualityIssue[] = [];
      const lines = content.split('\n');
      lines.forEach((line, i) => {
        if (/console\.(log|warn|error|info|debug)\(/.test(line)) {
          issues.push({
            type: 'quality',
            severity: 'warning',
            file: filePath,
            line: i + 1,
            message: 'Console statement should be removed in production code',
            rule: 'no-console',
            context: line.trim()
          });
        }
      });
      return issues;
    }
  },
  {
    id: 'max-line-length',
    name: 'Maximum Line Length',
    description: 'Lines should not exceed 100 characters',
    languages: ['js', 'jsx', 'ts', 'tsx', 'py', 'java', 'go', 'rb'],
    severity: 'info',
    analyze: (content, filePath) => {
      const issues: QualityIssue[] = [];
      const lines = content.split('\n');
      lines.forEach((line, i) => {
        if (line.length > 100) {
          issues.push({
            type: 'style',
            severity: 'info',
            file: filePath,
            line: i + 1,
            message: 'Line exceeds 100 characters',
            rule: 'max-line-length'
          });
        }
      });
      return issues;
    }
  },
  {
    id: 'no-empty-catch',
    name: 'No Empty Catch Blocks',
    description: 'Catch blocks should not be empty',
    languages: ['js', 'jsx', 'ts', 'tsx', 'java'],
    severity: 'warning',
    analyze: (content, filePath) => {
      const issues: QualityIssue[] = [];
      const lines = content.split('\n');
      for (let i = 0; i < lines.length; i++) {
        if (/catch\s*\([^)]*\)\s*{/.test(lines[i])) {
          // Look for empty catch block
          let j = i + 1;
          let isEmpty = true;
          while (j < lines.length && !lines[j].includes('}')) {
            const trimmed = lines[j].trim();
            if (trimmed !== '' && !trimmed.startsWith('//')) {
              isEmpty = false;
              break;
            }
            j++;
          }
          if (isEmpty) {
            issues.push({
              type: 'error-handling',
              severity: 'warning',
              file: filePath,
              line: i + 1,
              message: 'Empty catch block',
              rule: 'no-empty-catch',
              context: lines[i].trim()
            });
          }
        }
      }
      return issues;
    }
  },
  // Generic rules for all languages
  {
    id: 'no-todo-comments',
    name: 'No TODO Comments',
    description: 'TODO comments should be addressed',
    languages: ['*'],
    severity: 'info',
    analyze: (content, filePath) => {
      const issues: QualityIssue[] = [];
      const lines = content.split('\n');
      lines.forEach((line, i) => {
        if (/(?:\/\/|\/\*|#|<!--)\s*(?:TODO|FIXME|XXX)/.test(line)) {
          issues.push({
            type: 'documentation',
            severity: 'info',
            file: filePath,
            line: i + 1,
            message: 'TODO comment found',
            rule: 'no-todo-comments',
            context: line.trim()
          });
        }
      });
      return issues;
    }
  }
];
```
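
The `applyRules` helper that runs this registry against a file is not shown on this page. A plausible sketch, assuming each rule's `languages` list is matched against the file extension (the real implementation presumably closes over the module-level `ruleRegistry` rather than taking a `rules` parameter, so the signature here is an assumption):

```typescript
// Hypothetical sketch of applyRules: pick the rules whose languages list
// matches the file's extension (or is the '*' wildcard), run each rule's
// analyze function, and concatenate the resulting issues.
type Severity = 'error' | 'warning' | 'info';

interface QualityIssue {
  type: string;
  severity: Severity;
  file: string;
  line?: number;
  message: string;
  rule: string;
  context?: string;
}

interface QualityRule {
  id: string;
  name: string;
  description: string;
  languages: string[]; // extensions without the dot, or ['*'] for all
  severity: Severity;
  analyze: (content: string, filePath: string) => QualityIssue[];
}

function applyRules(
  content: string,
  filePath: string,
  ext: string,
  rules: QualityRule[]
): QualityIssue[] {
  const lang = ext.replace(/^\./, ''); // '.ts' -> 'ts'
  return rules
    .filter(r => r.languages.includes('*') || r.languages.includes(lang))
    .flatMap(r => r.analyze(content, filePath));
}
```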

MCP directory API

We provide all the information about MCP servers via our MCP API:

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/0xjcf/MCP_CodeAnalysis'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.