TheAlchemist6

CodeCompass MCP

review_code

Analyze GitHub repositories for security vulnerabilities, performance issues, and maintainability problems using AI insights and rule-based validation. Provides actionable recommendations with suggested fixes.

Instructions

🔍 Comprehensive code review combining AI insights with rule-based validation. Provides intelligent analysis, security scanning, and actionable recommendations.

Input Schema

Name          Required  Description                                                               Default
url           Yes       GitHub repository URL
file_paths    No        Specific files to review (optional; reviews key files if not specified)
review_mode   No        Review approach: AI-powered ("ai"), rule-based ("rules"), or "combined"   combined
review_focus  No        Areas to focus the review on                                              security, performance, maintainability
options       No        Additional review options (AI model, severity threshold, fixes, examples)
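Based on the schema above, a typical invocation might pass arguments like the following. This is an illustrative sketch; the repository URL and file paths are hypothetical, and only `url` is actually required:

```typescript
// Hypothetical example arguments for a review_code call.
// The URL and file paths are illustrative, not from a real invocation.
const reviewArgs = {
  url: "https://github.com/example/example-repo",
  file_paths: ["src/index.ts", "src/utils.ts"],
  review_mode: "combined",
  review_focus: ["security", "performance"],
  options: {
    ai_model: "auto",
    severity_threshold: "high",
    include_fixes: true,
  },
};

// Every field except `url` falls back to the schema defaults shown above.
```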

Implementation Reference

  • Main handler function that implements the review_code tool. Fetches repository files, prepares context, calls OpenAI service for code review, and formats the response.
    async function handleReviewCode(args: any) {
      try {
        const { url, file_paths, review_focus = ['security', 'performance', 'maintainability'], options = {} } = args;
        
        // Get repository info and code content
        const repoInfo = await githubService.getRepositoryInfo(url);
        let filesToReview: Record<string, string> = {};
        
        if (file_paths && file_paths.length > 0) {
          // Get specific files
          for (const filePath of file_paths) {
            try {
              const content = await githubService.getFileContent(url, filePath);
              filesToReview[filePath] = content;
            } catch (error) {
              // Skip files that can't be fetched, but surface the failure
              // instead of swallowing it silently.
              console.warn(`Skipping ${filePath}: could not fetch file`, error);
            }
          }
        } else {
          // Use key files from repository
          filesToReview = repoInfo.keyFiles;
        }
        
        if (Object.keys(filesToReview).length === 0) {
          throw new Error('No files found to review');
        }
        
        // Prepare code for AI review
        const codeContext = Object.entries(filesToReview)
          .map(([path, content]) => `--- ${path} ---\n${content}`)
          .join('\n\n');
        
        // Generate AI review with specified model
        const aiReviewResult = await openaiService.generateCodeReview(
          codeContext,
          repoInfo.language || 'javascript',
          review_focus,
          options.ai_model
        );
        
        const result = {
          repository: {
            name: repoInfo.name,
            description: repoInfo.description,
            language: repoInfo.language,
            owner: repoInfo.owner,
          },
          review: {
            files_reviewed: Object.keys(filesToReview),
            focus_areas: review_focus,
            ai_model_used: aiReviewResult.modelUsed,
            ai_model_requested: options.ai_model || 'auto',
            analysis: aiReviewResult.content,
            severity_threshold: options.severity_threshold || 'medium',
            timestamp: new Date().toISOString(),
            model_warning: aiReviewResult.warning,
          },
          recommendations: {
            priority_fixes: [],
            suggestions: [],
            best_practices: [],
          },
        };
        
        const response = createResponse(result);
        return formatToolResponse(response);
      } catch (error) {
        const response = createResponse(null, error);
        return formatToolResponse(response);
      }
    }
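The context-preparation step in the handler (joining each file into a `--- path ---` delimited block) can be isolated into a small helper. A minimal sketch, where the function name `buildCodeContext` is an assumption rather than part of the actual codebase:

```typescript
// Hypothetical helper mirroring the handler's context-preparation step:
// each file becomes a `--- path ---` header followed by its content,
// with blank lines separating files.
function buildCodeContext(files: Record<string, string>): string {
  return Object.entries(files)
    .map(([path, content]) => `--- ${path} ---\n${content}`)
    .join("\n\n");
}

const context = buildCodeContext({
  "src/a.ts": "export const a = 1;",
  "src/b.ts": "export const b = 2;",
});
```

Keeping this as a pure function makes the prompt format trivially testable without touching the GitHub or OpenAI services.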
  • Tool schema definition including input validation schema for the review_code tool.
    {
      name: 'review_code',
      description: '🔍 Comprehensive code review combining AI insights with rule-based validation. Provides intelligent analysis, security scanning, and actionable recommendations.',
      inputSchema: {
        type: 'object',
        properties: {
          url: {
            type: 'string',
            description: 'GitHub repository URL',
          },
          file_paths: {
            type: 'array',
            items: { type: 'string' },
            description: 'Specific files to review (optional - reviews key files if not specified)',
          },
          review_mode: {
            type: 'string',
            enum: ['ai', 'rules', 'combined'],
            description: 'Review approach: AI-powered, rule-based, or combined',
            default: 'combined',
          },
          review_focus: {
            type: 'array',
            items: {
              type: 'string',
              enum: ['security', 'performance', 'maintainability', 'best-practices', 'bugs', 'accessibility'],
            },
            description: 'Areas to focus the review on',
            default: ['security', 'performance', 'maintainability'],
          },
          options: {
            type: 'object',
            properties: {
              ai_model: {
                type: 'string',
                description: 'AI model to use for analysis (OpenRouter models). Use "auto" for intelligent model selection',
                default: 'auto',
              },
              severity_threshold: {
                type: 'string',
                enum: ['low', 'medium', 'high', 'critical'],
                description: 'Minimum severity level to report',
                default: 'medium',
              },
              include_fixes: {
                type: 'boolean',
                description: 'Include suggested fixes',
                default: true,
              },
              include_examples: {
                type: 'boolean',
                description: 'Include code examples in suggestions',
                default: true,
              },
              language_specific: {
                type: 'boolean',
                description: 'Include language-specific best practices',
                default: true,
              },
              framework_specific: {
                type: 'boolean',
                description: 'Include framework-specific checks',
                default: true,
              },
            },
          },
        },
        required: ['url'],
      },
    },
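The schema above implies two enum-constrained fields and one required field. A hand-rolled sketch of those checks is shown below; the real server presumably delegates validation to its MCP framework's JSON Schema machinery, so the function name and return shape here are assumptions for illustration only:

```typescript
// Sketch of the checks the inputSchema implies. Not the server's
// actual validator; enum values are copied from the schema above.
const REVIEW_MODES = ["ai", "rules", "combined"] as const;
const FOCUS_AREAS = [
  "security", "performance", "maintainability",
  "best-practices", "bugs", "accessibility",
] as const;

function validateReviewArgs(args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  if (typeof args.url !== "string" || args.url.length === 0) {
    errors.push("url is required and must be a non-empty string");
  }
  if (args.review_mode !== undefined &&
      !REVIEW_MODES.includes(args.review_mode as any)) {
    errors.push(`review_mode must be one of: ${REVIEW_MODES.join(", ")}`);
  }
  if (args.review_focus !== undefined) {
    const focus = args.review_focus;
    if (!Array.isArray(focus) ||
        focus.some((f) => !FOCUS_AREAS.includes(f))) {
      errors.push("review_focus entries must come from the documented enum");
    }
  }
  return errors;
}
```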
  • src/index.ts:274-275 (registration)
    Registration of the review_code handler in the main tool dispatch switch statement.
    case 'review_code':
      result = await handleReviewCode(args);
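In context, the registration above sits inside a dispatch switch over tool names. A self-contained sketch of that pattern follows; the handler is stubbed here for illustration, and the default-case behavior is an assumption, not confirmed from the source:

```typescript
// Sketch of a tool-dispatch switch in the style of the registration above.
// `handleReviewCode` is a stub standing in for the real handler.
type ToolResult = { ok: boolean; tool: string };

async function handleReviewCode(args: unknown): Promise<ToolResult> {
  return { ok: true, tool: "review_code" }; // stub for illustration
}

async function dispatch(name: string, args: unknown): Promise<ToolResult> {
  switch (name) {
    case "review_code":
      return await handleReviewCode(args);
    default:
      // An explicit default case keeps unknown tool names from
      // failing silently.
      throw new Error(`Unknown tool: ${name}`);
  }
}
```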
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it mentions the tool provides 'analysis, security scanning, and actionable recommendations,' it doesn't describe what happens during execution (e.g., whether it modifies code, requires authentication, has rate limits, or returns structured vs. free-text results). For a complex tool with 5 parameters and no output schema, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that packs substantial information about the tool's capabilities. It uses emoji and clear language without unnecessary elaboration. However, it could be more front-loaded with the core purpose rather than starting with an emoji, and it might benefit from slightly more structure given the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 5 parameters, nested objects, no annotations, and no output schema, the description is insufficient. It doesn't explain what the tool returns, how results are structured, error conditions, or performance characteristics. The description focuses on what the tool does at a high level but lacks the detail needed for an agent to understand the full context of using this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 80%, providing good documentation for most parameters. The description adds minimal value beyond the schema, mentioning 'AI insights' and 'rule-based validation' which loosely relate to the 'review_mode' parameter but doesn't elaborate on parameter interactions or usage patterns. With high schema coverage, the baseline of 3 is appropriate as the description doesn't significantly enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'comprehensive code review combining AI insights with rule-based validation' and lists specific outputs like 'intelligent analysis, security scanning, and actionable recommendations.' This provides a specific verb (review) and resource (code) with scope details. However, it doesn't explicitly differentiate from sibling tools like 'analyze_codebase' or 'suggest_improvements,' which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'analyze_codebase' or 'suggest_improvements.' It mentions what the tool does but doesn't specify scenarios where it's preferred over other code analysis tools in the server. There's no mention of prerequisites, limitations, or typical use cases that would help an agent choose appropriately.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
