Glama
TheAlchemist6

CodeCompass MCP

explain_code

Generate human-readable code explanations and documentation from GitHub repositories. Transform technical analysis into accessible overviews, tutorials, and architectural insights for developers.

Instructions

📚 AI-powered code explanation generating human-readable documentation, tutorials, and architectural insights. Transforms technical analysis into accessible explanations.

Input Schema

Name | Required | Description | Default
url | Yes | GitHub repository URL | (none)
file_paths | No | Specific files to explain (optional; key files are explained if not specified) | (none)
explanation_type | No | Type of explanation to generate | overview
options | No | Additional options (model, audience, examples, diagrams, patterns) | (none)
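
A call shaped by this schema might look like the following arguments object. The repository URL and file path are illustrative only, not taken from a real invocation.

```json
{
  "url": "https://github.com/TheAlchemist6/codecompass-mcp",
  "file_paths": ["src/index.ts"],
  "explanation_type": "architecture",
  "options": {
    "ai_model": "auto",
    "target_audience": "beginner",
    "include_diagrams": true
  }
}
```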

Implementation Reference

  • Main execution handler for 'explain_code' tool. Fetches repo info and files, generates AI explanation using OpenAI service based on type (overview, detailed, architecture, etc.).
    async function handleExplainCode(args: any) {
      try {
        const { url, file_paths, explanation_type = 'overview', options = {} } = args;
        
        // Get repository info and code content
        const repoInfo = await githubService.getRepositoryInfo(url);
        let filesToExplain: Record<string, string> = {};
        
        if (file_paths && file_paths.length > 0) {
          // Get specific files
          for (const filePath of file_paths) {
            try {
              const content = await githubService.getFileContent(url, filePath);
              filesToExplain[filePath] = content;
            } catch (error) {
              // Skip files that can't be fetched
            }
          }
        } else {
          // Use key files from repository
          filesToExplain = repoInfo.keyFiles;
        }
        
        if (Object.keys(filesToExplain).length === 0) {
          throw new Error('No files found to explain');
        }
        
        // Generate AI explanation based on type
        let aiExplanation: string;
        let aiExplanationResult: { content: string; modelUsed: string; warning?: string };
        
        switch (explanation_type) {
          case 'architecture':
            aiExplanation = await openaiService.explainArchitecture(url, repoInfo);
            // For architecture, create a mock result for consistency
            aiExplanationResult = {
              content: aiExplanation,
              modelUsed: options.ai_model || 'anthropic/claude-3.5-sonnet',
              warning: undefined
            };
            break;
          case 'overview':
          case 'detailed':
          case 'tutorial':
          case 'integration':
          default:
            // Create a prompt for the specific explanation type
            const codeContext = Object.entries(filesToExplain)
              .map(([path, content]) => `--- ${path} ---\n${content}`)
              .join('\n\n');
            
            const prompt = `Please provide a ${explanation_type} explanation of this ${repoInfo.language || 'code'} repository:
    
    Repository: ${repoInfo.name}
    Description: ${repoInfo.description || 'No description'}
    Language: ${repoInfo.language || 'Multiple'}
    
    Code:
    ${codeContext}
    
    Please focus on:
    ${options.focus_on_patterns ? '- Design patterns and architecture' : ''}
    ${options.include_examples ? '- Code examples and usage' : ''}
    ${options.include_diagrams ? '- Visual diagrams where helpful' : ''}
    
    Target audience: ${options.target_audience || 'intermediate'}`;
            
            aiExplanationResult = await openaiService.chatWithRepository(url, prompt, undefined, options.ai_model);
            aiExplanation = aiExplanationResult.content;
            break;
        }
        
        const result = {
          repository: {
            name: repoInfo.name,
            description: repoInfo.description,
            language: repoInfo.language,
            owner: repoInfo.owner,
          },
          explanation: {
            type: explanation_type,
            files_analyzed: Object.keys(filesToExplain),
            ai_model_used: aiExplanationResult.modelUsed,
            ai_model_requested: options.ai_model || 'auto',
            target_audience: options.target_audience || 'intermediate',
            content: aiExplanation,
            timestamp: new Date().toISOString(),
            model_warning: aiExplanationResult.warning,
          },
          metadata: {
            file_count: Object.keys(filesToExplain).length,
            total_lines: Object.values(filesToExplain).reduce((sum, content) => sum + content.split('\n').length, 0),
          },
        };
        
        const response = createResponse(result);
        return formatToolResponse(response);
      } catch (error) {
        const response = createResponse(null, error);
        return formatToolResponse(response);
      }
    }
  • Tool definition including name, description, and input schema for validation.
    {
      name: 'explain_code',
      description: '📚 AI-powered code explanation generating human-readable documentation, tutorials, and architectural insights. Transforms technical analysis into accessible explanations.',
      inputSchema: {
        type: 'object',
        properties: {
          url: {
            type: 'string',
            description: 'GitHub repository URL',
          },
          file_paths: {
            type: 'array',
            items: { type: 'string' },
            description: 'Specific files to explain (optional - explains key files if not specified)',
          },
          explanation_type: {
            type: 'string',
            enum: ['overview', 'detailed', 'architecture', 'tutorial', 'integration'],
            description: 'Type of explanation to generate',
            default: 'overview',
          },
          options: {
            type: 'object',
            properties: {
              ai_model: {
                type: 'string',
                description: 'AI model to use for explanation (OpenRouter models). Use "auto" for intelligent model selection',
                default: 'auto',
              },
              target_audience: {
                type: 'string',
                enum: ['beginner', 'intermediate', 'advanced'],
                description: 'Target audience for explanation',
                default: 'intermediate',
              },
              include_examples: {
                type: 'boolean',
                description: 'Include code examples in explanations',
                default: true,
              },
              include_diagrams: {
                type: 'boolean',
                description: 'Include ASCII diagrams where helpful',
                default: true,
              },
              focus_on_patterns: {
                type: 'boolean',
                description: 'Focus on design patterns and architecture',
                default: true,
              },
            },
          },
        },
        required: ['url'],
      },
    },
  • src/index.ts:236-240 (registration)
    Registers the consolidatedTools array (including 'explain_code') for the ListToolsRequestHandler.
    server.setRequestHandler(ListToolsRequestSchema, async () => {
      return {
        tools: consolidatedTools,
      };
    });
  • src/index.ts:277-279 (registration)
    Dispatch case in CallToolRequestHandler switch statement that routes 'explain_code' calls to the handler function.
    case 'explain_code':
      result = await handleExplainCode(args);
      break;
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions 'AI-powered' and 'transforms technical analysis,' hinting at generative AI behavior, but lacks critical details: whether it makes network calls to external services, potential rate limits, authentication needs, output format (text/markdown?), or error handling. For a tool with AI integration and no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
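The MCP specification defines optional tool annotations that would close this gap directly. The values below reflect behavior inferred from the handler code on this page; the real server declares none of them.

```typescript
// Hypothetical annotations for explain_code, using the optional MCP
// ToolAnnotations fields. Values are inferred from observed behavior,
// not declared by the actual server.
const explainCodeAnnotations = {
  title: 'Explain Code',
  readOnlyHint: true,       // never writes to the repository
  destructiveHint: false,   // no destructive side effects
  idempotentHint: false,    // AI output varies between identical calls
  openWorldHint: true,      // reaches out to GitHub and an external AI model
};
```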

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized (two sentences) and front-loaded with the core purpose ('AI-powered code explanation...'). Every sentence adds value: the first defines the tool's function, and the second emphasizes transformation into accessible content. There's zero waste or redundancy, making it highly efficient for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (AI integration, 4 parameters with nested objects) and lack of both annotations and output schema, the description is incomplete. It doesn't address behavioral aspects like AI model usage, output format, or error cases, leaving gaps for the agent. For a tool with these characteristics, the description should provide more context to compensate for missing structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
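MCP also allows a tool definition to declare an output schema. A sketch mirroring the `result` object assembled in the handler above might look like this; whether the real server declares one is unknown.

```typescript
// Sketch of an outputSchema for explain_code, mirroring the result
// object built in handleExplainCode (repository / explanation / metadata).
// This is illustrative; the actual server ships no output schema.
const outputSchema = {
  type: 'object',
  properties: {
    repository: {
      type: 'object',
      properties: {
        name: { type: 'string' },
        description: { type: 'string' },
        language: { type: 'string' },
        owner: { type: 'string' },
      },
    },
    explanation: {
      type: 'object',
      properties: {
        type: { type: 'string' },
        content: { type: 'string' },
        ai_model_used: { type: 'string' },
        files_analyzed: { type: 'array', items: { type: 'string' } },
      },
    },
    metadata: {
      type: 'object',
      properties: {
        file_count: { type: 'number' },
        total_lines: { type: 'number' },
      },
    },
  },
  required: ['repository', 'explanation', 'metadata'],
};
```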

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 75% (good), so the baseline is 3 even without parameter details in the description. The description adds no specific parameter semantics beyond what's in the schema; it doesn't clarify URL formats, file_paths selection logic, or explanation_type nuances. However, with high schema coverage, the description isn't expected to compensate heavily, maintaining an adequate baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'AI-powered code explanation generating human-readable documentation, tutorials, and architectural insights' with the verb 'explains' implied. It specifies the resource as 'code' and distinguishes from siblings like analyze_codebase or review_code by focusing on explanation rather than analysis or review. However, it doesn't explicitly differentiate from suggest_improvements which might also involve explanation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through terms like 'human-readable documentation' and 'accessible explanations,' suggesting it's for making code understandable. However, it provides no explicit guidance on when to use this tool versus alternatives like analyze_codebase (for technical analysis) or review_code (for code quality assessment). The agent must infer appropriate usage from the description's focus on explanation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
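One way to add the missing guidance is to fold a "use X instead of Y" sentence into the description itself. The sibling tool names below come from this review; the wording is an illustrative rewrite, not the server's actual text.

```typescript
// Illustrative description rewrite: adds behavior disclosure and
// explicit tool-selection guidance missing from the original.
const revisedDescription =
  '📚 Explain a GitHub repository in human-readable form (overview, tutorial, ' +
  'architecture, or integration guide). Read-only: fetches files from GitHub ' +
  'and sends them to an external AI model, so network access and model costs ' +
  'apply. Use explain_code to understand code; prefer review_code for quality ' +
  'assessment and analyze_codebase for raw technical analysis.';
```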
