
GitHub MCP Server

by radireddy

github.getCommentImpact

Analyze GitHub code review comments to determine if they resulted in subsequent code changes. Measures review effectiveness by examining PR timeline data and providing impact assessments with confidence scores.

Instructions

Analyze whether review comments resulted in subsequent code changes. Examines PR timeline data to determine if commits were made after comments were submitted. Returns impact assessment with confidence scores (0.0-1.0) and evidence (e.g., "Commit abc1234 modified files after comment"). Only includes comments with actual impact (commits found after comment). Includes statistics: totalComments, totalPRsReviewed, totalImpacts. Filters by repository. Use this tool to measure the effectiveness of code reviews.

Example use cases:

  • Measure review impact (how often comments lead to code changes)

  • Assess review quality and influence

  • Track review effectiveness metrics

  • Identify high-impact reviewers

Returns: Object with impacts array (commentId, prId, hadImpact, confidence, evidence) and optional stats object (totalComments, totalPRsReviewed, totalImpacts)
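
The documented return shape can be written out as a small TypeScript sketch. The interface names `CommentImpact` and `CommentImpactResponse` appear in the server's source below; the field types and the sample values are illustrative assumptions inferred from the field list above.

```typescript
// Sketch of the documented response shape; sample values are illustrative.
interface CommentImpact {
  commentId: string;
  prId: string;
  hadImpact: boolean;
  confidence: number; // 0.0-1.0
  evidence: string[];
}

interface CommentImpactResponse {
  impacts: CommentImpact[];
  // stats is only present when at least one matching comment was found
  stats?: {
    totalComments: number;
    totalPRsReviewed: number;
    totalImpacts: number;
  };
}

const example: CommentImpactResponse = {
  impacts: [
    {
      commentId: "C_example1",
      prId: "PR_example1",
      hadImpact: true,
      confidence: 0.7,
      evidence: ["Commit abc1234 modified files after comment (3 files)"],
    },
  ],
  stats: { totalComments: 5, totalPRsReviewed: 2, totalImpacts: 1 },
};

console.log(example.stats?.totalImpacts); // prints 1
```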

Input Schema

  • username (required): GitHub username (case-insensitive, @ prefix optional). Examples: "octocat", "@octocat"

  • repo (required): Repository in owner/repo format. Only comments for PRs in this repository will be analyzed. Example: "owner/repo"

  • from (required): Start timestamp in ISO 8601 format. Example: "2024-01-01T00:00:00Z"

  • to (required): End timestamp in ISO 8601 format. Example: "2024-12-31T23:59:59Z"
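
The username normalization implied by the schema (the "@" prefix is optional and matching is case-insensitive) can be sketched as a one-line helper. `normalizeUsername` is a hypothetical name; in the server's source this happens inside `validateCommonParameters`.

```typescript
// Hypothetical helper: strip an optional "@" prefix and lowercase,
// matching the schema's "case-insensitive, @ prefix optional" wording.
function normalizeUsername(raw: string): string {
  return raw.replace(/^@/, "").toLowerCase();
}

console.log(normalizeUsername("@Octocat")); // "octocat"
```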

Implementation Reference

  • Core handler function implementing github.getCommentImpact. Validates parameters, fetches review comments, groups by PR, fetches PR timelines, analyzes impact using analyzeCommentImpact helper, filters for actual impacts, and returns structured response with stats.
    async getCommentImpact(
      username: string,
      repos: string[],
      from?: string,
      to?: string
    ): Promise<CommentImpactResponse> {
      const { normalizedUsername, normalizedRepos, from: validatedFrom, to: validatedTo } =
        this.validateCommonParameters(username, repos, from, to);
    
      // Get all comments as objects (needed for PR IDs and other metadata)
      const allComments = await this.getReviewCommentsAsObjects(normalizedUsername, validatedFrom, validatedTo);
    
      // Filter comments using shared filter method
      const comments = allComments.filter(comment =>
        this.filterReviewComment(comment, normalizedRepos)
      );
    
      // For each comment, fetch PR timeline and analyze impact
      const impacts: CommentImpact[] = [];
    
      // Group comments by PR to minimize API calls
      const commentsByPR = new Map<string, ReviewComment[]>();
      for (const comment of comments) {
        if (!commentsByPR.has(comment.prId)) {
          commentsByPR.set(comment.prId, []);
        }
        commentsByPR.get(comment.prId)!.push(comment);
      }
    
      // Fetch timeline for each PR and analyze
      for (const [prId, prComments] of commentsByPR.entries()) {
        try {
          const response = await this.client.query(QUERIES.PRTimeline, {
            prId,
          });
    
          const timeline = response.data.node;
    
          if (!timeline) {
            console.error(`[getCommentImpact] No timeline data for PR ${prId}`);
            continue;
          }
    
          // Log commit count for debugging
          const commitCount = timeline.commits?.nodes?.length || 0;
          const timelineCommitCount = (timeline.timelineItems?.nodes || []).filter(
            (item: any) => item.__typename === 'PullRequestCommit'
          ).length;
    
          if (commitCount === 0 && timelineCommitCount === 0) {
            console.error(`[getCommentImpact] No commits found in PR ${prId} timeline`);
          }
    
          for (const comment of prComments) {
            const impact = analyzeCommentImpact(comment, timeline);
            // Only include impacts where there was actual impact (commits found after comment)
            if (impact.hadImpact) {
              impacts.push(impact);
            }
          }
        } catch (error: any) {
          // If the timeline fetch fails, skip this PR entirely; only PRs that
          // were successfully analyzed (and showed actual impact) are included.
          console.error(`Failed to fetch timeline for PR ${prId}:`, error.message);
        }
      }
    
      // Calculate statistics from existing data (no additional API calls)
      const response: CommentImpactResponse = {
        impacts,
      };
    
      // Only include stats if we have data to report
      if (comments.length > 0) {
        response.stats = {
          totalComments: comments.length,
          totalPRsReviewed: commentsByPR.size,
          totalImpacts: impacts.length,
        };
      }
    
      return response;
    }
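
The grouping step in the handler above can be exercised in isolation. `groupByPR` is a hypothetical extraction of that loop, with `ReviewComment` trimmed to the one field the grouping needs.

```typescript
// Minimal sketch of the comments-by-PR grouping used to minimize API calls.
type ReviewComment = { id: string; prId: string };

function groupByPR(comments: ReviewComment[]): Map<string, ReviewComment[]> {
  const byPR = new Map<string, ReviewComment[]>();
  for (const c of comments) {
    if (!byPR.has(c.prId)) byPR.set(c.prId, []);
    byPR.get(c.prId)!.push(c);
  }
  return byPR;
}

const grouped = groupByPR([
  { id: "c1", prId: "PR_1" },
  { id: "c2", prId: "PR_1" },
  { id: "c3", prId: "PR_2" },
]);
console.log(grouped.size); // 2
```

Two comments on PR_1 collapse into a single timeline fetch, which is the point of grouping before querying.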
  • MCP server request handler registration for 'github.getCommentImpact'. Dispatches to GitHubTools.getCommentImpact and formats response as MCP content.
    case 'github.getCommentImpact': {
      const result = await tools.getCommentImpact(
        args.username as string,
        args.repos as string[],
        args.from as string | undefined,
        args.to as string | undefined
      );
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(result, null, 2),
          },
        ],
      };
    }
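
The response wrapping in the registration above can be factored into a small helper. `toMcpContent` is a hypothetical name; it mirrors the `content: [{ type: 'text', ... }]` shape shown in the case block.

```typescript
// Sketch of wrapping a structured result as MCP text content.
function toMcpContent(result: unknown) {
  return {
    content: [{ type: "text" as const, text: JSON.stringify(result, null, 2) }],
  };
}

const wrapped = toMcpContent({ impacts: [] });
console.log(wrapped.content[0].text.includes("impacts")); // true
```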
  • Tool schema definition including name, description, and input schema for validation. Note potential mismatch: schema expects 'repo' (string) but implementation uses 'repos' (array).
          name: 'github.getCommentImpact',
          description: `Analyze whether review comments resulted in subsequent code changes. Examines PR timeline data to determine if commits were made after comments were submitted. Returns impact assessment with confidence scores (0.0-1.0) and evidence (e.g., "Commit abc1234 modified files after comment"). Only includes comments with actual impact (commits found after comment). Includes statistics: totalComments, totalPRsReviewed, totalImpacts. Filters by repository. Use this tool to measure the effectiveness of code reviews.
    
    Example use cases:
    - Measure review impact (how often comments lead to code changes)
    - Assess review quality and influence
    - Track review effectiveness metrics
    - Identify high-impact reviewers
    
    Returns: Object with impacts array (commentId, prId, hadImpact, confidence, evidence) and optional stats object (totalComments, totalPRsReviewed, totalImpacts)`,
          inputSchema: {
            type: 'object',
            properties: {
              username: {
                type: 'string',
                description: 'GitHub username (case-insensitive, @ prefix optional). Examples: "octocat", "@octocat"',
                examples: ['octocat', '@octocat'],
              },
              repo: {
                type: 'string',
                description: 'Repository in owner/repo format. Required - only comments for PRs in this repository will be analyzed. Example: "owner/repo"',
                examples: ['owner/repo', 'radireddy/AiApps'],
              },
              from: {
                type: 'string',
                description: 'Start timestamp in ISO 8601 format. Example: "2024-01-01T00:00:00Z"',
                examples: ['2024-01-01T00:00:00Z'],
              },
              to: {
                type: 'string',
                description: 'End timestamp in ISO 8601 format. Example: "2024-12-31T23:59:59Z"',
                examples: ['2024-12-31T23:59:59Z'],
              },
            },
            required: ['username', 'repo', 'from', 'to'],
          },
        },
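
The mismatch flagged above (schema declares a single `repo` string, while `getCommentImpact` takes a `repos` array) could be bridged by a small adapter in the dispatcher. `toReposArray` is a sketch of one way to do that, not the server's actual code.

```typescript
// Hypothetical adapter: accept either a single 'repo' string (as the schema
// declares) or a 'repos' array (as the implementation expects).
function toReposArray(args: { repo?: string; repos?: string[] }): string[] {
  if (Array.isArray(args.repos)) return args.repos;
  if (typeof args.repo === "string") return [args.repo];
  throw new Error("Either 'repo' or 'repos' must be provided");
}

console.log(toReposArray({ repo: "owner/repo" })); // a single-element array: ["owner/repo"]
```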
  • Key helper function that analyzes individual comment impact by checking for commits in PR timeline after comment timestamp. Computes confidence and evidence, central to the tool's logic.
    export function analyzeCommentImpact(
      comment: ReviewComment,
      prTimeline: any
    ): CommentImpact {
      const evidence: string[] = [];
      let hadImpact = false;
      let confidence = 0;
    
      if (!prTimeline?.timelineItems?.nodes) {
        // Return hadImpact: false with empty evidence - caller will filter these out
        return {
          commentId: comment.id,
          prId: comment.prId,
          hadImpact: false,
          confidence: 0,
          evidence: [],
        };
      }
    
      const commentTime = new Date(comment.timestamp).getTime();
      
      // Get commits from PR commits (more reliable than timeline items)
      // Commits are ordered chronologically, so we can check if any came after the comment
      const commits = prTimeline.commits?.nodes || [];
      
      // Also check timeline items for commits (fallback)
      const timelineCommits = (prTimeline.timelineItems?.nodes || []).filter(
        (item: any) => item.__typename === 'PullRequestCommit'
      );
    
      // Combine both sources and deduplicate by commit ID
      const allCommits = new Map<string, any>();
      
      for (const commitNode of commits) {
        if (commitNode?.commit?.id) {
          allCommits.set(commitNode.commit.id, commitNode.commit);
        }
      }
      
      for (const commitItem of timelineCommits) {
        if (commitItem?.commit?.id) {
          allCommits.set(commitItem.commit.id, commitItem.commit);
        }
      }
    
      for (const commit of allCommits.values()) {
        // Use committedDate if available (when commit was actually committed/pushed to branch)
        // Otherwise fall back to authoredDate (when commit was originally authored)
        // Note: authoredDate can be earlier than when commit was pushed to PR, so we prefer committedDate
        const commitDateStr = commit.committedDate || commit.authoredDate;
        if (!commitDateStr) continue;
        
        const commitTime = new Date(commitDateStr).getTime();
        
        // Skip commits that were made before or at the same time as the comment
        // We want commits that happened AFTER the comment to show impact
        // Using <= to skip commits at the same timestamp (allowing for small timing differences)
        if (commitTime <= commentTime) {
          continue;
        }
        
        // Found a commit after the comment - this indicates potential impact
    
        // If comment is on a specific file/line, check if that file was modified
        if (comment.filePath) {
          // Check if file was in commit changes
          // Note: GitHub GraphQL doesn't expose file-level commit details easily
          // This is a simplified heuristic - higher confidence if files were changed
          if (commit.changedFiles > 0) {
            evidence.push(
              `Commit ${commit.id.substring(0, 7)} modified files after comment (${commit.changedFiles} files)`
            );
            hadImpact = true;
            confidence = 0.5; // Medium confidence without file-level detail
          }
        } else {
          // General comment - lower confidence
          if (commit.changedFiles > 0) {
            evidence.push(
              `General comment followed by commit ${commit.id.substring(0, 7)}`
            );
            hadImpact = true;
            confidence = 0.3;
          }
        }
      }
    
      // Higher confidence if comment requested changes (would need additional query to verify)
      if (hadImpact && comment.reviewId) {
        confidence = Math.min(confidence + 0.2, 1.0);
      }
    
      // Only return impact if there was actual impact (commits found after comment)
      // Don't include "No commits found" message - just return hadImpact: false
      // The caller will filter these out
      return {
        commentId: comment.id,
        prId: comment.prId,
        hadImpact,
        confidence,
        evidence,
      };
    }
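
The strict "after the comment" rule at the heart of `analyzeCommentImpact` (commits at or before the comment timestamp are skipped via `<=`) can be distilled into a standalone helper. `commitsAfter` is a hypothetical name and the timestamps are illustrative.

```typescript
// Only commits strictly after the comment timestamp count as potential impact.
function commitsAfter(commentIso: string, commitIsos: string[]): string[] {
  const commentTime = new Date(commentIso).getTime();
  return commitIsos.filter(iso => new Date(iso).getTime() > commentTime);
}

const after = commitsAfter("2024-06-01T12:00:00Z", [
  "2024-06-01T11:59:00Z", // before the comment: ignored
  "2024-06-01T12:00:00Z", // same timestamp: ignored (the <= rule)
  "2024-06-01T12:05:00Z", // after the comment: counts as potential impact
]);
console.log(after.length); // 1
```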
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well at disclosing key behaviors: it explains what the tool examines (PR timeline data), what it returns (impact assessment with confidence scores and evidence), filtering behavior ('Only includes comments with actual impact'), and statistical outputs. It doesn't mention rate limits, authentication requirements, or potential data limitations, but provides substantial behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose in the first sentence. The use cases and return format sections are helpful additions, though the 'Returns:' section could be more integrated. Some redundancy exists between the description text and the explicit 'Returns:' statement, but overall it's well-structured with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 4 parameters, no annotations, and no output schema, the description provides good contextual completeness. It explains the tool's purpose, usage context, behavioral characteristics, and return format in detail. The main gap is the lack of output schema, but the description compensates by explicitly describing the return structure. Some edge cases (like what happens with no matching data) aren't addressed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with clear parameter documentation. The description doesn't add any meaningful parameter semantics beyond what's already in the schema; it mentions 'Filters by repository', which is already covered in the repo parameter description. With high schema coverage, the baseline of 3 is appropriate, as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Analyze whether review comments resulted in subsequent code changes' with specific verbs (analyze, examine, determine) and resources (review comments, PR timeline data, code changes). It distinguishes from siblings like 'github.getReviewComments' (which just retrieves comments) by focusing on impact analysis rather than data retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'Use this tool to measure the effectiveness of code reviews' with specific use cases listed (measure review impact, assess review quality, track metrics, identify high-impact reviewers). However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools for different needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
