GitHub MCP Server

by radireddy

github.getReviewComments

Fetch user review comments from GitHub pull requests within a time range to analyze review quality, quantity, and engagement metrics for performance assessment.

Instructions

Fetch all inline and general review comments authored by a user within a time range, filtered by repository. Returns structured JSON with PR-level grouping, total PRs reviewed, total comments, user ID, date range, and for each PR: array of comment bodies and total comments count. Automatically filters out auto-generated comments and comments on auto-created PRs. All comment bodies are properly JSON-escaped. Use this tool to analyze review comment quality and quantity.

Example use cases:

  • Count review comments to assess review thoroughness

  • Analyze comment patterns across different PRs

  • Track review engagement metrics

  • Extract review feedback for analysis

Returns: Object with userId, dateRange, totalPRsReviewed, totalComments, and array of PR objects (each with prId, prNumber, prTitle, prRepo, prUrl, prCreatedAt, comments array, totalComments)
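
For concreteness, the documented return shape can be sketched in TypeScript. All field values below are illustrative, not taken from a real repository, and the interface names mirror those used in the implementation excerpts further down:

```typescript
// Illustrative types matching the documented response shape.
interface PRReviewComments {
  prId: string;
  prNumber: number;
  prTitle: string;
  prRepo: string;
  prUrl: string;
  prCreatedAt: string;
  comments: string[];
  totalComments: number;
}

interface ReviewCommentsResponse {
  userId: string;
  dateRange: { from: string; to: string };
  totalPRsReviewed: number;
  totalComments: number;
  prs: PRReviewComments[];
}

// Hypothetical example response.
const example: ReviewCommentsResponse = {
  userId: "octocat",
  dateRange: { from: "2024-01-01T00:00:00Z", to: "2024-12-31T23:59:59Z" },
  totalPRsReviewed: 1,
  totalComments: 2,
  prs: [
    {
      prId: "PR_kwDO_example",
      prNumber: 42,
      prTitle: "Add retry logic",
      prRepo: "owner/repo",
      prUrl: "https://github.com/owner/repo/pull/42",
      prCreatedAt: "2024-03-10T12:00:00Z",
      comments: ["Consider a timeout here.", "Nit: rename this variable."],
      totalComments: 2,
    },
  ],
};

// The top-level totalComments is the sum of the per-PR counts.
const sum = example.prs.reduce((s, pr) => s + pr.totalComments, 0);
console.log(sum === example.totalComments); // true
```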

Input Schema

All four parameters are required; none have defaults.

  • username: GitHub username (case-insensitive, @ prefix optional). Examples: "octocat", "@octocat"

  • repo: Repository in owner/repo format; only comments for PRs in this repository are returned. Example: "owner/repo"

  • from: Start timestamp in ISO 8601 format. Example: "2024-01-01T00:00:00Z"

  • to: End timestamp in ISO 8601 format. Example: "2024-12-31T23:59:59Z"
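
The constraints above can be sketched as standalone validation helpers. The function names below (normalizeUsername, isOwnerRepo, isIsoTimestamp) are illustrative and not necessarily the ones the server uses internally:

```typescript
// Strip an optional "@" prefix and lowercase, per the schema's
// "case-insensitive, @ prefix optional" rule.
function normalizeUsername(username: string): string {
  return username.replace(/^@/, "").toLowerCase();
}

// Check the "owner/repo" shape: exactly one slash, no whitespace.
function isOwnerRepo(repo: string): boolean {
  return /^[^/\s]+\/[^/\s]+$/.test(repo);
}

// Loose ISO 8601 check: parseable date with an explicit UTC or offset suffix.
function isIsoTimestamp(value: string): boolean {
  return !Number.isNaN(Date.parse(value)) && /Z$|[+-]\d{2}:\d{2}$/.test(value);
}

console.log(normalizeUsername("@Octocat")); // octocat
console.log(isOwnerRepo("owner/repo")); // true
console.log(isIsoTimestamp("2024-01-01T00:00:00Z")); // true
```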

Implementation Reference

  • Primary handler implementation for github.getReviewComments tool. Validates parameters, fetches raw review comments, applies filters (repository matching, auto-generated content exclusion), groups comments by PR, computes totals, and structures the response.
    async getReviewComments(
      username: string,
      repos: string[],
      from?: string,
      to?: string
    ): Promise<ReviewCommentsResponse> {
      const { normalizedUsername, normalizedRepos, from: validatedFrom, to: validatedTo } =
        this.validateCommonParameters(username, repos, from, to);
      const allComments = await this.getReviewCommentsAsObjects(normalizedUsername, validatedFrom, validatedTo);
    
      // Group comments by PR
      const prsMap = new Map<string, PRReviewComments>();
    
      for (const comment of allComments) {
        // Use shared filter method
        if (!this.filterReviewComment(comment, normalizedRepos)) {
          continue;
        }
    
        const prKey = comment.prId;
    
        if (!prsMap.has(prKey)) {
          prsMap.set(prKey, {
            prId: comment.prId,
            prNumber: comment.prNumber,
            prTitle: comment.prTitle,
            prRepo: comment.prRepo,
            prUrl: comment.prUrl || '',
            prCreatedAt: comment.prCreatedAt || '',
            comments: [],
            totalComments: 0,
          });
        }
    
        const prData = prsMap.get(prKey)!;
        // Store comment body as-is (JSON.stringify will handle escaping when serializing)
        prData.comments.push(comment.body);
        prData.totalComments = prData.comments.length;
      }
    
      // Convert map to array (already filtered for auto PRs)
      const prs = Array.from(prsMap.values());
    
      // Calculate totals - count only comments that passed all filters
      const totalPRsReviewed = prs.length;
      const totalComments = prs.reduce((sum, pr) => sum + pr.totalComments, 0);
    
      const response: ReviewCommentsResponse = {
        userId: normalizedUsername,
        dateRange: {
          from: validatedFrom,
          to: validatedTo,
        },
        totalPRsReviewed,
        totalComments,
        prs,
      };
    
      // Return the response - JSON.stringify in server.ts will properly escape all content
      return response;
    }
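The shared filterReviewComment method called above is not shown in this listing. A plausible standalone sketch of its repository-matching step, under the assumption that it compares the comment's PR repository against the requested list case-insensitively:

```typescript
// Hypothetical standalone version of the repository filter; the real
// filterReviewComment method may apply additional checks.
interface CommentLike {
  prRepo: string;
  body: string;
}

function filterReviewComment(comment: CommentLike, repos: string[]): boolean {
  // Keep the comment only if its PR belongs to one of the requested repos.
  return repos.some(
    (repo) => repo.toLowerCase() === comment.prRepo.toLowerCase()
  );
}

console.log(filterReviewComment({ prRepo: "Owner/Repo", body: "LGTM" }, ["owner/repo"])); // true
```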
  • Input schema and description definition for the github.getReviewComments tool within getToolDefinitions().
          name: 'github.getReviewComments',
          description: `Fetch all inline and general review comments authored by a user within a time range, filtered by repository. Returns structured JSON with PR-level grouping, total PRs reviewed, total comments, user ID, date range, and for each PR: array of comment bodies and total comments count. Automatically filters out auto-generated comments and comments on auto-created PRs. All comment bodies are properly JSON-escaped. Use this tool to analyze review comment quality and quantity.
    
    Example use cases:
    - Count review comments to assess review thoroughness
    - Analyze comment patterns across different PRs
    - Track review engagement metrics
    - Extract review feedback for analysis
    
    Returns: Object with userId, dateRange, totalPRsReviewed, totalComments, and array of PR objects (each with prId, prNumber, prTitle, prRepo, prUrl, prCreatedAt, comments array, totalComments)`,
          inputSchema: {
            type: 'object',
            properties: {
              username: {
                type: 'string',
                description: 'GitHub username (case-insensitive, @ prefix optional). Examples: "octocat", "@octocat"',
                examples: ['octocat', '@octocat'],
              },
              repo: {
                type: 'string',
                description: 'Repository in owner/repo format. Required - only comments for PRs in this repository will be returned. Example: "owner/repo"',
                examples: ['owner/repo', 'radireddy/AiApps'],
              },
              from: {
                type: 'string',
                description: 'Start timestamp in ISO 8601 format. Example: "2024-01-01T00:00:00Z"',
                examples: ['2024-01-01T00:00:00Z'],
              },
              to: {
                type: 'string',
                description: 'End timestamp in ISO 8601 format. Example: "2024-12-31T23:59:59Z"',
                examples: ['2024-12-31T23:59:59Z'],
              },
            },
            required: ['username', 'repo', 'from', 'to'],
          },
        },
  • Registration and dispatching logic in the MCP server's CallToolRequestSchema handler switch statement, which calls the tool implementation and formats the response.
case 'github.getReviewComments': {
  const result = await tools.getReviewComments(
    args.username as string,
    // The input schema declares a single required `repo` string, while the
    // implementation accepts a string[]; wrap the value accordingly.
    [args.repo as string],
    args.from as string,
    args.to as string
  );
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(result, null, 2),
          },
        ],
      };
    }
  • Registration of the ListToolsRequestSchema handler which exposes the tool definitions including github.getReviewComments via getToolDefinitions().
    server.setRequestHandler(ListToolsRequestSchema, async () => {
      return {
        tools: getToolDefinitions(),
      };
    });
  • Helper method that performs the core data fetching of review comments using GitHub GraphQL API (QUERIES.ReviewComments), initial filtering, and pagination.
    private async getReviewCommentsAsObjects(
      username: string,
      from: string,
      to: string
    ): Promise<ReviewComment[]> {
      const normalizedUsername = this.normalizeUsername(username);
      validateTimeRange(from, to);
    
      const allComments = await fetchAllPages(
        async (cursor: string | null) => {
          const response = await this.client.query(QUERIES.ReviewComments, {
            username: normalizedUsername,
            from: validateTimestamp(from),
            to: validateTimestamp(to),
            after: cursor,
          });
    
          return response.data;
        },
        (data: any) => {
          const contributions = data.user?.contributionsCollection
            ?.pullRequestReviewContributions?.nodes || [];
          const comments: ReviewComment[] = [];
          for (const contribution of contributions) {
            const review = contribution.pullRequestReview;
            const pr = review.pullRequest;
    
            // Filter out auto-created PRs
            if (this.isAutoCreatedPR(pr.title)) {
              continue;
            }
    
            const reviewComments = mapReviewComments(contribution);
            // Filter out auto-generated comments
            for (const comment of reviewComments) {
              if (!this.isAutoGeneratedComment(comment.body)) {
                comments.push(comment);
              }
            }
          }
          return comments;
        },
        (data: any) => {
          const pageInfo = data.user?.contributionsCollection
            ?.pullRequestReviewContributions?.pageInfo || {};
          return extractPageInfo(pageInfo);
        }
      );
    
      return allComments;
    }
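The fetchAllPages helper drives the pagination above. A minimal sketch of cursor-based pagination in that style, assuming the real helper's signature matches the call site shown (its actual implementation may differ):

```typescript
// Shape of the pagination info returned by extractPageInfo.
interface PageInfo {
  hasNextPage: boolean;
  endCursor: string | null;
}

// Fetch pages until pageInfo reports no next page, accumulating items.
async function fetchAllPages<T>(
  fetchPage: (cursor: string | null) => Promise<any>,
  extractItems: (data: any) => T[],
  extractPageInfo: (data: any) => PageInfo
): Promise<T[]> {
  const items: T[] = [];
  let cursor: string | null = null;
  do {
    const data = await fetchPage(cursor);
    items.push(...extractItems(data));
    const page = extractPageInfo(data);
    cursor = page.hasNextPage ? page.endCursor : null;
  } while (cursor !== null);
  return items;
}

// Demo against a fake two-page source.
const pages = [
  { items: [1, 2], pageInfo: { hasNextPage: true, endCursor: "c1" } },
  { items: [3], pageInfo: { hasNextPage: false, endCursor: null } },
];
let call = 0;
fetchAllPages<number>(
  async () => pages[call++],
  (d) => d.items,
  (d) => d.pageInfo
).then((all) => console.log(all.join(","))); // prints 1,2,3
```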
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and does this well. It describes important behavioral traits beyond basic functionality: automatic filtering of auto-generated comments and comments on auto-created PRs, JSON-escaping of comment bodies, and the structured grouping of results. It doesn't mention rate limits, authentication requirements, or pagination behavior, preventing a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core functionality in the first sentence. The example use cases and return format details are useful additions, though the return format section could be slightly more concise. Every sentence adds value, with no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description provides substantial context: clear purpose, usage guidance, behavioral details, and a detailed explanation of the return structure. It covers the complexity of filtering, processing, and formatting results well. The main gap is the lack of explicit mention of authentication or rate limiting considerations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all four parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, but it does reinforce the repository filtering requirement in the purpose statement. This meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('fetch', 'filtered by', 'returns') and resources ('inline and general review comments', 'user', 'time range', 'repository'). It distinguishes from siblings like github.getUserComments by specifying it's for review comments only, not all comments, and from github.getPRReviews by focusing on comments rather than review statuses.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to analyze review comment quality and quantity') and includes example use cases that illustrate appropriate scenarios. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools, which would be needed for a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
