GitHub MCP Server

by radireddy

github.getPRReviews

Fetch GitHub pull request reviews by user within a time range to assess code review participation, analyze review patterns, and track review activity for performance evaluation.

Instructions

Fetch all PR reviews submitted by a user within a time range, filtered by repository. Returns review state (APPROVED, CHANGES_REQUESTED, COMMENTED), PR details, and submission timestamps. Automatically filters out reviews on auto-generated PRs. Use this tool to assess code review participation and review quality.

Example use cases:

  • Measure code review engagement (how many PRs reviewed)

  • Analyze review patterns (approval vs. change requests)

  • Track review activity over time

  • Assess code review contribution for performance evaluations

Returns: Array of review objects with id, state, prId, prNumber, prTitle, prRepo, submittedAt

Input Schema

  • username (required) — GitHub username (case-insensitive, @ prefix optional). Examples: "octocat", "@octocat"

  • repos (required) — Array of repositories in owner/repo format (at least one). Only reviews for these repositories will be returned. Example: ["owner/repo1", "owner/repo2"]

  • from (optional) — Start timestamp in ISO 8601 format. If omitted, defaults to the last 3 months. Example: "2024-01-01T00:00:00Z"

  • to (optional) — End timestamp in ISO 8601 format. If omitted, defaults to the last 3 months. Example: "2024-12-31T23:59:59Z"
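
The normalization rules implied by this schema (case-insensitive username, optional "@" prefix, at least one repo, a default 3-month window) might be sketched as below. The helper name `normalizeParams` and the exact defaulting behavior are assumptions for illustration, not the server's actual code.

```typescript
// Hypothetical sketch of the parameter normalization this schema implies.
interface NormalizedParams {
  username: string;
  repos: string[];
  from: string;
  to: string;
}

function normalizeParams(
  username: string,
  repos: string[],
  from?: string,
  to?: string
): NormalizedParams {
  // Strip an optional leading "@" and lowercase for case-insensitive matching.
  const normalizedUsername = username.replace(/^@/, "").toLowerCase();
  if (repos.length === 0) {
    throw new Error("At least one repository is required");
  }
  const normalizedRepos = repos.map((r) => r.toLowerCase());
  // Default window: the last 3 months, per the schema description.
  const end = to ? new Date(to) : new Date();
  const start = from
    ? new Date(from)
    : new Date(new Date(end).setMonth(end.getMonth() - 3));
  return {
    username: normalizedUsername,
    repos: normalizedRepos,
    from: start.toISOString(),
    to: end.toISOString(),
  };
}
```
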

Implementation Reference

  • Core handler implementation: validates parameters, fetches user's pull request review contributions via GitHub GraphQL API, maps to PRReview objects, filters by repository and excludes auto-generated PRs, paginates results, returns structured array of reviews.
    async getPRReviews(
      username: string,
      repos: string[],
      from?: string,
      to?: string
    ): Promise<{ reviews: PRReview[] }> {
      const { normalizedUsername, normalizedRepos, from: validatedFrom, to: validatedTo } =
        this.validateCommonParameters(username, repos, from, to);
    
      const allReviews = await fetchAllPages(
        async (cursor: string | null) => {
          const response = await this.client.query(QUERIES.PRReviews, {
            username: normalizedUsername,
            from: validateTimestamp(validatedFrom),
            to: validateTimestamp(validatedTo),
            after: cursor,
          });
    
          return response.data;
        },
        (data: any) => {
          const contributions = data.user?.contributionsCollection
            ?.pullRequestReviewContributions?.nodes || [];
          // Filter reviews using shared filter method
          return contributions
            .map(mapPRReview)
            .filter((review: PRReview) => this.filterPRReview(review, normalizedRepos)) as PRReview[];
        },
        (data: any) => {
          const pageInfo = data.user?.contributionsCollection
            ?.pullRequestReviewContributions?.pageInfo || {};
          return extractPageInfo(pageInfo);
        }
      );
    
      return { reviews: allReviews as PRReview[] };
    }
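The handler above delegates pagination to a `fetchAllPages` helper that takes a page fetcher, an item extractor, and a page-info extractor. A minimal cursor-based sketch with that shape (an assumption inferred from the call site, not the server's actual implementation) could look like:

```typescript
// Hypothetical sketch of a cursor-based fetchAllPages helper matching the
// three-callback shape getPRReviews uses; the real implementation may differ.
interface PageInfo {
  hasNextPage: boolean;
  endCursor: string | null;
}

async function fetchAllPages<T>(
  fetchPage: (cursor: string | null) => Promise<any>,
  extractItems: (data: any) => T[],
  extractPageInfo: (data: any) => PageInfo
): Promise<T[]> {
  const items: T[] = [];
  let cursor: string | null = null;
  // Keep requesting pages until the API reports no next page.
  while (true) {
    const data = await fetchPage(cursor);
    items.push(...extractItems(data));
    const pageInfo = extractPageInfo(data);
    if (!pageInfo.hasNextPage) break;
    cursor = pageInfo.endCursor;
  }
  return items;
}
```

Accumulating filtered items per page, as the handler does, keeps memory bounded by the result set rather than by the raw contribution count.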
  • Tool schema definition including name, detailed description, input schema with properties (username, repos, from, to), examples, and required fields.
        {
          name: 'github.getPRReviews',
          description: `Fetch all PR reviews submitted by a user within a time range, filtered by repository. Returns review state (APPROVED, CHANGES_REQUESTED, COMMENTED), PR details, and submission timestamps. Automatically filters out reviews on auto-generated PRs. Use this tool to assess code review participation and review quality.
    
    Example use cases:
    - Measure code review engagement (how many PRs reviewed)
    - Analyze review patterns (approval vs. change requests)
    - Track review activity over time
    - Assess code review contribution for performance evaluations
    
    Returns: Array of review objects with id, state, prId, prNumber, prTitle, prRepo, submittedAt`,
          inputSchema: {
            type: 'object',
            properties: {
              username: {
                type: 'string',
                description: 'GitHub username (case-insensitive, @ prefix optional, required). Examples: "octocat", "@octocat"',
                examples: ['octocat', '@octocat'],
              },
              repos: {
                type: 'array',
                items: { type: 'string' },
                description: 'Array of repositories in owner/repo format (required, at least one). Only reviews for these repositories will be returned. Example: ["owner/repo1", "owner/repo2"]',
                examples: [['owner/repo'], ['owner/repo1', 'owner/repo2']],
              },
              from: {
                type: 'string',
                description: 'Optional: Start timestamp in ISO 8601 format. If omitted, uses last 3 months. Example: "2024-01-01T00:00:00Z"',
                examples: ['2024-01-01T00:00:00Z'],
              },
              to: {
                type: 'string',
                description: 'Optional: End timestamp in ISO 8601 format. If omitted, uses last 3 months. Example: "2024-12-31T23:59:59Z"',
                examples: ['2024-12-31T23:59:59Z'],
              },
            },
            required: ['username', 'repos'],
          },
        },
  • Tool dispatch registration in MCP server request handler: calls GitHubTools.getPRReviews with parsed arguments and formats result as MCP text content response.
    case 'github.getPRReviews': {
      const result = await tools.getPRReviews(
        args.username as string,
        args.repos as string[],
        args.from as string | undefined,
        args.to as string | undefined
      );
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(result, null, 2),
          },
        ],
      };
    }
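Because the dispatch handler returns the result as stringified JSON inside a text content block, a client has to parse it back into objects. A hedged sketch, with the `PRReview` field list taken from the tool's documented return shape:

```typescript
// Sketch of parsing this tool's text-content response on the client side.
// Field names come from the documented Returns shape above.
interface PRReview {
  id: string;
  state: "APPROVED" | "CHANGES_REQUESTED" | "COMMENTED";
  prId: string;
  prNumber: number;
  prTitle: string;
  prRepo: string;
  submittedAt: string;
}

function parseToolResponse(response: {
  content: Array<{ type: string; text?: string }>;
}): PRReview[] {
  // Find the first text block and parse its JSON payload.
  const textBlock = response.content.find((c) => c.type === "text");
  if (!textBlock?.text) return [];
  return (JSON.parse(textBlock.text).reviews ?? []) as PRReview[];
}
```
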
  • Helper function used by getPRReviews to filter reviews: excludes reviews on auto-created PRs (e.g., backmerges) and ensures repository match.
    private filterPRReview(review: PRReview, normalizedRepos: string[]): boolean {
      // Filter out reviews on auto-created PRs
      if (this.isAutoCreatedPR(review.prTitle)) {
        return false;
      }
    
      // Filter by repository (must match one of the provided repos)
      const reviewRepo = review.prRepo?.toLowerCase();
      if (!reviewRepo || !normalizedRepos.includes(reviewRepo)) {
        return false;
      }
    
      return true;
    }
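`filterPRReview` relies on an `isAutoCreatedPR` helper that is not shown. A plausible title-based heuristic might look like the following; the pattern list here is a guess for illustration, not the server's actual rules.

```typescript
// Hypothetical title heuristic for auto-created PRs (e.g. backmerges, bot
// version bumps). The server's actual pattern list is not shown above.
const AUTO_PR_PATTERNS: RegExp[] = [
  /^backmerge/i,
  /^\[bot\]/i,
  /^chore\(release\)/i,
];

function isAutoCreatedPR(title: string): boolean {
  return AUTO_PR_PATTERNS.some((p) => p.test(title));
}
```
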
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it describes the return format (array of review objects with specific fields), filtering behavior (auto-filters out reviews on auto-generated PRs), and time range defaults (last 3 months if omitted). It doesn't mention rate limits or authentication needs, but covers most essential aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core functionality, followed by use cases and return details. The example use cases section is helpful but slightly lengthens the text, though each sentence earns its place by clarifying application scenarios.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, no annotations, and no output schema, the description does well by explaining behavior, filtering, defaults, and return format. It could be more complete by mentioning potential limitations like pagination or error cases, but it covers the essentials for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so every parameter is already fully documented in the schema itself. The tool description adds no parameter semantics beyond that, so it earns the baseline score of 3: adequate, but no added value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('fetch all PR reviews submitted by a user') and resources ('filtered by repository'), and distinguishes it from siblings by focusing on reviews rather than authored PRs, comments, or stats. It explicitly mentions what it returns and what it filters out.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('to assess code review participation and review quality') and includes example use cases, but it doesn't explicitly state when not to use it or name alternatives among sibling tools like github.getReviewComments or github.getUserComments.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
