
GitHub MCP Server

by radireddy

github.getUserRepoStats

Retrieve comprehensive GitHub repository statistics for a user within a specified time frame, including PR activity, comments, reviews, and code changes in a single API call.

Instructions

Get comprehensive repository statistics for a user within a time frame. Aggregates all activity metrics in a single call: PRs authored (with state breakdown: merged/open/closed), comments (total with review/issue breakdown), PR reviews (total with state breakdown: approved/changesRequested/commented, plus unique PRs reviewed), and code changes (files changed, additions, deletions, net change). This is the most efficient tool for getting a complete overview of user activity in a repository. Combines data from multiple sources internally.

Example use cases:

  • Get complete activity overview for performance reviews

  • Generate comprehensive developer metrics reports

  • Compare user activity across different repositories

  • Track overall contribution metrics in a single call

Returns: Object with stats containing username, repo, timeRange, prs (count, merged, open, closed), comments (total, review, issue), reviews (total, totalPRsReviewed, approved, changesRequested, commented), codeChanges (filesChanged, additions, deletions, netChange)
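The returned shape can be sketched in TypeScript from the Returns description above; the interface mirrors the documented fields, and the sample values are hypothetical:

```typescript
// Shape of the object returned by github.getUserRepoStats,
// reconstructed from the Returns description (sample values are hypothetical).
interface UserRepoStatsResult {
  stats: {
    username: string;
    repo: string;
    timeRange: { from: string; to: string };
    prs: { count: number; merged: number; open: number; closed: number };
    comments: { total: number; review: number; issue: number };
    reviews: {
      total: number;
      totalPRsReviewed: number;
      approved: number;
      changesRequested: number;
      commented: number;
    };
    codeChanges: {
      filesChanged: number;
      additions: number;
      deletions: number;
      netChange: number;
    };
  };
}

const example: UserRepoStatsResult = {
  stats: {
    username: 'octocat',
    repo: 'owner/repo',
    timeRange: { from: '2024-01-01T00:00:00Z', to: '2024-12-31T23:59:59Z' },
    prs: { count: 12, merged: 9, open: 1, closed: 2 },
    comments: { total: 40, review: 25, issue: 15 },
    reviews: { total: 18, totalPRsReviewed: 14, approved: 10, changesRequested: 5, commented: 3 },
    codeChanges: { filesChanged: 87, additions: 2400, deletions: 900, netChange: 1500 },
  },
};
```

Note the invariants implied by the breakdowns: `prs.count` is the sum of the three states, and `netChange` equals `additions - deletions`.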

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| username | Yes | GitHub username (case-insensitive, @ prefix optional). Examples: "octocat", "@octocat" | — |
| repo | Yes | Repository in owner/repo format. Required - statistics will be calculated only for this repository. Example: "owner/repo" | — |
| from | Yes | Start timestamp in ISO 8601 format. Example: "2024-01-01T00:00:00Z" | — |
| to | Yes | End timestamp in ISO 8601 format. Example: "2024-12-31T23:59:59Z" | — |
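For illustration, a valid arguments object for this tool might look like the following (values are hypothetical; all four fields are required by the schema):

```typescript
// Hypothetical arguments for a github.getUserRepoStats call.
const args = {
  username: '@octocat',         // '@' prefix is optional, case-insensitive
  repo: 'owner/repo',           // owner/repo format
  from: '2024-01-01T00:00:00Z', // ISO 8601 start of the time range
  to: '2024-12-31T23:59:59Z',   // ISO 8601 end of the time range
};

// The input schema marks every parameter as required.
const required = ['username', 'repo', 'from', 'to'];
const missing = required.filter((k) => !(k in args));
```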

Implementation Reference

  • Core handler implementation in GitHubTools class. Aggregates stats by calling other tools (getAuthoredPRs, getUserComments, getPRReviews) and computes PR states, comment types, review states, and code changes.
    async getUserRepoStats(
      username: string,
      repos: string[],
      from?: string,
      to?: string
    ): Promise<{ stats: UserRepoStats }> {
      const { normalizedUsername, normalizedRepos, from: validatedFrom, to: validatedTo } =
        this.validateCommonParameters(username, repos, from, to);
    
      try {
        // Log progress to stderr (server logs)
        console.error(`[getUserRepoStats] Starting for user: ${normalizedUsername}, repos: ${normalizedRepos.join(', ')}`);
    
        // 1. Get authored PRs for all repos
        console.error(`[getUserRepoStats] Step 1/3: Fetching authored PRs...`);
        const { prs } = await this.getAuthoredPRs(normalizedUsername, normalizedRepos, validatedFrom, validatedTo);
        console.error(`[getUserRepoStats] Found ${prs.length} authored PRs`);
    
        // 2. Get all comments for all repos
        console.error(`[getUserRepoStats] Step 2/3: Fetching comments (this may take a while for large repos)...`);
        const { comments } = await this.getUserComments(normalizedUsername, normalizedRepos, validatedFrom, validatedTo);
        console.error(`[getUserRepoStats] Found ${comments.length} comments`);
    
        // 3. Get PR reviews (already filtered by repos)
        console.error(`[getUserRepoStats] Step 3/3: Fetching PR reviews...`);
        const { reviews } = await this.getPRReviews(normalizedUsername, normalizedRepos, validatedFrom, validatedTo);
        console.error(`[getUserRepoStats] Found ${reviews.length} reviews`);
    
        // 4. Aggregate PR statistics
        const prStats = {
          count: prs.length,
          merged: prs.filter(pr => pr.state === 'MERGED').length,
          open: prs.filter(pr => pr.state === 'OPEN').length,
          closed: prs.filter(pr => pr.state === 'CLOSED').length,
        };
    
        // 5. Aggregate comment statistics
        const commentStats = {
          total: comments.length,
          review: comments.filter(c => c.commentType === 'review').length,
          issue: comments.filter(c => c.commentType === 'issue').length,
        };
    
        // 6. Aggregate review statistics
        // Count unique PRs reviewed (a user can review the same PR multiple times)
        const uniquePRsReviewed = new Set(reviews.map(r => r.prId)).size;
        const reviewStats = {
          total: reviews.length,
          totalPRsReviewed: uniquePRsReviewed,
          approved: reviews.filter(r => r.state === 'APPROVED').length,
          changesRequested: reviews.filter(r => r.state === 'CHANGES_REQUESTED').length,
          commented: reviews.filter(r => r.state === 'COMMENTED').length,
        };
    
        // 7. Aggregate code change statistics
        const codeStats = {
          filesChanged: prs.reduce((sum, pr) => sum + pr.filesChanged, 0),
          additions: prs.reduce((sum, pr) => sum + pr.additions, 0),
          deletions: prs.reduce((sum, pr) => sum + pr.deletions, 0),
          netChange: prs.reduce((sum, pr) => sum + pr.additions - pr.deletions, 0),
        };
    
    // Stats are aggregated across all provided repos; the repo field
    // reports a single repo name or a comma-separated list.
        const stats: UserRepoStats = {
          username: normalizedUsername,
          repo: normalizedRepos.length === 1 ? normalizedRepos[0] : normalizedRepos.join(', '),
          timeRange: {
            from: validatedFrom,
            to: validatedTo,
          },
          prs: prStats,
          comments: commentStats,
          reviews: reviewStats,
          codeChanges: codeStats,
        };
    
        console.error(`[getUserRepoStats] Completed successfully`);
        return { stats };
      } catch (error: any) {
        console.error(`[getUserRepoStats] Error: ${error.message}`);
        if (error.stack) {
          console.error(`[getUserRepoStats] Stack: ${error.stack}`);
        }
        throw error;
      }
    }
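The PR aggregation in steps 4 and 7 can be sketched standalone. The record fields below mirror the ones the handler reads; the sample data is hypothetical:

```typescript
// Hypothetical PR records with only the fields the aggregation uses.
interface PRRecord {
  state: 'MERGED' | 'OPEN' | 'CLOSED';
  filesChanged: number;
  additions: number;
  deletions: number;
}

const prs: PRRecord[] = [
  { state: 'MERGED', filesChanged: 3, additions: 120, deletions: 40 },
  { state: 'MERGED', filesChanged: 1, additions: 10, deletions: 2 },
  { state: 'OPEN', filesChanged: 5, additions: 300, deletions: 80 },
  { state: 'CLOSED', filesChanged: 2, additions: 15, deletions: 60 },
];

// State breakdown, as in step 4.
const prStats = {
  count: prs.length,
  merged: prs.filter((pr) => pr.state === 'MERGED').length,
  open: prs.filter((pr) => pr.state === 'OPEN').length,
  closed: prs.filter((pr) => pr.state === 'CLOSED').length,
};

// Code-change totals, as in step 7.
const codeStats = {
  filesChanged: prs.reduce((sum, pr) => sum + pr.filesChanged, 0),
  additions: prs.reduce((sum, pr) => sum + pr.additions, 0),
  deletions: prs.reduce((sum, pr) => sum + pr.deletions, 0),
  netChange: prs.reduce((sum, pr) => sum + pr.additions - pr.deletions, 0),
};
```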
  • MCP server switch case dispatcher that handles the tool call by delegating to tools.getUserRepoStats and formats the response.
    case 'github.getUserRepoStats': {
      const result = await tools.getUserRepoStats(
        args.username as string,
        // The input schema defines a single required `repo`;
        // the handler accepts an array, so wrap it before delegating.
        [args.repo as string],
        args.from as string | undefined,
        args.to as string | undefined
      );
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(result, null, 2),
          },
        ],
      };
    }
  • Tool schema definition including name, description, inputSchema with parameters (username, repo, from, to), used by listTools handler.
          name: 'github.getUserRepoStats',
          description: `Get comprehensive repository statistics for a user within a time frame. Aggregates all activity metrics in a single call: PRs authored (with state breakdown: merged/open/closed), comments (total with review/issue breakdown), PR reviews (total with state breakdown: approved/changesRequested/commented, plus unique PRs reviewed), and code changes (files changed, additions, deletions, net change). This is the most efficient tool for getting a complete overview of user activity in a repository. Combines data from multiple sources internally.
    
    Example use cases:
    - Get complete activity overview for performance reviews
    - Generate comprehensive developer metrics reports
    - Compare user activity across different repositories
    - Track overall contribution metrics in a single call
    
    Returns: Object with stats containing username, repo, timeRange, prs (count, merged, open, closed), comments (total, review, issue), reviews (total, totalPRsReviewed, approved, changesRequested, commented), codeChanges (filesChanged, additions, deletions, netChange)`,
          inputSchema: {
            type: 'object',
            properties: {
              username: {
                type: 'string',
                description: 'GitHub username (case-insensitive, @ prefix optional). Examples: "octocat", "@octocat"',
                examples: ['octocat', '@octocat'],
              },
              repo: {
                type: 'string',
                description: 'Repository in owner/repo format. Required - statistics will be calculated only for this repository. Example: "owner/repo"',
                examples: ['owner/repo', 'radireddy/AiApps'],
              },
              from: {
                type: 'string',
                description: 'Start timestamp in ISO 8601 format. Example: "2024-01-01T00:00:00Z"',
                examples: ['2024-01-01T00:00:00Z'],
              },
              to: {
                type: 'string',
                description: 'End timestamp in ISO 8601 format. Example: "2024-12-31T23:59:59Z"',
                examples: ['2024-12-31T23:59:59Z'],
              },
            },
            required: ['username', 'repo', 'from', 'to'],
          },
        },
  • Registers the listTools endpoint which returns all tool definitions including github.getUserRepoStats schema via getToolDefinitions().
    server.setRequestHandler(ListToolsRequestSchema, async () => {
      return {
        tools: getToolDefinitions(),
      };
    });
  • Shared validation helper used by getUserRepoStats for input parameters (username, repos, time range).
    private validateCommonParameters(
      username: string,
      repos: string[],
      from?: string,
      to?: string
    ): { normalizedUsername: string; normalizedRepos: string[]; from: string; to: string } {
      const normalizedUsername = this.validateUsernameParameter(username);
      const normalizedRepos = this.validateReposParameter(repos);
      const timeRange = this.validateTimeRangeParameters(from, to);
    
      return {
        normalizedUsername,
        normalizedRepos,
        from: timeRange.from,
        to: timeRange.to
      };
    }
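The individual validators (validateUsernameParameter, validateReposParameter, validateTimeRangeParameters) are not shown on this page. A minimal sketch of what they plausibly do, based on the parameter descriptions above, might look like this (hypothetical implementation, not the server's actual code):

```typescript
// Hypothetical validators approximating the helpers referenced
// by validateCommonParameters. Behavior inferred from the schema docs.
function normalizeUsername(username: string): string {
  const trimmed = username.trim().replace(/^@/, ''); // '@' prefix is optional
  if (trimmed.length === 0) throw new Error('username is required');
  return trimmed.toLowerCase(); // usernames are case-insensitive
}

function validateRepo(repo: string): string {
  // Require owner/repo format: exactly one slash, no whitespace.
  if (!/^[^/\s]+\/[^/\s]+$/.test(repo)) {
    throw new Error(`Expected owner/repo format, got: ${repo}`);
  }
  return repo;
}

function validateTimeRange(from: string, to: string): { from: string; to: string } {
  const fromMs = Date.parse(from);
  const toMs = Date.parse(to);
  if (Number.isNaN(fromMs) || Number.isNaN(toMs)) {
    throw new Error('Timestamps must be ISO 8601, e.g. 2024-01-01T00:00:00Z');
  }
  if (fromMs > toMs) throw new Error('"from" must not be later than "to"');
  return { from, to };
}
```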
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior by detailing what metrics are aggregated (e.g., PRs with state breakdown, comments with review/issue breakdown, reviews with state breakdown, code changes) and mentions it 'combines data from multiple sources internally,' adding useful context. However, it lacks information on potential limitations like rate limits, error handling, or authentication needs, which slightly reduces transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded, starting with a clear purpose statement followed by detailed metrics and use cases. It efficiently conveys comprehensive information without unnecessary fluff. However, the inclusion of example use cases and a detailed returns section, while helpful, adds some length that could be slightly trimmed for optimal conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (aggregating multiple metrics) and the absence of annotations and output schema, the description does a good job of providing context. It details the returned stats comprehensively, covering all key metrics. However, it could be more complete by addressing potential behavioral aspects like performance implications or data freshness, which would enhance usability for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, providing clear details for all four parameters (username, repo, from, to). The description does not add any additional semantic meaning beyond what the schema already specifies, such as format nuances or constraints. According to the rules, with high schema coverage (>80%), the baseline score is 3, as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get comprehensive repository statistics for a user within a time frame') and distinguishes it from sibling tools by emphasizing it's 'the most efficient tool for getting a complete overview' and 'combines data from multiple sources internally.' It explicitly mentions metrics like PRs, comments, reviews, and code changes, making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives by stating 'This is the most efficient tool for getting a complete overview of user activity in a repository' and listing example use cases like performance reviews and developer metrics reports. It implies alternatives (sibling tools like github.getAuthoredPRs) are for more specific, granular queries, making the context clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/radireddy/github-mcp'
