
GitHub PR Issue Analyser

by saidsef

get_user_activities

Fetch GitHub user activities with optional organization, repository, and date range filters for targeted drill-down.

Instructions

Get user activities with optional filtering by org, repo, and date range using GraphQL API.

This method provides a drill-down capability:

  • username only: Get activities across all repos/orgs

  • username + org: Filter activities to specific organization

  • username + org + repo: Drill down to specific repository

  • any of the above + date range: Filter by time period (ISO 8601 format: "2024-01-01T00:00:00Z")
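A minimal sketch of how these combinations could map to GraphQL query variables. The helper name is hypothetical; the repo-requires-org rule comes from the input schema, and the variable names mirror the implementation below.

```python
from typing import Any

def build_activity_filters(
    username: str,
    org: str = "",
    repo: str = "",
    since: str = "",
    until: str = "",
) -> dict[str, Any]:
    # Mirrors the drill-down rules: repo only makes sense inside an org,
    # and since/until become query variables only when supplied.
    if repo and not org:
        raise ValueError("repo filter requires org")
    variables: dict[str, Any] = {"username": username}
    if since:
        variables["since"] = since
    if until:
        variables["until"] = until
    return variables

# username only -> broadest query
build_activity_filters("octocat")
# full drill-down with a date window
build_activity_filters(
    "octocat", org="github", repo="docs",
    since="2024-01-01T00:00:00Z", until="2024-03-31T23:59:59Z",
)
```

Note that org and repo are not sent as query variables; the implementation below applies them as client-side filters over the returned contributions.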

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| username | Yes | GitHub username to fetch activities for | |
| org | No | Optional organization name to filter by | |
| repo | No | Optional repository name to filter by (requires org) | |
| since | No | Optional start date in ISO 8601 format | |
| until | No | Optional end date in ISO 8601 format | |
| max_results | No | Maximum number of results per category | 50 |
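For example, an arguments object matching this schema (values illustrative):

```json
{
  "username": "octocat",
  "org": "github",
  "since": "2024-01-01T00:00:00Z",
  "until": "2024-03-31T23:59:59Z",
  "max_results": 25
}
```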

Output Schema

| Name | Required | Description |
| --- | --- | --- |
| username | Yes | The GitHub username |
| date_range | Yes | Applied date filter (since/until), if any |
| total_contributions | Yes | Summary counts for commits, PRs, issues, reviews |
| commits | Yes | List of commit contributions |
| pull_requests | Yes | List of PR contributions |
| issues | Yes | List of issue contributions |
| reviews | Yes | List of PR review contributions |
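An illustrative result shape, with field names taken from the implementation below (all values invented):

```json
{
  "username": "octocat",
  "date_range": {
    "since": "2024-01-01T00:00:00Z",
    "until": "2024-03-31T23:59:59Z"
  },
  "total_contributions": {
    "commits": 12,
    "pull_requests": 4,
    "issues": 2,
    "reviews": 3
  },
  "commits": [
    {
      "repo": "docs",
      "owner": "github",
      "commit_count": 5,
      "url": "https://github.com/github/docs",
      "date": "2024-01-15T00:00:00Z"
    }
  ],
  "pull_requests": [],
  "issues": [],
  "reviews": []
}
```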

Implementation Reference

  • The get_user_activities method in the GitHubIntegration class. It fetches user activities (commits, PRs, issues, reviews) from GitHub using a GraphQL query, with optional filtering by org, repo, and date range, and returns a UserActivityResult TypedDict.
    def get_user_activities(
        self,
        username: str,
        org: str = "",
        repo: str = "",
        since: str = "",
        until: str = "",
        max_results: int = 50,
    ) -> UserActivityResult:
        """
        Get user activities with optional filtering by org, repo, and date range using GraphQL API.
    
        This method provides a drill-down capability:
        - username only: Get activities across all repos/orgs
        - username + org: Filter activities to specific organization
        - username + org + repo: Drill down to specific repository
        - + date range: Filter by time period (ISO 8601 format: "2024-01-01T00:00:00Z")
    
        Args:
            username: GitHub username to fetch activities for
            org: Optional organization name to filter by
            repo: Optional repository name to filter by (requires org)
            since: Optional start date in ISO 8601 format
            until: Optional end date in ISO 8601 format
            max_results: Maximum number of results per category (default: 50)
    
        Returns:
            UserActivityResult: Dictionary containing:
                - username: The GitHub username
                - date_range: Applied date filter (since/until) if any
                - total_contributions: Summary counts for commits, PRs, issues, reviews
                - commits: List of commit contributions
                - pull_requests: List of PR contributions
                - issues: List of issue contributions
                - reviews: List of PR review contributions
    
        Raises:
            GitHubNotFoundError: If the user, org, or repo is not found
            GitHubAPIError: If the API request fails
        """
        logger.info(f"Fetching user activities for {username} (org={org}, repo={repo}, since={since}, until={until})")
    
        try:
            variables: dict[str, Any] = {"username": username}
            if since:
                variables["since"] = since
            if until:
                variables["until"] = until
    
            # Execute the contributions query
            result = self.graphql.execute_query(
                USER_CONTRIBUTIONS_QUERY,
                variables=variables,
                token=self._resolve_token(),
            )
    
            user_data = result.get("user")
            if not user_data:
                raise GitHubNotFoundError(f"User '{username}' not found")
    
            collection = user_data.get("contributionsCollection", {})
    
            # Build date_range info
            date_range = None
            if since or until:
                date_range = {
                    "since": since if since else collection.get("startedAt", ""),
                    "until": until if until else collection.get("endedAt", ""),
                }
    
            # Process contributions and apply org/repo filters
            commits = []
            pull_requests = []
            issues = []
            reviews = []
    
            # Process commit contributions
            for repo_contrib in collection.get("commitContributionsByRepository", []):
                repo_info = repo_contrib["repository"]
                owner = repo_info["owner"]["login"]
                repo_name = repo_info["name"]
    
                # Apply org filter
                if org and owner.lower() != org.lower():
                    continue
                # Apply repo filter
                if repo and repo_name.lower() != repo.lower():
                    continue
    
                for contrib in repo_contrib.get("contributions", {}).get("nodes", []):
                    if len(commits) >= max_results:
                        break
                    commits.append(
                        {
                            "repo": repo_name,
                            "owner": owner,
                            "commit_count": contrib.get("commitCount", 0),
                            "url": contrib.get("url", ""),
                            "date": contrib.get("occurredAt", ""),
                        }
                    )
    
            # Process PR contributions
            for repo_contrib in collection.get("pullRequestContributionsByRepository", []):
                repo_info = repo_contrib["repository"]
                owner = repo_info["owner"]["login"]
                repo_name = repo_info["name"]
    
                # Apply filters
                if org and owner.lower() != org.lower():
                    continue
                if repo and repo_name.lower() != repo.lower():
                    continue
    
                for contrib in repo_contrib.get("contributions", {}).get("nodes", []):
                    if len(pull_requests) >= max_results:
                        break
                    pr = contrib["pullRequest"]
                    pull_requests.append(
                        {
                            "repo": repo_name,
                            "owner": owner,
                            "number": pr["number"],
                            "title": pr["title"],
                            "state": pr["state"],
                            "url": pr["url"],
                            "created": pr["createdAt"],
                            "merged": pr.get("merged", False),
                        }
                    )
    
            # Process issue contributions
            for repo_contrib in collection.get("issueContributionsByRepository", []):
                repo_info = repo_contrib["repository"]
                owner = repo_info["owner"]["login"]
                repo_name = repo_info["name"]
    
                # Apply filters
                if org and owner.lower() != org.lower():
                    continue
                if repo and repo_name.lower() != repo.lower():
                    continue
    
                for contrib in repo_contrib.get("contributions", {}).get("nodes", []):
                    if len(issues) >= max_results:
                        break
                    issue = contrib["issue"]
                    issues.append(
                        {
                            "repo": repo_name,
                            "owner": owner,
                            "number": issue["number"],
                            "title": issue["title"],
                            "state": issue["state"],
                            "url": issue["url"],
                            "created": issue["createdAt"],
                        }
                    )
    
            # Process review contributions
            for repo_contrib in collection.get("pullRequestReviewContributionsByRepository", []):
                repo_info = repo_contrib["repository"]
                owner = repo_info["owner"]["login"]
                repo_name = repo_info["name"]
    
                # Apply filters
                if org and owner.lower() != org.lower():
                    continue
                if repo and repo_name.lower() != repo.lower():
                    continue
    
                for contrib in repo_contrib.get("contributions", {}).get("nodes", []):
                    if len(reviews) >= max_results:
                        break
                    review = contrib["pullRequestReview"]
                    pr = contrib["pullRequest"]
                    reviews.append(
                        {
                            "repo": repo_name,
                            "owner": owner,
                            "pr_number": pr["number"],
                            "pr_title": pr["title"],
                            "pr_url": pr["url"],
                            "review_state": review["state"],
                            "review_url": review["url"],
                            "date": contrib["occurredAt"],
                        }
                    )
    
            activity_result: UserActivityResult = {
                "username": username,
                "date_range": date_range,
                "total_contributions": {
                    "commits": collection.get("totalCommitContributions", 0),
                    "pull_requests": collection.get("totalPullRequestContributions", 0),
                    "issues": collection.get("totalIssueContributions", 0),
                    "reviews": collection.get("totalPullRequestReviewContributions", 0),
                },
                "commits": commits,
                "pull_requests": pull_requests,
                "issues": issues,
                "reviews": reviews,
            }
    
            logger.info(
                f"Successfully fetched activities: {len(commits)} commits, "
                f"{len(pull_requests)} PRs, {len(issues)} issues, {len(reviews)} reviews"
            )
            return activity_result
    
        except GitHubNotFoundError:
            raise
        except Exception as e:
            logger.error(f"Error fetching user activities for {username}: {e}")
            raise GitHubAPIError(f"Failed to fetch user activities: {e}") from e
  • UserActivityResult TypedDict defining the return type for get_user_activities, including username, date_range, total_contributions, commits, pull_requests, issues, and reviews fields.
    type UserActivityResult = TypedDict(
        "UserActivityResult",
        {  # pyright: ignore[reportInvalidTypeForm]
            "username": str,
            "date_range": dict[str, str] | None,
            "total_contributions": dict[str, int],
            "commits": list[dict[str, Any]],
            "pull_requests": list[dict[str, Any]],
            "issues": list[dict[str, Any]],
            "reviews": list[dict[str, Any]],
        },
    )
  • USER_CONTRIBUTIONS_QUERY GraphQL query used by get_user_activities to fetch commit, PR, issue, and review contributions from the GitHub GraphQL API (v4).
    USER_CONTRIBUTIONS_QUERY = """
    query($username: String!, $since: DateTime, $until: DateTime) {
      user(login: $username) {
        login
        contributionsCollection(from: $since, to: $until) {
          startedAt
          endedAt
          totalCommitContributions
          totalPullRequestContributions
          totalIssueContributions
          totalPullRequestReviewContributions
          commitContributionsByRepository(maxRepositories: 100) {
            repository {
              name
              owner {
                login
              }
              url
            }
            contributions(first: 100) {
              totalCount
              nodes {
                occurredAt
                commitCount
                url
              }
            }
          }
          pullRequestContributionsByRepository(maxRepositories: 100) {
            repository {
              name
              owner {
                login
              }
              url
            }
            contributions(first: 100) {
              totalCount
              nodes {
                occurredAt
                pullRequest {
                  number
                  title
                  state
                  url
                  createdAt
                  merged
                }
              }
            }
          }
          issueContributionsByRepository(maxRepositories: 100) {
            repository {
              name
              owner {
                login
              }
              url
            }
            contributions(first: 100) {
              totalCount
              nodes {
                occurredAt
                issue {
                  number
                  title
                  state
                  url
                  createdAt
                }
              }
            }
          }
          pullRequestReviewContributionsByRepository(maxRepositories: 100) {
            repository {
              name
              owner {
                login
              }
              url
            }
            contributions(first: 100) {
              totalCount
              nodes {
                occurredAt
                pullRequest {
                  number
                  title
                  url
                }
                pullRequestReview {
                  state
                  url
                }
              }
            }
          }
        }
      }
    }
    """
  • The _register_tools and register_tools methods in PRIssueAnalyser that dynamically register all public methods from GitHubIntegration (including get_user_activities) as MCP tools via self.mcp.add_tool().
    def _register_tools(self):
        self.register_tools(self.gi)
        self.register_tools(self.ip)
        self.mcp.add_provider(SkillsDirectoryProvider(Path(__file__).parent / "skills"))
    
    def register_tools(self, methods: Any = None) -> None:
        for name in dir(methods):
            if name.startswith("_"):
                continue
            method = getattr(methods, name)
            if inspect.isroutine(method):
                self.mcp.add_tool(method)
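The reflection pattern above can be exercised in isolation with stand-in classes (names hypothetical, no MCP dependency): every public routine on the provider object is collected, and names with a leading underscore are skipped.

```python
import inspect

class ToyIntegration:
    """Stand-in for GitHubIntegration: public methods become tools."""
    def get_user_activities(self, username: str) -> dict:
        return {"username": username}

    def _resolve_token(self) -> str:  # private: will not be registered
        return "token"

class ToyRegistry:
    """Stand-in for the MCP server's add_tool collection."""
    def __init__(self) -> None:
        self.tools: dict = {}

    def add_tool(self, fn) -> None:
        self.tools[fn.__name__] = fn

def register_tools(registry: ToyRegistry, provider: object) -> None:
    # Same filter as the source: skip private names, keep routines only
    for name in dir(provider):
        if name.startswith("_"):
            continue
        method = getattr(provider, name)
        if inspect.isroutine(method):
            registry.add_tool(method)

registry = ToyRegistry()
register_tools(registry, ToyIntegration())
# registry.tools now contains "get_user_activities" but not "_resolve_token"
```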
  • UserSearchResult TypedDict used for the search_user method (companion to get_user_activities in the user-related functionality).
    type UserSearchResult = TypedDict(
        "UserSearchResult",
        {  # pyright: ignore[reportInvalidTypeForm]
            "login": str,
            "name": str | None,
            "email": str | None,
            "company": str | None,
            "location": str | None,
            "bio": str | None,
            "url": str,
            "avatar_url": str | None,
            "created_at": str,
            "updated_at": str,
            "followers": int,
            "following": int,
            "public_repos": int,
            "recent_repos": list[dict[str, Any]],
            "organizations": list[dict[str, Any]],
        },
    )
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits. It mentions using the GraphQL API but does not address rate limits, pagination behavior, error handling, or data freshness. The max_results parameter hints at result limiting, but gives no details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with three sentences and a bullet list. Information is front-loaded with the core purpose, and every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, drill-down logic) and the presence of an output schema, the description covers filtering patterns and the date format. It could include default ordering or error scenarios, but is sufficient for a query tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds value by explaining drill-down patterns and specifying ISO 8601 date format, going beyond schema descriptions. This enhances the agent's understanding of how parameters interact.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Get user activities' and details the drill-down filtering capabilities by org, repo, and date range. It clearly distinguishes this tool from siblings like list_open_issues_prs or get_pr_content by focusing on user activity data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage patterns (username only, +org, +repo, +date range) that guide the agent on parameter combinations. However, it lacks when-not-to-use guidance or alternatives for similar tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
