
GitHub PR Issue Analyser

by saidsef

list_open_issues_prs

Retrieve open pull requests or issues from GitHub repositories to monitor development progress and manage contributions.

Instructions

Lists open pull requests or issues for a specified GitHub repository owner.

Args:
    repo_owner (str): The owner of the repository.
    issue (Literal['pr', 'issue']): The type of items to list, either 'pr' for pull requests or 'issue' for issues. Defaults to 'pr'.
    filtering (Literal['user', 'owner', 'involves']): The filtering criterion for the search. Defaults to 'involves'.
    per_page (Annotated[int, PerPage]): The number of results to return per page, range 1-100. Defaults to 50.
    page (int): The page number to retrieve. Defaults to 1.

Returns:
    Dict[str, Any]: A dictionary containing the list of open pull requests or issues, depending on the value of the issue parameter. On failure, a dictionary with "status" and "message" keys describing the error.

Error Handling:
    Logs an error message and prints the traceback if the request fails or an exception is raised.
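Based on the implementation below, a successful result for the default `issue='pr'` is keyed as `open_prs` (the key is derived from the `issue` argument). A sketch of the shape, with made-up values:

```python
# Illustrative shape of a successful response; all values are invented.
example_result = {
    "total": 2,
    "open_prs": [  # key would be "open_issues" when issue='issue'
        {
            "url": "https://github.com/saidsef/example/pull/12",
            "title": "Example PR",
            "number": 12,
            "state": "open",
            "created_at": "2024-01-01T00:00:00Z",
            "updated_at": "2024-01-02T00:00:00Z",
            "author": "saidsef",
            "label_names": ["enhancement"],
            "is_draft": False,
        }
    ],
}
print(sorted(example_result["open_prs"][0]))
```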

Input Schema

| Name       | Required | Description | Default  |
|------------|----------|-------------|----------|
| repo_owner | Yes      |             |          |
| issue      | No       |             | pr       |
| filtering  | No       |             | involves |
| per_page   | No       |             |          |
| page       | No       |             |          |

Output Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| result | Yes      |             |         |

Implementation Reference

  • The core handler function implementing the 'list_open_issues_prs' tool logic. It queries GitHub's search API to retrieve open PRs or issues filtered by repo owner, with pagination and customizable filtering options.
    def list_open_issues_prs(
            self,
            repo_owner: str,
            issue: Literal['pr', 'issue'] = 'pr',
            filtering: Literal['user', 'owner', 'involves'] = 'involves',
            per_page: Annotated[PerPage, "Number of results per page (1-100)"] = 50,
            page: int = 1
    ) -> Dict[str, Any]:
        """
        Lists open pull requests or issues for a specified GitHub repository owner.
        Args:
            repo_owner (str): The owner of the repository.
            issue (Literal['pr', 'issue']): The type of items to list, either 'pr' for pull requests or 'issue' for issues. Defaults to 'pr'.
            filtering (Literal['user', 'owner', 'involves']): The filtering criteria for the search. Defaults to 'involves'.
            per_page (Annotated[int, PerPage]): The number of results to return per page, range 1-100. Defaults to 50.
            page (int): The page number to retrieve. Defaults to 1.
        Returns:
            Dict[str, Any]: A dictionary containing the list of open pull requests or issues, depending on the value of the `issue` parameter. On failure, a dictionary with "status" and "message" keys describing the error.
        Error Handling:
            Logs an error message and prints the traceback if the request fails or an exception is raised.
        """
        logging.info(f"Listing open {issue}s for {repo_owner}")
    
        # Construct the search URL
        search_url = f"https://api.github.com/search/issues?q=is:{issue}+is:open+{filtering}:{repo_owner}&per_page={per_page}&page={page}"
    
        try:
            response = requests.get(search_url, headers=self._get_headers(), timeout=TIMEOUT)
            response.raise_for_status()
            pr_data = response.json()
            open_prs = {
                "total": pr_data['total_count'],
                f"open_{issue}s": [
                    {
                        "url": item['html_url'],
                        "title": item['title'],
                        "number": item['number'],
                        "state": item['state'],
                        "created_at": item['created_at'],
                        "updated_at": item['updated_at'],
                        "author": item['user']['login'],
                        "label_names": [label['name'] for label in item.get('labels', [])],
                        "is_draft": item.get('draft', False),
                    }
                    for item in pr_data['items']
                ]
            }
    
            logging.info(f"Open {issue}s listed successfully")
            return open_prs
    
        except Exception as e:
            logging.error(f"Error listing open {issue}s: {str(e)}")
            traceback.print_exc()
            return {"status": "error", "message": str(e)}
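The query construction above can be exercised in isolation. Here `build_search_url` is a hypothetical helper, not part of the server; it simply mirrors the f-string used inside the handler:

```python
def build_search_url(repo_owner: str, issue: str = "pr",
                     filtering: str = "involves",
                     per_page: int = 50, page: int = 1) -> str:
    """Mirror the search-URL f-string used in list_open_issues_prs."""
    return (
        "https://api.github.com/search/issues"
        f"?q=is:{issue}+is:open+{filtering}:{repo_owner}"
        f"&per_page={per_page}&page={page}"
    )

print(build_search_url("saidsef"))
# → https://api.github.com/search/issues?q=is:pr+is:open+involves:saidsef&per_page=50&page=1
```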
  • Dynamic registration of all public methods from the GitHubIntegration instance (including list_open_issues_prs) as MCP tools via FastMCP.add_tool().
    def _register_tools(self):
        self.register_tools(self.gi)
        self.register_tools(self.ip)
    
    def register_tools(self, methods: Any = None) -> None:
        for name, method in inspect.getmembers(methods):
            if (inspect.isfunction(method) or inspect.ismethod(method)) and not name.startswith("_"):
                self.mcp.add_tool(method)
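The registration pattern above can be sketched in isolation. `FakeMCP` below is a stand-in for FastMCP that only records tool names; the dummy class and its methods are invented for illustration:

```python
import inspect

class FakeMCP:
    """Stand-in for FastMCP: records names of registered tools."""
    def __init__(self):
        self.tools = []
    def add_tool(self, fn):
        self.tools.append(fn.__name__)

class GitHubIntegration:
    def list_open_issues_prs(self, repo_owner): ...
    def get_pr_content(self, pr_number): ...
    def _get_headers(self): ...  # private: skipped by the name filter

mcp = FakeMCP()
gi = GitHubIntegration()
# Same filter as register_tools: public functions/methods only.
for name, method in inspect.getmembers(gi):
    if (inspect.isfunction(method) or inspect.ismethod(method)) and not name.startswith("_"):
        mcp.add_tool(method)

print(mcp.tools)  # → ['get_pr_content', 'list_open_issues_prs']
```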
  • Pydantic validator defining the schema constraint for the 'per_page' parameter (1-100). Used in the tool's Annotated type hint.
    PerPage = conint(ge=1, le=100)
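For illustration, the same 1-100 bound can be expressed without pydantic; `validate_per_page` is a hypothetical equivalent, not code from the server:

```python
def validate_per_page(value: int) -> int:
    """Enforce the same bounds as PerPage = conint(ge=1, le=100)."""
    if not isinstance(value, int) or not 1 <= value <= 100:
        raise ValueError(f"per_page must be an int in 1-100, got {value!r}")
    return value
```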
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions error handling ('Logs an error message and prints the traceback if the request fails') and pagination behavior via 'per_page' and 'page' parameters, but lacks critical details like rate limits, authentication requirements, whether it's a read-only operation, or how filtering criteria ('user', 'owner', 'involves') actually work in practice. For a tool with 5 parameters and no annotation coverage, this is insufficient.
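On the rate-limit point: GitHub's REST API reports its rate-limit state in standard `X-RateLimit-*` response headers, which a caller can inspect before retrying. A minimal sketch (the header values here are made up):

```python
def summarise_rate_limit(headers: dict) -> dict:
    """Extract GitHub's standard rate-limit headers from a response."""
    return {
        "limit": int(headers.get("X-RateLimit-Limit", 0)),
        "remaining": int(headers.get("X-RateLimit-Remaining", 0)),
        "reset_epoch": int(headers.get("X-RateLimit-Reset", 0)),
    }

# Illustrative values; unauthenticated search requests are limited far
# more tightly than authenticated ones.
sample = {
    "X-RateLimit-Limit": "30",
    "X-RateLimit-Remaining": "29",
    "X-RateLimit-Reset": "1700000000",
}
print(summarise_rate_limit(sample))
```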

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, Args, Returns, Error Handling) and uses bullet-like formatting for parameters. It's appropriately sized at about 150 words. However, the 'Error Handling' section could be more concise, and some parameter explanations are brief without being overly verbose, keeping it efficient overall.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters with 0% schema coverage and no annotations, the description provides basic parameter documentation and return value information ('Dict[str, Any]...'). However, it lacks details on authentication, rate limits, pagination behavior beyond parameters, and how filtering works. The output schema exists but isn't detailed here, and the description doesn't fully compensate for the missing behavioral context, making it adequate but with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides basic explanations for all 5 parameters in the 'Args' section, clarifying their purposes and defaults. However, it doesn't fully explain the semantics of 'filtering' criteria (what 'user', 'owner', 'involves' mean operationally) or provide examples. The parameter documentation is present but lacks depth, meeting the baseline for having some information when schema coverage is poor.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Lists open pull requests or issues for a specified GitHub repository owner.' It specifies the verb ('Lists'), resource ('open pull requests or issues'), and scope ('for a specified GitHub repository owner'). However, it doesn't explicitly differentiate from siblings like 'get_pr_content' or 'get_user_org_activity' which have different purposes, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_user_org_activity' for broader activity queries or 'create_issue'/'create_pr' for creation operations. There's no context about prerequisites, such as authentication needs or repository access requirements, leaving the agent without usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
