
MCP Git Server

by MementoRC

github_get_pr_checks

Retrieve check run details for a pull request on GitHub, including status and conclusion, to monitor and verify automated workflows in a specified repository.

Instructions

Get check runs for a pull request

Input Schema

| Name       | Required | Description | Default |
|------------|----------|-------------|---------|
| conclusion | No       |             |         |
| pr_number  | Yes      |             |         |
| repo_name  | Yes      |             |         |
| repo_owner | Yes      |             |         |
| status     | No       |             |         |
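To make the schema concrete, here is an illustrative argument payload for a `github_get_pr_checks` call (the values "octocat", "hello-world", and 42 are invented for the example):

```python
# Example arguments for a github_get_pr_checks call, matching the schema above.
args = {
    "repo_owner": "octocat",      # required: GitHub user or org owning the repo
    "repo_name": "hello-world",   # required: repository name
    "pr_number": 42,              # required: pull request number
    "status": "completed",        # optional filter passed to the GitHub API
    # "conclusion": "failure",    # optional filter applied after fetching
}

# Only the three required keys must be present; both filters may be omitted.
required = {"repo_owner", "repo_name", "pr_number"}
assert required <= set(args)
print(sorted(required))
```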

Implementation Reference

  • Main handler function that implements the github_get_pr_checks tool: it fetches the PR's head SHA, retrieves check runs from the GitHub API, optionally filters them by status/conclusion, and formats a readable output with emojis and details.
    async def github_get_pr_checks(
        repo_owner: str,
        repo_name: str,
        pr_number: int,
        status: str | None = None,
        conclusion: str | None = None,
    ) -> str:
        """Get check runs for a pull request"""
        try:
            async with github_client_context() as client:
                # First get the PR to get the head SHA
                pr_response = await client.get(
                    f"/repos/{repo_owner}/{repo_name}/pulls/{pr_number}"
                )
                if pr_response.status != 200:
                    return f"❌ Failed to get PR #{pr_number}: {pr_response.status}"
    
                pr_data = await pr_response.json()
                head_sha = pr_data["head"]["sha"]
    
                # Get check runs for the head commit
                params = {}
                if status:
                    params["status"] = status
    
                checks_response = await client.get(
                    f"/repos/{repo_owner}/{repo_name}/commits/{head_sha}/check-runs",
                    params=params,
                )
                if checks_response.status != 200:
                    return f"❌ Failed to get check runs: {checks_response.status}"
    
                checks_data = await checks_response.json()
    
                # Filter by conclusion if specified
                check_runs = checks_data.get("check_runs", [])
                if conclusion:
                    check_runs = [
                        run for run in check_runs if run.get("conclusion") == conclusion
                    ]
    
                # Format the output
                if not check_runs:
                    return f"No check runs found for PR #{pr_number}"
    
                output = [f"Check runs for PR #{pr_number} (commit {head_sha[:8]}):\n"]
    
                for run in check_runs:
                    status_emoji = {
                        "completed": "✅" if run.get("conclusion") == "success" else "❌",
                        "in_progress": "🔄",
                        "queued": "⏳",
                    }.get(run["status"], "❓")
    
                    output.append(f"{status_emoji} {run['name']}")
                    output.append(f"   Status: {run['status']}")
                    if run.get("conclusion"):
                        output.append(f"   Conclusion: {run['conclusion']}")
                    output.append(f"   Started: {run.get('started_at', 'N/A')}")
                    if run.get("completed_at"):
                        output.append(f"   Completed: {run['completed_at']}")
                    if run.get("html_url"):
                        output.append(f"   URL: {run['html_url']}")
                    output.append("")
    
                return "\n".join(output)
    
        except ValueError as auth_error:
            # Handle authentication/configuration errors specifically
            logger.error(f"Authentication error getting PR checks: {auth_error}")
            return f"❌ {str(auth_error)}"
        except ConnectionError as conn_error:
            # Handle network connectivity issues
            logger.error(f"Connection error getting PR checks: {conn_error}")
            return f"❌ Network connection failed: {str(conn_error)}"
        except Exception as e:
            # Log unexpected errors with full context for debugging
            logger.error(
                f"Unexpected error getting PR checks for PR #{pr_number}: {e}",
                exc_info=True,
            )
            return f"❌ Error getting PR checks: {str(e)}"
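The status-to-emoji mapping in the formatting step can be exercised in isolation. A minimal sketch that reproduces the same dict lookup used in the handler above:

```python
def status_emoji(run: dict) -> str:
    # Mirrors the lookup in github_get_pr_checks: completed runs get a
    # pass/fail mark based on conclusion, pending states get a spinner or
    # hourglass, and any unrecognized status falls back to a question mark.
    return {
        "completed": "✅" if run.get("conclusion") == "success" else "❌",
        "in_progress": "🔄",
        "queued": "⏳",
    }.get(run["status"], "❓")

print(status_emoji({"status": "completed", "conclusion": "success"}))  # ✅
print(status_emoji({"status": "completed", "conclusion": "failure"}))  # ❌
print(status_emoji({"status": "queued"}))                              # ⏳
print(status_emoji({"status": "waiting"}))                             # ❓
```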
  • Pydantic model defining the input schema for the github_get_pr_checks tool, including required repo details, PR number, and optional filters for status and conclusion.
    class GitHubGetPRChecks(BaseModel):
        repo_owner: str
        repo_name: str
        pr_number: int
        status: str | None = None
        conclusion: str | None = None
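The Pydantic model rejects calls that omit a required field. A stdlib-only stand-in using dataclasses sketches the same contract, for illustration (it raises TypeError rather than Pydantic's ValidationError):

```python
from dataclasses import dataclass
from typing import Optional

# Stdlib stand-in for the Pydantic model above: required fields have no
# default, so omitting them fails at construction time, roughly mirroring
# Pydantic's missing-field validation.
@dataclass
class GitHubGetPRChecksArgs:
    repo_owner: str
    repo_name: str
    pr_number: int
    status: Optional[str] = None
    conclusion: Optional[str] = None

ok = GitHubGetPRChecksArgs(repo_owner="octocat", repo_name="hello-world", pr_number=42)
print(ok.status)  # None: both filters default to "no filtering"

try:
    GitHubGetPRChecksArgs(repo_owner="octocat")  # missing repo_name, pr_number
except TypeError as exc:
    print("missing required fields rejected:", type(exc).__name__)
```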
  • ToolDefinition registration in the central tool registry, specifying name, category, description, schema, placeholder handler, and requirements (no repo needed, requires GitHub token).
    ToolDefinition(
        name=GitTools.GITHUB_GET_PR_CHECKS,
        category=ToolCategory.GITHUB,
        description="Get check runs for a pull request",
        schema=GitHubGetPRChecks,
        handler=placeholder_handler,
        requires_repo=False,
        requires_github_token=True,
    ),
  • Handler wrapper creation and registration in the GitToolRouter, binding the actual github_get_pr_checks function from github.api with expected argument order and error handling decorators.
    "github_get_pr_checks": self._create_github_handler(
        github_get_pr_checks,
        ["repo_owner", "repo_name", "pr_number", "status", "conclusion"],
    ),
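The body of `_create_github_handler` is not shown on this page. One plausible shape for such a wrapper, which binds a declared positional argument order onto the tool-call argument dict, is sketched below; the function and variable names other than the argument list are invented for illustration:

```python
import asyncio
from typing import Awaitable, Callable

def create_github_handler(func: Callable[..., Awaitable[str]], arg_order: list[str]):
    # Hypothetical sketch: pull arguments out of the tool-call dict in the
    # declared order, passing None for optional keys the caller omitted.
    async def handler(arguments: dict) -> str:
        return await func(*(arguments.get(name) for name in arg_order))
    return handler

# Stand-in for the real github_get_pr_checks, used only to exercise the wrapper.
async def fake_get_pr_checks(repo_owner, repo_name, pr_number,
                             status=None, conclusion=None) -> str:
    return f"{repo_owner}/{repo_name}#{pr_number} status={status}"

handler = create_github_handler(
    fake_get_pr_checks,
    ["repo_owner", "repo_name", "pr_number", "status", "conclusion"],
)
result = asyncio.run(handler(
    {"repo_owner": "octocat", "repo_name": "hello-world", "pr_number": 42}
))
print(result)  # octocat/hello-world#42 status=None
```

The real implementation also applies error-handling decorators, which are omitted here.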
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool does but doesn't describe how it behaves: no information about rate limits, authentication requirements, pagination, error handling, or what format the check runs are returned in. This is inadequate for a tool that likely interacts with an external API.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no wasted words. It's appropriately sized for a simple tool and front-loads the core purpose immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (5 parameters, no annotations, no output schema), the description is incomplete. It doesn't explain the return values, error conditions, or how the optional filters work. For a GitHub API tool that likely returns structured data about check runs, this leaves significant gaps for an AI agent to understand proper usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but adds no parameter information. It doesn't explain what 'repo_owner', 'repo_name', or 'pr_number' should be, or that 'conclusion' and 'status' are optional filters for check runs. The description fails to provide any semantic context beyond what the bare schema titles offer.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
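One way to lift schema description coverage above 0% is to attach per-field descriptions to the JSON schema the tool publishes. A hedged sketch of what such a fragment could look like (the wording is invented, not taken from the actual server):

```python
# Illustrative JSON-schema fragment with per-parameter descriptions.
input_schema = {
    "type": "object",
    "properties": {
        "repo_owner": {"type": "string",
                       "description": "GitHub user or organization that owns the repository."},
        "repo_name": {"type": "string",
                      "description": "Repository name without the owner prefix."},
        "pr_number": {"type": "integer",
                      "description": "Pull request whose head commit's check runs are fetched."},
        "status": {"type": "string",
                   "description": "Optional filter: queued, in_progress, or completed."},
        "conclusion": {"type": "string",
                       "description": "Optional filter applied after fetching, e.g. success or failure."},
    },
    "required": ["repo_owner", "repo_name", "pr_number"],
}

described = [k for k, v in input_schema["properties"].items() if v.get("description")]
print(len(described))  # 5
```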

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('check runs for a pull request'), making the purpose understandable. It distinguishes from some siblings like 'github_get_pr_details' or 'github_get_pr_status' by focusing specifically on check runs, though it doesn't explicitly contrast with 'github_get_failing_jobs' which might overlap in functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. It doesn't mention when to choose this over 'github_get_pr_status' (which might include status checks) or 'github_get_failing_jobs' (which might focus on failed checks). The description lacks any context about prerequisites or typical use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
