
github_get_pr_files

Retrieve files changed in a GitHub pull request with pagination support. Specify repository owner, name, and PR number to fetch file details, including optional patch data, for efficient review.
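For orientation, the call below is a minimal sketch of invoking this tool from the MCP Python SDK; the connected `session`, the repository values, and the PR number are all hypothetical, and the exact transport setup depends on how the mcp-git server is launched.

```python
# Sketch: invoking github_get_pr_files through an MCP client session.
# Assumes an already-initialized mcp.ClientSession connected to the
# mcp-git server; all argument values here are hypothetical.
result = await session.call_tool(
    "github_get_pr_files",
    arguments={
        "repo_owner": "octocat",
        "repo_name": "hello-world",
        "pr_number": 42,
        "per_page": 50,
        "include_patch": True,
    },
)
print(result.content)  # formatted file list produced by the handler
```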

Instructions

Get files changed in a pull request with pagination support

Input Schema

Name           Required  Description                                      Default
include_patch  No        Include each file's diff/patch text in output    false
page           No        Page of results to fetch                         1
per_page       No        Number of files returned per page                30
pr_number      Yes       Pull request number                              —
repo_name      Yes       Repository name                                  —
repo_owner     Yes       Repository owner (user or organization)          —
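These parameters map directly onto GitHub's list-pull-request-files endpoint, which the handler in the next section calls. For comparison, here is a standalone sketch of the same request using httpx (an assumption; the server uses its own client context), with hypothetical repository values:

```python
import httpx

# GET /repos/{owner}/{repo}/pulls/{pr_number}/files is the GitHub REST
# endpoint this tool wraps; owner, repo, and PR number are hypothetical.
resp = httpx.get(
    "https://api.github.com/repos/octocat/hello-world/pulls/42/files",
    params={"per_page": 30, "page": 1},
    headers={"Accept": "application/vnd.github+json"},
)
resp.raise_for_status()
for f in resp.json():
    print(f["status"], f["filename"], f"+{f['additions']}", f"-{f['deletions']}")
```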

Implementation Reference

  • Main handler that implements the github_get_pr_files tool. It fetches the changed files from the GitHub pull-request files endpoint, formats each entry with a status emoji and addition/deletion counts, and, when requested, appends patch content through PatchMemoryManager so that large diffs cannot overflow the token budget.
```python
async def github_get_pr_files(
    repo_owner: str,
    repo_name: str,
    pr_number: int,
    per_page: int = 30,
    page: int = 1,
    include_patch: bool = False,
) -> str:
    """Get files changed in a pull request with memory-aware patch handling"""
    try:
        async with github_client_context() as client:
            params = {"per_page": per_page, "page": page}
            response = await client.get(
                f"/repos/{repo_owner}/{repo_name}/pulls/{pr_number}/files",
                params=params,
            )
            if response.status != 200:
                return f"❌ Failed to get PR files: {response.status}"

            files = await response.json()
            if not files:
                return f"No files found for PR #{pr_number}"

            output = [f"Files changed in PR #{pr_number}:\n"]
            total_additions = 0
            total_deletions = 0

            # Initialize memory manager for patch processing
            patch_manager = PatchMemoryManager(
                max_patch_size=1000, max_total_memory=50000
            )

            for file in files:
                status_emoji = {
                    "added": "➕",
                    "modified": "📝",
                    "removed": "➖",
                    "renamed": "📝",
                }.get(file.get("status"), "❓")

                additions = file.get("additions", 0)
                deletions = file.get("deletions", 0)
                total_additions += additions
                total_deletions += deletions

                output.append(
                    f"{status_emoji} {file['filename']} (+{additions}, -{deletions})"
                )

                if include_patch and file.get("patch"):
                    # Use memory manager to safely process patch content
                    processed_patch, was_truncated = patch_manager.process_patch(
                        file["patch"]
                    )
                    output.append(processed_patch)
                    if was_truncated:
                        logger.info(
                            f"Patch for {file['filename']} was truncated or "
                            f"skipped for memory management"
                        )

            output.append("")
            output.append(f"Total: +{total_additions}, -{total_deletions}")

            # Add memory usage summary if patches were included
            if include_patch:
                output.append(
                    f"\nMemory usage: {patch_manager.current_memory_usage}"
                    f"/{patch_manager.max_total_memory} bytes"
                )
                output.append(f"Patches processed: {patch_manager.patches_processed}")

            return "\n".join(output)

    except ValueError as auth_error:
        logger.error(f"Authentication error getting PR files: {auth_error}")
        return f"❌ {str(auth_error)}"
    except ConnectionError as conn_error:
        logger.error(f"Connection error getting PR files: {conn_error}")
        return f"❌ Network connection failed: {str(conn_error)}"
    except Exception as e:
        logger.error(
            f"Unexpected error getting PR files for PR #{pr_number}: {e}",
            exc_info=True,
        )
        return f"❌ Error getting PR files: {str(e)}"
```
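Given that formatting logic, the rendered result for a hypothetical two-file PR (no patches requested) would look roughly like:

```
Files changed in PR #42:

📝 src/app.py (+12, -4)
➕ tests/test_app.py (+30, -0)

Total: +42, -4
```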
  • Pydantic input schema/model for validating parameters to the github_get_pr_files tool.
```python
class GitHubGetPRFiles(BaseModel):
    repo_owner: str
    repo_name: str
    pr_number: int
    per_page: int = 30
    page: int = 1
    include_patch: bool = False
```
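As a quick illustration of the validation this model provides (standard Pydantic behavior; argument values are hypothetical):

```python
from pydantic import ValidationError

# Uses the GitHubGetPRFiles model defined above.
args = GitHubGetPRFiles(repo_owner="octocat", repo_name="hello-world", pr_number=42)
print(args.per_page)  # 30 -- defaults are applied

try:
    GitHubGetPRFiles(repo_owner="octocat", repo_name="hello-world")
except ValidationError as err:
    print(err)  # pr_number: field required
```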
  • ToolDefinition registration in the default GitHub tools list within ToolRegistry.initialize_default_tools(), associating name, description, schema, and metadata.
```python
ToolDefinition(
    name=GitTools.GITHUB_GET_PR_FILES,
    category=ToolCategory.GITHUB,
    description="Get files changed in a pull request",
    schema=GitHubGetPRFiles,
    handler=placeholder_handler,
    requires_repo=False,
    requires_github_token=True,
),
```
  • Handler wrapper registration in CallToolHandler._get_github_handlers(), creating a decorated async handler wrapper that calls the actual github_get_pr_files function from github.api.
    "github_get_pr_files": self._create_github_handler( github_get_pr_files, [ "repo_owner", "repo_name", "pr_number", "per_page", "page", "include_patch", ], ),
  • Supporting class used by the handler for memory-aware management of patch content to avoid exceeding token limits when including large diffs.
```python
class PatchMemoryManager:
    """Memory-aware patch content manager with configurable limits and streaming support."""

    def __init__(self, max_patch_size: int = 1000, max_total_memory: int = 50000):
        self.max_patch_size = max_patch_size
        self.max_total_memory = max_total_memory
        self.current_memory_usage = 0
        self.patches_processed = 0

    def can_include_patch(self, patch_size: int) -> bool:
        """Check if patch can be included within memory constraints."""
        return (self.current_memory_usage + patch_size) <= self.max_total_memory

    def process_patch(self, patch_content: str) -> tuple[str, bool]:
        """Process patch content with memory management and truncation.

        Returns:
            tuple[str, bool]: (processed_content, was_truncated)
        """
        patch_size = len(patch_content)
        self.patches_processed += 1

        # Check the total memory budget first
        if not self.can_include_patch(patch_size):
            logger.warning(
                f"Patch #{self.patches_processed} skipped: exceeds memory budget "
                f"({patch_size} bytes, "
                f"{self.current_memory_usage}/{self.max_total_memory} used)"
            )
            return (
                f"[Patch skipped - memory limit reached "
                f"({self.current_memory_usage}/{self.max_total_memory} bytes used)]",
                True,
            )

        # Apply the per-patch size limit
        if patch_size > self.max_patch_size:
            truncated_patch = patch_content[: self.max_patch_size]
            self.current_memory_usage += self.max_patch_size
            logger.info(
                f"Patch #{self.patches_processed} truncated: "
                f"{patch_size} -> {self.max_patch_size} bytes"
            )
            return (
                f"```diff\n{truncated_patch}\n"
                f"... [truncated {patch_size - self.max_patch_size} chars]\n```",
                True,
            )
        else:
            self.current_memory_usage += patch_size
            return f"```diff\n{patch_content}\n```", False
```
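A short demonstration of the three code paths, using toy limits so both truncation and skipping become visible. This assumes it runs in the module where PatchMemoryManager and its logger are defined, and the patch text is fabricated for illustration:

```python
mgr = PatchMemoryManager(max_patch_size=20, max_total_memory=200)

small = "@@ -1 +1 @@\n-a\n+b"                   # 17 chars: within both limits
big = "@@ -1,25 +1,25 @@\n" + "-x\n+y\n" * 25   # 168 chars: over max_patch_size

print(mgr.process_patch(small)[1])  # False -- included verbatim (17/200 used)
print(mgr.process_patch(big)[1])    # True  -- truncated to 20 chars (37/200 used)
print(mgr.process_patch(big)[1])    # True  -- skipped: 37 + 168 exceeds the budget
```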
