PyGithub MCP Server

by AstroMined

search_repositories

Search GitHub repositories with customizable parameters (query, page number, results per page) to find relevant projects.

Instructions

Search for GitHub repositories.

Args:
    params: Dictionary with search parameters
        - query: Search query
        - page: Page number for pagination (optional)
        - per_page: Results per page (optional)

Returns:
    MCP response with matching repositories
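A minimal example payload for the `params` argument (the query string uses GitHub's search syntax; values here are illustrative):

```python
# Illustrative params payload for the search_repositories tool.
params = {
    "query": "language:python stars:>1000",  # GitHub repository search syntax
    "page": 1,        # optional: page number, starting at 1
    "per_page": 10,   # optional: results per page, up to 100
}
```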

Input Schema

Name     Required   Description                       Default
params   Yes        Dictionary of search parameters   (none)

Implementation Reference

  • The MCP tool handler function decorated with @tool(). It receives parameters as a dict, validates them with SearchRepositoriesParams, delegates to the repositories operation, handles errors, and returns an MCP-formatted response.
    @tool()
    def search_repositories(params: Dict) -> Dict:
        """Search for GitHub repositories.
    
        Args:
            params: Dictionary with search parameters
                - query: Search query
                - page: Page number for pagination (optional)
                - per_page: Results per page (optional)
    
        Returns:
            MCP response with matching repositories
        """
        try:
            logger.debug(f"search_repositories called with params: {params}")
            # Convert dict to Pydantic model
            search_params = SearchRepositoriesParams(**params)
            
            # Call operation
            result = repositories.search_repositories(search_params)
            
            logger.debug(f"Got {len(result)} results")
            return {
                "content": [{"type": "text", "text": json.dumps(result, indent=2)}]
            }
        except ValidationError as e:
            logger.error(f"Validation error: {e}")
            return {
                "content": [{"type": "error", "text": f"Validation error: {str(e)}"}],
                "is_error": True
            }
        except GitHubError as e:
            logger.error(f"GitHub error: {e}")
            return {
                "content": [{"type": "error", "text": format_github_error(e)}],
                "is_error": True
            }
        except Exception as e:
            logger.error(f"Unexpected error: {e}")
            logger.error(traceback.format_exc())
            error_msg = str(e) if str(e) else "An unexpected error occurred"
            return {
                "content": [{"type": "error", "text": f"Internal server error: {error_msg}"}],
                "is_error": True
            }
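The handler always returns one of two fixed envelope shapes; a standalone sketch of that contract (no GitHub calls, helper names are illustrative):

```python
import json

def success_envelope(result):
    """Build the success envelope: a single text block with JSON-encoded results."""
    return {"content": [{"type": "text", "text": json.dumps(result, indent=2)}]}

def error_envelope(message):
    """Build the error envelope; is_error flags the failure to the MCP client."""
    return {"content": [{"type": "error", "text": message}], "is_error": True}

ok = success_envelope([{"full_name": "octocat/Hello-World"}])
bad = error_envelope("Validation error: query cannot be empty")
```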
  • Pydantic BaseModel defining the input schema for the search_repositories tool, including query (required), page and per_page (optional) with validation rules.
    class SearchRepositoriesParams(BaseModel):
        """Parameters for searching repositories."""
    
        model_config = ConfigDict(strict=True)
        
        query: str = Field(..., description="Search query")
        page: Optional[int] = Field(None, description="Page number for pagination")
        per_page: Optional[int] = Field(
            None, description="Number of results per page (default: 30, max: 100)"
        )
    
        @field_validator('query')
        @classmethod
        def validate_query(cls, v):
            """Validate that query is not empty."""
            if not v.strip():
                raise ValueError("query cannot be empty")
            return v
    
        @field_validator('page')
        @classmethod
        def validate_page(cls, v):
            """Validate that page is a positive integer."""
            if v is not None and v < 1:
                raise ValueError("page must be a positive integer")
            return v
    
        @field_validator('per_page')
        @classmethod
        def validate_per_page(cls, v):
            """Validate that per_page is within allowed range."""
            if v is not None:
                if v < 1:
                    raise ValueError("per_page must be a positive integer")
                if v > 100:
                    raise ValueError("per_page cannot exceed 100")
            return v
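The same constraints can be expressed without Pydantic; a pure-Python sketch of the validation rules above (illustrative, not the project's code):

```python
def validate_search_params(params: dict) -> dict:
    """Apply the query/page/per_page rules enforced by SearchRepositoriesParams."""
    query = params.get("query")
    if not isinstance(query, str) or not query.strip():
        raise ValueError("query cannot be empty")
    page = params.get("page")
    if page is not None and (not isinstance(page, int) or page < 1):
        raise ValueError("page must be a positive integer")
    per_page = params.get("per_page")
    if per_page is not None:
        if not isinstance(per_page, int) or per_page < 1:
            raise ValueError("per_page must be a positive integer")
        if per_page > 100:
            raise ValueError("per_page cannot exceed 100")
    return params
```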
  • Registration function that imports the search_repositories handler and registers it (along with other repository tools) with the MCP server instance using register_tools.
    def register(mcp: FastMCP) -> None:
        """Register all repository tools with the MCP server.
    
        Args:
            mcp: The MCP server instance
        """
        from pygithub_mcp_server.tools import register_tools
        from .tools import (
            get_repository,
            create_repository,
            fork_repository,
            search_repositories,
            get_file_contents,
            create_or_update_file,
            push_files,
            create_branch,
            list_commits
        )
    
        # Register all repository tools
        register_tools(mcp, [
            get_repository,
            create_repository,
            fork_repository,
            search_repositories,
            get_file_contents,
            create_or_update_file,
            push_files,
            create_branch,
            list_commits
        ])
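The fan-out pattern above can be illustrated with a toy registry (FastMCP's real registration API differs; the names here are stand-ins):

```python
class ToyRegistry:
    """Minimal stand-in for an MCP server's tool registry."""
    def __init__(self):
        self.tools = {}

    def register(self, fn):
        self.tools[fn.__name__] = fn

def register_tools(registry, fns):
    """Register each handler in turn, mirroring the list-based call above."""
    for fn in fns:
        registry.register(fn)

def search_repositories(params):  # placeholder handler
    return {"content": []}

reg = ToyRegistry()
register_tools(reg, [search_repositories])
```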
  • Helper function implementing the core search logic: performs GitHub API search, handles pagination, converts results to internal schema, and raises GitHubError on failure.
    def search_repositories(params: SearchRepositoriesParams) -> List[Dict[str, Any]]:
        """Search for repositories.
    
        Args:
            params: Parameters for searching repositories
    
        Returns:
            List of matching repositories in our schema
    
        Raises:
            GitHubError: If repository search fails
        """
        logger.debug(f"Searching repositories with query: {params.query}")
        try:
            client = GitHubClient.get_instance()
            github = client.github
            
            # Search repositories
            paginated_repos = github.search_repositories(query=params.query)
            
            # Handle pagination
            repos = get_paginated_items(paginated_repos, params.page, params.per_page)
            
            # Convert repositories to our schema
            return [convert_repository(repo) for repo in repos]
        except GithubException as e:
            logger.error(f"GitHub exception when searching repositories: {str(e)}")
            raise client._handle_github_exception(e, resource_hint="repository")
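The pagination step can be sketched as a simple slice; this is a guess at get_paginated_items' behavior (the real helper works on PyGithub's lazy PaginatedList, which fetches pages from the API on demand), using GitHub's defaults of 30 per page:

```python
def get_paginated_items(items, page=None, per_page=None):
    """Slice a sequence the way page/per_page pagination would (illustrative)."""
    per_page = per_page or 30          # GitHub's default page size
    page = page or 1                   # pages are 1-indexed
    start = (page - 1) * per_page
    return list(items)[start:start + per_page]

repos = [f"repo-{i}" for i in range(75)]
```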
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions pagination parameters (page, per_page), it doesn't describe rate limits, authentication requirements, error conditions, or what the 'MCP response' contains (e.g., structure, fields). For a search tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Args, Returns) and uses bullet points for parameters. It's front-loaded with the core purpose. However, the 'Returns' section is vague ('MCP response with matching repositories'), and some sentences could be more precise (e.g., specifying what 'query' supports).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (search operation with pagination), lack of annotations, no output schema, and low schema coverage (0%), the description is incomplete. It doesn't explain the response format, error handling, or important constraints like rate limits or query syntax. For a tool with one nested parameter object, more context is needed for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful parameter details beyond the schema: it explains that 'params' is a dictionary containing 'query', 'page', and 'per_page'. However, schema description coverage is 0%, and the description doesn't cover all potential parameters (e.g., sort, order, language filters mentioned in GitHub's API). It partially compensates but doesn't fully bridge the coverage gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search for GitHub repositories.' This is a specific verb+resource combination that distinguishes it from siblings like get_repository (which fetches a single repository) or create_repository (which creates new ones). However, it doesn't explicitly differentiate from list_issues or list_commits, which are also search/list operations but on different resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to prefer search_repositories over get_repository (for single repo lookup) or list_issues (for issue searches), nor does it specify prerequisites like authentication needs or rate limits. The agent must infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/AstroMined/pygithub-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.