PyGithub MCP Server

by AstroMined

add_issue_labels

Add labels to GitHub issues to categorize, prioritize, or organize them. Specify repository details, issue number, and labels to apply.

Instructions

Add labels to an issue.

Args:
    params: Parameters for adding labels including:
        - owner: Repository owner (user or organization)
        - repo: Repository name
        - issue_number: Issue number
        - labels: Labels to add

Returns:
    Updated list of labels from GitHub API

Input Schema

Name     Required   Description   Default
params   Yes        -             -
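
For illustration, here is a request payload an MCP client might send; all values are hypothetical, and the tool's fields are wrapped in the single required params object:

    {
        "params": {
            "owner": "octocat",
            "repo": "hello-world",
            "issue_number": 42,
            "labels": ["bug", "help wanted"]
        }
    }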

Implementation Reference

  • The MCP tool handler, decorated with @tool(). It validates input with AddIssueLabelsParams, delegates to operations.issues.add_issue_labels(), handles GitHub and unexpected errors, and formats an MCP-compatible response containing either JSON content or an error.
    @tool()
    def add_issue_labels(params: AddIssueLabelsParams) -> dict:
        """Add labels to an issue.
        
        Args:
            params: Parameters for adding labels including:
                - owner: Repository owner (user or organization)
                - repo: Repository name
                - issue_number: Issue number
                - labels: Labels to add
        
        Returns:
            Updated list of labels from GitHub API
        """
        try:
            logger.debug(f"add_issue_labels called with params: {params}")
            # Pass the Pydantic model directly to the operation
            result = issues.add_issue_labels(params)
            logger.debug(f"Got result: {result}")
            return {"content": [{"type": "text", "text": json.dumps(result, indent=2)}]}
        except GitHubError as e:
            logger.error(f"GitHub error: {e}")
            return {
                "content": [{"type": "error", "text": format_github_error(e)}],
                "is_error": True
            }
        except Exception as e:
            logger.error(f"Unexpected error: {e}")
            logger.error(traceback.format_exc())
            error_msg = str(e) if str(e) else "An unexpected error occurred"
            return {
                "content": [{"type": "error", "text": f"Internal server error: {error_msg}"}],
                "is_error": True
            }
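
Based on the handler above, a sketch of the two response envelopes it can return; the label data is illustrative, since the exact fields depend on convert_label:

    import json

    # Success: the operation result is serialized as pretty-printed JSON text.
    result = [{"name": "bug", "color": "d73a4a"}]
    success = {
        "content": [{"type": "text", "text": json.dumps(result, indent=2)}]
    }

    # Failure: a formatted error message plus the is_error flag.
    failure = {
        "content": [{"type": "error", "text": "Internal server error: example"}],
        "is_error": True,
    }
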
  • Pydantic model defining the input schema for the add_issue_labels tool. It inherits owner and repo from RepositoryRef, adds issue_number and a labels list, and validates that labels is non-empty.
    class AddIssueLabelsParams(RepositoryRef):
        """Parameters for adding labels to an issue."""
    
        model_config = ConfigDict(strict=True)
        
        issue_number: int = Field(..., description="Issue number")
        labels: List[str] = Field(..., description="Labels to add")
        
        @field_validator('labels')
        @classmethod
        def validate_labels(cls, v):
            """Validate that labels list is not empty."""
            if not v:
                raise ValueError("labels list cannot be empty")
            return v
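
A minimal sketch of how this model behaves, assuming RepositoryRef contributes the owner and repo string fields; the values are hypothetical:

    from pydantic import ValidationError

    # Valid: all four fields present and labels non-empty.
    params = AddIssueLabelsParams(
        owner="octocat",
        repo="hello-world",
        issue_number=42,
        labels=["bug", "help wanted"],
    )

    # Invalid: the field validator rejects an empty labels list.
    try:
        AddIssueLabelsParams(owner="octocat", repo="hello-world",
                             issue_number=42, labels=[])
    except ValidationError as exc:
        print(exc)  # message includes "labels list cannot be empty"

    # With strict=True, issue_number="42" (a string) would also be
    # rejected rather than coerced to an int.
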
  • Registration function that registers add_issue_labels (line 457) along with the other issue tools on the MCP server via register_tools.
    def register(mcp: FastMCP) -> None:
        """Register all issue tools with the MCP server.
        
        Args:
            mcp: The MCP server instance
        """
        from pygithub_mcp_server.tools import register_tools
        
        # List of all issue tools to register
        issue_tools = [
            create_issue,
            list_issues,
            get_issue,
            update_issue,
            add_issue_comment,
            list_issue_comments,
            update_issue_comment,
            delete_issue_comment,
            add_issue_labels,
            remove_issue_label,
        ]
        
        register_tools(mcp, issue_tools)
        logger.debug(f"Registered {len(issue_tools)} issue tools")
  • Helper function implementing the GitHub API interaction: it retrieves the repository and issue, calls PyGithub's issue.add_to_labels(*labels), re-fetches the issue to get the updated labels, and returns them as a converted list.
    def add_issue_labels(params: AddIssueLabelsParams) -> List[Dict[str, Any]]:
        """Add labels to an issue.
    
        Args:
            params: Validated parameters for adding labels to an issue
    
        Returns:
            Updated list of labels from GitHub API
    
        Raises:
            GitHubError: If the API request fails
        """
        try:
            client = GitHubClient.get_instance()
            repository = client.get_repo(f"{params.owner}/{params.repo}")
            issue = repository.get_issue(params.issue_number)
    
            # Add labels to the issue
            issue.add_to_labels(*params.labels)
    
            # Get fresh issue data to get updated labels
            updated_issue = repository.get_issue(params.issue_number)
            return [convert_label(label) for label in updated_issue.labels]
    
        except GithubException as e:
            raise GitHubClient.get_instance()._handle_github_exception(e)
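
For comparison, a minimal sketch of the same interaction performed directly with PyGithub, outside the server's GitHubClient wrapper; the token and repository values are placeholders:

    from github import Auth, Github

    gh = Github(auth=Auth.Token("ghp_example"))  # placeholder token
    issue = gh.get_repo("octocat/hello-world").get_issue(number=42)
    issue.add_to_labels("bug", "help wanted")  # additive: existing labels are kept

    # Re-fetch the issue to observe the updated label set, mirroring the
    # helper's re-read above.
    refreshed = gh.get_repo("octocat/hello-world").get_issue(number=42)
    print([label.name for label in refreshed.labels])
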
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but offers minimal behavioral insight. It names the action ('Add labels') and the return value ('Updated list of labels') but says nothing about required permissions, error conditions (e.g., invalid labels), idempotency, or rate limits. This is inadequate for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured, with clear sections for purpose, args, and returns. It front-loads the core action, and each sentence serves a purpose without redundancy. However, the 'Args' section could be integrated into the prose rather than presented as a separate list.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with no annotations, no output schema, and 0% schema description coverage, the description is incomplete. It covers basic purpose and parameters but lacks critical context: error handling, authentication requirements, side effects (e.g., notifications), and detailed return format beyond 'Updated list of labels'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description compensates by listing all four parameters (owner, repo, issue_number, labels) with brief explanations. However, it adds no meaningful semantics beyond what is inferable from the parameter names (e.g., the expected format of 'owner', or what the 'labels' array should contain), leaving gaps in understanding parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Add') and resource ('labels to an issue'), making the purpose immediately understandable. It distinguishes itself from sibling tools like 'remove_issue_label' by specifying the additive action, though it doesn't explicitly contrast with other label-related tools that might exist.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description implies it is for adding labels to issues, but it doesn't mention prerequisites (e.g., the issue must exist), when to choose it over 'update_issue' (which might also handle labels), or constraints such as which labels are available in the repository.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
