github
Server Details
GitHub MCP — wraps the GitHub public REST API (no auth required for public endpoints)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-github
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.3/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose: get_repo retrieves repository details, get_user fetches user profiles, list_repo_issues handles issue listings, and search_repos performs repository searches. There is no overlap in functionality, making tool selection straightforward for an agent.
All tool names follow a consistent verb_noun pattern (get_repo, get_user, list_repo_issues, search_repos) using snake_case. This uniformity enhances readability and predictability, with no deviations in naming conventions.
With only 4 tools, the server feels somewhat thin for a GitHub integration, lacking common operations like creating or updating repositories, issues, or pull requests. While the tools are useful, the scope is limited compared to typical GitHub API coverage.
The tool set is severely incomplete for GitHub operations, missing essential CRUD actions such as create_repo, update_issue, or create_pull_request. This will likely cause agent failures when attempting full workflows, as it only supports read-only queries and searches.
Available Tools
4 tools

get_repo (Grade: A)
Get full details for a specific GitHub repository by owner and repo name. Returns description, stars, forks, language, topics, license, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| owner | Yes | Repository owner (user or org), e.g. "facebook" | |
| repo | Yes | Repository name, e.g. "react" | |
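As a rough sketch of what this tool likely wraps, the standard GitHub "get a repository" endpoint is `GET /repos/{owner}/{repo}` (the listing does not confirm the exact endpoint, and the `repo_url` helper below is hypothetical):

```python
from urllib.parse import quote

def repo_url(owner: str, repo: str) -> str:
    # Assumed underlying endpoint: GET https://api.github.com/repos/{owner}/{repo}
    # quote() guards against characters that would break the URL path.
    return f"https://api.github.com/repos/{quote(owner)}/{quote(repo)}"

print(repo_url("facebook", "react"))
# → https://api.github.com/repos/facebook/react
```

Unauthenticated requests to this endpoint work for public repositories but are subject to GitHub's lower unauthenticated rate limit.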
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a read operation ('Get') and lists example return fields, but does not cover aspects like rate limits, authentication needs, error handling, or pagination. The description adds basic context but lacks depth for a tool with no annotation support.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the action and parameters, and the second lists example return data. Every sentence adds value without redundancy, and it is front-loaded with the core purpose. No wasted words or unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides basic purpose and parameter context but lacks completeness. It does not explain the full return structure, error cases, or behavioral traits like rate limits. For a read tool with 2 parameters, this is adequate but has clear gaps in operational guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear documentation of both required parameters (owner and repo) including examples. The description adds minimal value beyond the schema by mentioning these parameters in context ('by owner and repo name'), but does not provide additional syntax, format details, or constraints. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get full details'), resource ('GitHub repository'), and scope ('by owner and repo name'), distinguishing it from siblings like get_user (user-focused) and list_repo_issues (issue-focused). It provides concrete examples of returned data like stars and language, making the purpose explicit and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving detailed repository information when owner and repo name are known, but does not explicitly state when to use this tool versus alternatives like search_repos (for broader searches) or get_user (for user data). No exclusions or prerequisites are mentioned, leaving some ambiguity in tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_user (Grade: C)
Get the public profile of a GitHub user. Returns login, name, bio, company, location, public repos count, followers, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| username | Yes | GitHub username, e.g. "torvalds" | |
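This tool presumably maps onto GitHub's public "get a user" endpoint, `GET /users/{username}` (an assumption, not stated in the listing; `user_url` is a hypothetical helper):

```python
from urllib.parse import quote

def user_url(username: str) -> str:
    # Assumed underlying endpoint: GET https://api.github.com/users/{username}
    return f"https://api.github.com/users/{quote(username)}"

print(user_url("torvalds"))
# → https://api.github.com/users/torvalds
```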
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool returns public profile data, implying a read-only operation, but doesn't specify authentication needs, rate limits, error conditions, or whether it's safe for repeated use. This leaves significant gaps in understanding the tool's behavior beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, starting with the core purpose in the first sentence. The second sentence efficiently lists key return fields without unnecessary elaboration. There's minimal waste, though it could be slightly more structured by explicitly separating purpose from output details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose and output fields but lacks behavioral context like authentication or error handling. Without annotations or an output schema, more detail on usage and limitations would improve completeness for agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'username' parameter clearly documented. The description doesn't add any parameter-specific details beyond what the schema provides, such as format constraints or examples. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description doesn't enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get the public profile of a GitHub user.' It specifies the verb ('Get') and resource ('public profile of a GitHub user'), making the action and target explicit. However, it doesn't distinguish this tool from potential siblings like 'get_repo' beyond the resource type, which keeps it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions what data is returned but offers no context on prerequisites, limitations, or comparisons to sibling tools like 'get_repo' or 'search_repos'. This lack of usage context leaves the agent without clear direction for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_repo_issues (Grade: C)
List issues for a GitHub repository. Returns title, number, state, labels, and created_at for each issue.
| Name | Required | Description | Default |
|---|---|---|---|
| repo | Yes | Repository name | |
| owner | Yes | Repository owner (user or org) | |
| state | No | Filter by issue state: open, closed, or all | open |
| per_page | No | Number of issues to return (max 30) | 10 |
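The likely underlying endpoint is GitHub's "list repository issues", `GET /repos/{owner}/{repo}/issues`, with `state` and `per_page` passed as query parameters (the endpoint mapping and the `issues_url` helper are assumptions for illustration):

```python
from urllib.parse import quote, urlencode

def issues_url(owner: str, repo: str, state: str = "open", per_page: int = 10) -> str:
    # The tool caps per_page at 30, mirroring the documented maximum.
    per_page = min(per_page, 30)
    base = f"https://api.github.com/repos/{quote(owner)}/{quote(repo)}/issues"
    return base + "?" + urlencode({"state": state, "per_page": per_page})

print(issues_url("facebook", "react", state="closed", per_page=50))
# → https://api.github.com/repos/facebook/react/issues?state=closed&per_page=30
```

Note that GitHub's REST API returns pull requests alongside issues on this endpoint; whether the tool filters them out is not documented in the listing.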
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return fields (title, number, state, labels, created_at) but fails to cover critical aspects like pagination behavior (implied by 'per_page' parameter), rate limits, authentication needs, or error handling, leaving significant gaps for a tool that interacts with an external API like GitHub.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and key return fields, with no wasted words. It is appropriately sized for the tool's complexity, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (interacting with GitHub API, 4 parameters, no output schema), the description is incomplete. It lacks details on output structure beyond listed fields, pagination, error cases, or API constraints, which are crucial for effective use without annotations or output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting all parameters clearly. The description adds no additional meaning beyond the schema, such as explaining parameter interactions or usage examples, so it meets the baseline for high schema coverage without compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('issues for a GitHub repository'), making the purpose specific and understandable. However, it does not explicitly differentiate from sibling tools like 'get_repo' or 'search_repos', which might also involve repository data, so it misses full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as 'search_repos' or 'get_repo', nor does it mention any prerequisites or exclusions. It lacks explicit context for tool selection, leaving usage implied at best.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_repos (Grade: A)
Search GitHub repositories by keyword. Returns name, full_name, description, stars, forks, language, and URL for the top results.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query string (e.g., "react hooks", "cli tool language:go") | |
| sort | No | Sort results by: stars, forks, or updated | stars |
| per_page | No | Number of results to return (max 30) | 10 |
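GitHub's repository search endpoint, `GET /search/repositories`, takes the query as `q` and supports `sort` and `per_page`, which matches this tool's parameters closely (still an assumption about the implementation; `search_url` is hypothetical). Qualifiers like `language:go` ride along inside the query string:

```python
from urllib.parse import urlencode

def search_url(query: str, sort: str = "stars", per_page: int = 10) -> str:
    # Assumed underlying endpoint: GET https://api.github.com/search/repositories
    per_page = min(per_page, 30)  # mirror the tool's documented cap
    return "https://api.github.com/search/repositories?" + urlencode(
        {"q": query, "sort": sort, "per_page": per_page}
    )

print(search_url("cli tool language:go"))
```

`urlencode` percent-encodes the qualifier colon and turns spaces into `+`, which is what the search API expects.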
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return format (name, full_name, etc.) and scope ('top results'), which adds value beyond the schema. However, it lacks details on rate limits, authentication needs, pagination, or error handling, which are important for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and resource, followed by key return details. Every word earns its place with no redundancy or unnecessary elaboration, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search with 3 parameters) and no annotations or output schema, the description is reasonably complete. It covers the purpose, resource, and return fields, but lacks behavioral details like rate limits or error handling. It's adequate for basic use but could be more comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters (query, sort, per_page). The description adds no additional parameter semantics beyond what's in the schema, such as examples or constraints not covered. Baseline 3 is appropriate as the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search GitHub repositories by keyword') and resource ('GitHub repositories'), distinguishing it from sibling tools like get_repo (fetch single repo), get_user (user info), and list_repo_issues (issue listing). It specifies the scope ('top results') and output fields, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for keyword-based repository searches but doesn't explicitly state when to use this tool versus alternatives like get_repo (for specific repos) or list_repo_issues (for issues). No exclusions or prerequisites are mentioned, leaving some ambiguity about optimal use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
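Before publishing, a quick local sanity check catches malformed JSON or a missing maintainer email (a minimal sketch; the `draft` string stands in for your actual file, and `your-email@example.com` is the placeholder from the structure above):

```python
import json

# Draft of the claim file to be served at /.well-known/glama.json on your domain.
draft = """
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
"""

data = json.loads(draft)  # raises json.JSONDecodeError if the file is malformed
assert "@" in data["maintainers"][0]["email"]  # must match your Glama account email
print("glama.json draft parses and has a maintainer email")
```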
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:

- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.