github

Server Details

GitHub MCP — wraps the GitHub public REST API (no auth required for public endpoints)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-github
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.3/5 across 4 of 4 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_repo retrieves repository details, get_user fetches user profiles, list_repo_issues handles issue listings, and search_repos performs repository searches. There is no overlap in functionality, making tool selection straightforward for an agent.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (get_repo, get_user, list_repo_issues, search_repos) using snake_case. This uniformity enhances readability and predictability, with no deviations in naming conventions.

Tool Count: 3/5

With only 4 tools, the server feels somewhat thin for a GitHub integration, lacking common operations like creating or updating repositories, issues, or pull requests. While the tools are useful, the scope is limited compared to typical GitHub API coverage.

Completeness: 2/5

The tool set is severely incomplete for GitHub operations, missing essential CRUD actions such as create_repo, update_issue, or create_pull_request. This will likely cause agent failures when attempting full workflows, as it only supports read-only queries and searches.
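To make the gap concrete, a write operation of the kind the review finds missing could be declared as an MCP tool roughly like this. This is purely illustrative: the tool name, description, and schema are hypothetical, and this server does not ship anything like it.

```python
# Hypothetical MCP tool definition (NOT part of this server) showing
# the shape a missing write operation such as create_issue might take.
create_issue_tool = {
    "name": "create_issue",
    "description": "Create a new issue in a GitHub repository. "
                   "Requires authenticated write access to the repository.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "owner": {"type": "string", "description": "Repository owner (user or org)"},
            "repo": {"type": "string", "description": "Repository name"},
            "title": {"type": "string", "description": "Issue title"},
            "body": {"type": "string", "description": "Issue body (Markdown)"},
        },
        "required": ["owner", "repo", "title"],
    },
}

print(create_issue_tool["name"])  # create_issue
```

Note that unlike the four read-only tools above, such a tool would also need its description to disclose authentication requirements and side effects.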

Available Tools

4 tools
get_repo: A

Get full details for a specific GitHub repository by owner and repo name. Returns description, stars, forks, language, topics, license, and more.

Parameters (JSON Schema)
- repo (required): Repository name, e.g. "react"
- owner (required): Repository owner (user or org), e.g. "facebook"
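Assuming the server simply proxies GitHub's public REST endpoint `GET /repos/{owner}/{repo}` (the natural mapping for this tool, though the listing does not confirm the implementation), the underlying request can be sketched as:

```python
# Sketch of the request get_repo presumably issues, assuming it maps to
# GitHub's public GET /repos/{owner}/{repo} endpoint.
def build_get_repo_url(owner: str, repo: str) -> str:
    """Build the GitHub API URL for a single repository lookup."""
    return f"https://api.github.com/repos/{owner}/{repo}"

print(build_get_repo_url("facebook", "react"))
# https://api.github.com/repos/facebook/react
```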
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a read operation ('Get') and lists example return fields, but does not cover aspects like rate limits, authentication needs, error handling, or pagination. The description adds basic context but lacks depth for a tool with no annotation support.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the action and parameters, and the second lists example return data. Every sentence adds value without redundancy, and it is front-loaded with the core purpose. No wasted words or unnecessary details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides basic purpose and parameter context but lacks completeness. It does not explain the full return structure, error cases, or behavioral traits like rate limits. For a read tool with 2 parameters, this is adequate but has clear gaps in operational guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation of both required parameters (owner and repo) including examples. The description adds minimal value beyond the schema by mentioning these parameters in context ('by owner and repo name'), but does not provide additional syntax, format details, or constraints. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get full details'), resource ('GitHub repository'), and scope ('by owner and repo name'), distinguishing it from siblings like get_user (user-focused) and list_repo_issues (issues-focused). It provides concrete examples of returned data like stars and language, making the purpose explicit and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving detailed repository information when owner and repo name are known, but does not explicitly state when to use this tool versus alternatives like search_repos (for broader searches) or get_user (for user data). No exclusions or prerequisites are mentioned, leaving some ambiguity in tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_user: C

Get the public profile of a GitHub user. Returns login, name, bio, company, location, public repos count, followers, and more.

Parameters (JSON Schema)
- username (required): GitHub username, e.g. "torvalds"
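Assuming this tool maps to GitHub's public `GET /users/{username}` endpoint (unconfirmed by the listing, but the obvious candidate), the request it would issue looks like:

```python
# Sketch of the request get_user presumably issues, assuming it maps to
# GitHub's public GET /users/{username} profile endpoint.
def build_get_user_url(username: str) -> str:
    """Build the GitHub API URL for a public user profile lookup."""
    return f"https://api.github.com/users/{username}"

print(build_get_user_url("torvalds"))
# https://api.github.com/users/torvalds
```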
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool returns public profile data, implying a read-only operation, but doesn't specify authentication needs, rate limits, error conditions, or whether it's safe for repeated use. This leaves significant gaps in understanding the tool's behavior beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, starting with the core purpose in the first sentence. The second sentence efficiently lists key return fields without unnecessary elaboration. There's minimal waste, though it could be slightly more structured by explicitly separating purpose from output details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose and output fields but lacks behavioral context like authentication or error handling. Without annotations or an output schema, more detail on usage and limitations would improve completeness for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'username' parameter clearly documented. The description doesn't add any parameter-specific details beyond what the schema provides, such as format constraints or examples. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description doesn't enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get the public profile of a GitHub user.' It specifies the verb ('Get') and resource ('public profile of a GitHub user'), making the action and target explicit. However, it doesn't distinguish this tool from potential siblings like 'get_repo' beyond the resource type, which keeps it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions what data is returned but offers no context on prerequisites, limitations, or comparisons to sibling tools like 'get_repo' or 'search_repos'. This lack of usage context leaves the agent without clear direction for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_repo_issues: C

List issues for a GitHub repository. Returns title, number, state, labels, and created_at for each issue.

Parameters (JSON Schema)
- repo (required): Repository name
- owner (required): Repository owner (user or org)
- state (optional): Filter by issue state: open, closed, or all (default: open)
- per_page (optional): Number of issues to return (default 10, max 30)
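Assuming the tool forwards to GitHub's public `GET /repos/{owner}/{repo}/issues` endpoint (an assumption; the listing does not document the mapping), a request honoring the documented defaults and the per_page cap can be sketched as:

```python
from urllib.parse import urlencode

# Sketch of the request list_repo_issues presumably issues, assuming it
# maps to GitHub's public GET /repos/{owner}/{repo}/issues endpoint.
def build_list_issues_url(owner: str, repo: str,
                          state: str = "open", per_page: int = 10) -> str:
    # Clamp per_page to the documented maximum of 30.
    params = urlencode({"state": state, "per_page": min(per_page, 30)})
    return f"https://api.github.com/repos/{owner}/{repo}/issues?{params}"

print(build_list_issues_url("facebook", "react", state="closed", per_page=50))
# https://api.github.com/repos/facebook/react/issues?state=closed&per_page=30
```

Note the clamp: a value above 30 is silently reduced, mirroring the "max 30" constraint in the schema.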
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return fields (title, number, state, labels, created_at) but fails to cover critical aspects like pagination behavior (implied by 'per_page' parameter), rate limits, authentication needs, or error handling, leaving significant gaps for a tool that interacts with an external API like GitHub.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and key return fields, with no wasted words. It is appropriately sized for the tool's complexity, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (interacting with GitHub API, 4 parameters, no output schema), the description is incomplete. It lacks details on output structure beyond listed fields, pagination, error cases, or API constraints, which are crucial for effective use without annotations or output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, documenting all parameters clearly. The description adds no additional meaning beyond the schema, such as explaining parameter interactions or usage examples, so it meets the baseline for high schema coverage without compensating value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('issues for a GitHub repository'), making the purpose specific and understandable. However, it does not explicitly differentiate from sibling tools like 'get_repo' or 'search_repos', which might also involve repository data, so it misses full sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as 'search_repos' or 'get_repo', nor does it mention any prerequisites or exclusions. It lacks explicit context for tool selection, leaving usage implied at best.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_repos: A

Search GitHub repositories by keyword. Returns name, full_name, description, stars, forks, language, and URL for the top results.

Parameters (JSON Schema)
- sort (optional): Sort results by: stars, forks, or updated (default: stars)
- query (required): Search query string (e.g., "react hooks", "cli tool language:go")
- per_page (optional): Number of results to return (default 10, max 30)
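Assuming the tool wraps GitHub's public `GET /search/repositories` endpoint (the listing does not confirm this, but the parameter names match that API), a query with proper URL encoding can be sketched as:

```python
from urllib.parse import urlencode

# Sketch of the request search_repos presumably issues, assuming it maps
# to GitHub's public GET /search/repositories endpoint.
def build_search_url(query: str, sort: str = "stars", per_page: int = 10) -> str:
    # Qualifiers like "language:go" must be URL-encoded in the q parameter.
    params = urlencode({"q": query, "sort": sort, "per_page": min(per_page, 30)})
    return f"https://api.github.com/search/repositories?{params}"

print(build_search_url("cli tool language:go"))
# https://api.github.com/search/repositories?q=cli+tool+language%3Ago&sort=stars&per_page=10
```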
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return format (name, full_name, etc.) and scope ('top results'), which adds value beyond the schema. However, it lacks details on rate limits, authentication needs, pagination, or error handling, which are important for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action and resource, followed by key return details. Every word earns its place with no redundancy or unnecessary elaboration, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with 3 parameters) and no annotations or output schema, the description is reasonably complete. It covers the purpose, resource, and return fields, but lacks behavioral details like rate limits or error handling. It's adequate for basic use but could be more comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all three parameters (query, sort, per_page). The description adds no additional parameter semantics beyond what's in the schema, such as examples or constraints not covered. Baseline 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search GitHub repositories by keyword') and resource ('GitHub repositories'), distinguishing it from sibling tools like get_repo (fetch single repo), get_user (user info), and list_repo_issues (issue listing). It specifies the scope ('top results') and output fields, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for keyword-based repository searches but doesn't explicitly state when to use this tool versus alternatives like get_repo (for specific repos) or list_repo_issues (for issues). No exclusions or prerequisites are mentioned, leaving some ambiguity about optimal use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
