
Server Details

GitHub MCP — wraps the GitHub public REST API (no auth required for public endpoints)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-github
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.7/5 across 8 of 8 tools scored. Lowest: 2.9/5.

Server Coherence: B
Disambiguation: 3/5

Most tools have distinct purposes (e.g., get_repo vs. search_repos, remember vs. recall), but discover_tools overlaps conceptually with search_repos as both involve searching, which could cause confusion. The memory tools (remember, recall, forget) are clearly separate from GitHub-specific tools, but the mix of domains (GitHub operations and memory management) creates some ambiguity in the overall set.

Naming Consistency: 3/5

The naming is mixed: some tools follow a verb_noun pattern (e.g., get_repo, list_repo_issues, search_repos), while others use single verbs (e.g., forget, recall, remember) or a different style (discover_tools). This inconsistency makes the set less predictable, though the names are still readable and not chaotic.

Tool Count: 4/5

With 8 tools, the count is reasonable for a server named 'github' that includes both GitHub operations and memory management. It's slightly over-scoped due to the mixed domains, but each tool has a clear role, and the number is manageable without being overwhelming.

Completeness: 2/5

For GitHub operations, there are significant gaps: tools cover reading (get_repo, get_user, list_repo_issues, search_repos) but lack create, update, or delete operations (e.g., no create_issue, update_repo). The memory tools provide basic CRUD (remember, recall, forget), but the overall surface is incomplete for typical GitHub workflows, likely causing agent failures in interactive tasks.

Available Tools

9 tools
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema):
- question (required): Your question or request in natural language
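As a concrete illustration, here is a minimal sketch of calling this tool from TypeScript, assuming the official @modelcontextprotocol/sdk client and the Streamable HTTP transport listed above; the server URL is a placeholder, since the listing does not show the endpoint.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder URL: the listing does not show the actual endpoint.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.invalid/mcp"),
);
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// ask_pipeworx takes a single required string argument, `question`.
const result = await client.callTool({
  name: "ask_pipeworx",
  arguments: { question: "What is the US trade deficit with China?" },
});
console.log(result.content);
```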
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it picks the right tool and fills arguments automatically, handles natural language input, and returns results. However, it lacks details on limitations (e.g., rate limits, error handling, or data source constraints), which prevents a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core functionality, followed by practical benefits and concrete examples. Every sentence adds value: the first defines the purpose, the second explains the automation, and the third provides usage examples. It's efficiently structured without wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (natural language querying with automated tool selection) and lack of annotations or output schema, the description does well by explaining the process and providing examples. However, it doesn't cover potential outputs or error cases, leaving some gaps in contextual understanding for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single 'question' parameter as 'Your question or request in natural language.' The description adds minimal value beyond this by reiterating 'plain English' and providing examples, but doesn't explain parameter nuances like length limits or formatting. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask') and resource ('answer from data source'), and distinguishes itself from siblings by emphasizing natural language interaction without needing to browse tools or learn schemas. The examples further clarify the scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' This provides clear guidance to use ask_pipeworx for natural language queries instead of other tools that might require specific parameters or schemas, effectively differentiating it from siblings like discover_tools or search_repos.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema):
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
- limit (optional): Maximum number of tools to return (default 20, max 50)
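For reference, a sketch of the underlying JSON-RPC tools/call payload, using standard MCP framing; the argument values are illustrative.

```typescript
// Standard MCP tools/call framing; omit `limit` to accept the default of 20.
const discoverToolsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "discover_tools",
    arguments: {
      query: "find trade data between countries", // natural language, per the schema
      limit: 5, // optional; default 20, max 50
    },
  },
};
```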
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns the 'most relevant tools' (implying ranking/relevance scoring) and has a default/max limit (implied by the schema), but doesn't mention behavioral aspects like rate limits, authentication needs, or error handling. It adds some context but lacks comprehensive behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, and the second provides crucial usage guidelines. Every sentence earns its place with no wasted words, making it highly efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search functionality with 2 parameters), no annotations, and no output schema, the description is reasonably complete. It covers purpose, usage context, and high-level behavior, but could benefit from more details on output format or error cases to be fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('query' and 'limit') thoroughly. The description doesn't add any parameter-specific semantics beyond what's in the schema (e.g., it doesn't explain how the query is processed or the impact of the limit). Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resource ('tool catalog'), and distinguishes it from siblings by specifying it's for discovering tools rather than directly accessing repositories or users. It explicitly mentions returning 'most relevant tools with names and descriptions'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear context about when to use it (large tool catalog, discovery phase) and implies alternatives (direct tool calls) when the catalog is smaller or tools are already known.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: C

Delete a stored memory by key.

Parameters (JSON Schema):
- key (required): Memory key to delete
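Because the Behavior notes below flag undocumented reversibility, a payload sketch is worth marking as destructive; the key value is illustrative, borrowed from remember's schema examples.

```typescript
// Destructive call: the description does not say whether deletion is
// reversible, so treat it as permanent.
const forgetRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "forget",
    arguments: { key: "subject_property" }, // illustrative key
  },
};
```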
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. While 'Delete' implies a destructive mutation, it doesn't disclose whether this operation is reversible, what permissions are required, whether there are confirmation prompts, or what happens on success/failure. For a destructive tool with zero annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that communicates the core purpose without any wasted words. It's appropriately sized for a simple tool and front-loads the essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive mutation tool with no annotations and no output schema, the description is incomplete. It doesn't address behavioral aspects like irreversibility, error conditions, or response format, leaving significant gaps for an AI agent to understand how to use this tool safely and effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the 'key' parameter is fully documented in the schema), so the baseline is 3. The description adds no additional parameter information beyond what's already in the schema, maintaining this baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and resource ('a stored memory by key'), making the purpose immediately understandable. However, it doesn't differentiate this tool from potential siblings like 'recall' or 'remember' that might also interact with stored memories, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'recall' (likely to retrieve memories) and 'remember' (likely to store memories), there's no indication of when deletion is appropriate versus retrieval or storage operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_repo: A

Get full details for a specific repository. Returns description, stars, forks, language, topics, license, and more. Specify owner and repo name (e.g., owner="torvalds", repo="linux").

Parameters (JSON Schema):
- owner (required): Repository owner (user or org), e.g. "facebook"
- repo (required): Repository name, e.g. "react"
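Both parameters are required, so a minimal call is just the owner/repo pair from the description's example. A payload sketch using standard MCP framing:

```typescript
const getRepoRequest = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: {
    name: "get_repo",
    arguments: { owner: "torvalds", repo: "linux" }, // both fields required
  },
};
```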
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates this is a read operation ('Get') and lists example return fields, but does not cover aspects like rate limits, authentication needs, error handling, or pagination. The description adds basic context but lacks depth for a tool with no annotation support.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the action and parameters, and the second lists example return data. Every sentence adds value without redundancy, and it is front-loaded with the core purpose. No wasted words or unnecessary details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides basic purpose and parameter context but lacks completeness. It does not explain the full return structure, error cases, or behavioral traits like rate limits. For a read tool with 2 parameters, this is adequate but has clear gaps in operational guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation of both required parameters (owner and repo) including examples. The description adds minimal value beyond the schema by mentioning these parameters in context ('Specify owner and repo name'), but does not provide additional syntax, format details, or constraints. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get full details'), resource ('GitHub repository'), and scope ('by owner and repo name'), distinguishing it from siblings like get_user (user-focused) and list_repo_issues (issues-focused). It provides concrete examples of returned data like stars and language, making the purpose explicit and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving detailed repository information when owner and repo name are known, but does not explicitly state when to use this tool versus alternatives like search_repos (for broader searches) or get_user (for user data). No exclusions or prerequisites are mentioned, leaving some ambiguity in tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_user: C

Get a GitHub user's public profile info. Returns name, bio, company, location, public repo count, followers, and social links. Specify username (e.g., username="torvalds").

Parameters (JSON Schema):
- username (required): GitHub username, e.g. "torvalds"
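Because the description does not cover failure cases (see Behavior below), a caller may want to check the standard MCP isError flag on the result. A sketch assuming the official TypeScript SDK and a placeholder server URL:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://example.invalid/mcp")), // placeholder URL
);

const res = await client.callTool({
  name: "get_user",
  arguments: { username: "torvalds" }, // example from the schema
});
// Unknown-user behavior is undocumented, so check the error flag
// rather than assuming success.
if (res.isError) {
  console.error("get_user failed:", res.content);
} else {
  console.log(res.content);
}
```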
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool returns public profile data, implying a read-only operation, but doesn't specify authentication needs, rate limits, error conditions, or whether it's safe for repeated use. This leaves significant gaps in understanding the tool's behavior beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, starting with the core purpose in the first sentence. The second sentence efficiently lists key return fields without unnecessary elaboration. There's minimal waste, though it could be slightly more structured by explicitly separating purpose from output details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose and output fields but lacks behavioral context like authentication or error handling. Without annotations or an output schema, more detail on usage and limitations would improve completeness for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'username' parameter clearly documented. The description doesn't add any parameter-specific details beyond what the schema provides, such as format constraints or examples. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description doesn't enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get a GitHub user's public profile info.' It specifies the verb ('Get') and resource (a user's public profile), making the action and target explicit. However, it doesn't distinguish this tool from potential siblings like 'get_repo' beyond the resource type, which keeps it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions what data is returned but offers no context on prerequisites, limitations, or comparisons to sibling tools like 'get_repo' or 'search_repos'. This lack of usage context leaves the agent without clear direction for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_repo_issues: C

List issues for a repository to track bugs and features. Returns issue title, number, state (open/closed), labels, and creation date. Specify owner and repo name (e.g., owner="torvalds", repo="linux").

Parameters (JSON Schema):
- owner (required): Repository owner (user or org)
- repo (required): Repository name
- state (optional): Filter by issue state: open, closed, or all (default: open)
- per_page (optional): Number of issues to return (default 10, max 30)
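A payload sketch showing the optional filters; values are illustrative, and omitting state and per_page falls back to the documented defaults (open, 10).

```typescript
const listIssuesRequest = {
  jsonrpc: "2.0",
  id: 5,
  method: "tools/call",
  params: {
    name: "list_repo_issues",
    arguments: {
      owner: "torvalds",
      repo: "linux",
      state: "closed", // optional; "open" (default), "closed", or "all"
      per_page: 30, // optional; default 10, capped at 30
    },
  },
};
```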
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return fields (title, number, state, labels, creation date) but fails to cover critical aspects like pagination behavior (implied by the 'per_page' parameter), rate limits, authentication needs, or error handling, leaving significant gaps for a tool that interacts with an external API like GitHub.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficient, front-loading the core purpose and key return fields with no wasted words. It is appropriately sized for the tool's complexity, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (interacting with GitHub API, 4 parameters, no output schema), the description is incomplete. It lacks details on output structure beyond listed fields, pagination, error cases, or API constraints, which are crucial for effective use without annotations or output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, documenting all parameters clearly. The description adds no additional meaning beyond the schema, such as explaining parameter interactions or usage examples, so it meets the baseline for high schema coverage without compensating value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('issues for a GitHub repository'), making the purpose specific and understandable. However, it does not explicitly differentiate from sibling tools like 'get_repo' or 'search_repos', which might also involve repository data, so it misses full sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as 'search_repos' or 'get_repo', nor does it mention any prerequisites or exclusions. It lacks explicit context for tool selection, leaving usage implied at best.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema):
- key (optional): Memory key to retrieve (omit to list all keys)
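The optional key gives this tool two modes, which a payload sketch makes concrete; the key value is illustrative.

```typescript
// Mode 1: fetch a single memory by key.
const recallOne = {
  jsonrpc: "2.0",
  id: 6,
  method: "tools/call",
  params: { name: "recall", arguments: { key: "target_ticker" } },
};

// Mode 2: omit `key` entirely to list all stored memory keys.
const recallAll = {
  jsonrpc: "2.0",
  id: 7,
  method: "tools/call",
  params: { name: "recall", arguments: {} },
};
```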
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: the tool can retrieve individual memories or list all, works across sessions, and accesses previously stored data. However, it doesn't mention error handling, performance characteristics, or data format details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each serve distinct purposes: the first explains the dual functionality, the second provides usage context. Every word earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single optional parameter, no output schema, no annotations), the description is nearly complete. It explains what the tool does, when to use it, and parameter behavior. The main gap is lack of information about return format or error cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the baseline is 3. The description adds meaningful context by explaining that omitting the key parameter triggers listing all memories, which clarifies the optional parameter's semantic effect beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes itself from siblings by mentioning 'context you saved earlier', which relates to the sibling tool 'remember' and establishes a clear relationship.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use the tool ('retrieve context you saved earlier in the session or in previous sessions') and when to omit parameters ('omit key to list all keys'). It distinguishes from alternatives by referencing 'remember' as the complementary save operation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema):
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
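A payload sketch with illustrative values; note the persistence caveat from the description, which matters when deciding what to store.

```typescript
const rememberRequest = {
  jsonrpc: "2.0",
  id: 8,
  method: "tools/call",
  params: {
    name: "remember",
    arguments: {
      key: "user_preference", // example key from the schema
      value: "prefers metric units", // any text, per the schema
    },
  },
};
// Persistence depends on auth: authenticated users keep memories across
// sessions; anonymous sessions expire after 24 hours.
```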
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it's a write operation (implied by 'store'), has authentication-dependent persistence (authenticated vs. anonymous), and specifies session scope. However, it doesn't mention potential limitations like storage limits or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and front-loaded: the first sentence states the core purpose, the second provides usage context, and the third adds important behavioral detail. Every sentence earns its place with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter write tool with no annotations and no output schema, the description does well by explaining purpose, usage context, and persistence behavior. It could be more complete by mentioning what happens on duplicate keys or storage limits, but covers the essential context given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain key constraints or value formatting). Baseline 3 is appropriate when the schema does all the work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('store') and resource ('key-value pair in your session memory'), and distinguishes it from sibling tools like 'recall' (which likely retrieves) and 'forget' (which likely deletes). It explicitly mentions what gets stored and where.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('save intermediate findings, user preferences, or context across tool calls') and includes important context about authentication differences (persistent vs. 24-hour memory), which helps distinguish it from alternatives like 'recall' for retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_repos: A

Search GitHub repositories by keyword. Returns repo name, description, star count, forks, primary language, and URL. Use when exploring projects or finding code implementations.

Parameters (JSON Schema):
- query (required): Search query string (e.g., "react hooks", "cli tool language:go")
- sort (optional): Sort results by: stars, forks, or updated (default: stars)
- per_page (optional): Number of results to return (default 10, max 30)
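The schema's "cli tool language:go" example suggests the query accepts GitHub search qualifiers, though the description does not confirm the full qualifier syntax. A payload sketch:

```typescript
const searchReposRequest = {
  jsonrpc: "2.0",
  id: 9,
  method: "tools/call",
  params: {
    name: "search_repos",
    arguments: {
      query: "cli tool language:go", // qualifier syntax, per the schema example
      sort: "updated", // optional; stars (default), forks, or updated
      per_page: 5, // optional; default 10, max 30
    },
  },
};
```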
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return fields (repo name, description, star count, forks, language, URL), which adds value beyond the schema. However, it lacks details on rate limits, authentication needs, pagination, or error handling, which are important for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description front-loads the core action and resource, followed by key return details and usage guidance. Every word earns its place with no redundancy or unnecessary elaboration, making it concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with 3 parameters) and no annotations or output schema, the description is reasonably complete. It covers the purpose, resource, and return fields, but lacks behavioral details like rate limits or error handling. It's adequate for basic use but could be more comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all three parameters (query, sort, per_page). The description adds no additional parameter semantics beyond what's in the schema, such as examples or constraints not covered. Baseline 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search GitHub repositories by keyword') and resource ('GitHub repositories'), distinguishing it from sibling tools like get_repo (fetch single repo), get_user (user info), and list_repo_issues (issue listing). It specifies the output fields, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for keyword-based repository searches but doesn't explicitly state when to use this tool versus alternatives like get_repo (for specific repos) or list_repo_issues (for issues). No exclusions or prerequisites are mentioned, leaving some ambiguity about optimal use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
