
Server Details

GitHub Private MCP Pack — access private repos, org data via OAuth.

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-github_private
GitHub Stars: 0

Tool Descriptions: B

Average 3.6/5 across 11 of 11 tools scored. Lowest: 2.9/5.

Server Coherence: B
Disambiguation: 4/5

The tools are mostly distinct: GitHub tools have clear prefixes (gh_), and memory tools (remember/recall/forget) are separate. However, 'ask_pipeworx' and 'discover_tools' overlap in purpose (both help find information), and 'ask_pipeworx' claims to pick the right tool automatically, which could conflict with manual tool selection.

Naming Consistency: 2/5

The naming is inconsistent: GitHub tools use 'gh_' prefix with mixed verb-noun order (e.g., 'gh_get_file', 'gh_list_issues'), but memory tools use plain verbs without prefix ('remember', 'recall', 'forget'). Additionally, 'ask_pipeworx' and 'discover_tools' break the pattern entirely.

Tool Count: 4/5

With 11 tools, the count is reasonable for a server that combines GitHub operations and memory management. The mix is slightly heterogeneous but not excessive.

Completeness: 3/5

The GitHub tools cover basic repository retrieval and listing issues/pulls, but lack write operations (create, update, delete) for issues, pulls, and files. The memory tools provide basic CRUD (create, read, delete) but miss update. The 'ask_pipeworx' tool's vague description makes it hard to assess domain coverage.

Available Tools

11 tools
ask_pipeworx (A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
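For context, a call to this tool over MCP's JSON-RPC transport might look like the following sketch. The shape follows the standard MCP tools/call request; the question value is illustrative, not prescribed by the server.

```python
import json

# Sketch of an MCP "tools/call" request for ask_pipeworx (JSON-RPC 2.0).
# The question text is illustrative; any natural-language request is valid.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}

print(json.dumps(request, indent=2))
```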
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses that the tool internally selects the best source and fills arguments, and returns results directly. This is fairly transparent, though it does not detail limitations, error handling, or data recency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise, using three sentences and three examples to convey the tool's purpose and usage. No redundant words; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description is sufficiently complete. It explains what the tool does and how to use it, with examples. No major gaps are evident.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the description adds significant meaning by explaining the single parameter 'question' can be any natural language request, with examples illustrating its use. This goes beyond the schema's terse description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool accepts natural language questions and returns answers by selecting the best data source. It provides concrete examples showing the range of queries supported, leaving no ambiguity about the tool's function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly says to use this tool when you have a question in plain English, avoiding the need to browse other tools. However, it does not explicitly state when not to use it or mention alternatives for specific tasks, though the examples give good guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
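The schema's constraints on limit (default 20, max 50) suggest client-side clamping before the call. A hypothetical helper, purely for illustration (the function name is invented, not part of the server):

```python
def discover_tools_args(query: str, limit: int = 20) -> dict:
    # Build an arguments dict for discover_tools.
    # limit defaults to 20 and is capped at 50, per the schema.
    return {"query": query, "limit": min(limit, 50)}

# An over-large limit is clamped to the schema maximum.
args = discover_tools_args("find trade data between countries", limit=80)
print(args)
```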
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It correctly indicates the tool searches a catalog and returns results, implying read-only behavior. However, it does not disclose any limitations (e.g., search accuracy, indexing delays) or state that it is safe to call multiple times. A score of 3 is adequate given no contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, all essential: first states what it does, second states what it returns, third states when to use it. No wasted words. Front-loaded with the key action ('search').

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 params, 100% schema coverage, no output schema needed for a search result tool), the description is complete. It covers purpose, usage timing, and return value. No additional information is necessary for an agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already explains both parameters. The description adds no additional semantic value beyond what the schema provides (e.g., default and max for limit, example queries for query). Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('search', 'returns') and a clear resource ('Pipeworx tool catalog'). It distinguishes itself from siblings by explicitly stating its role: find relevant tools among 500+ when you don't know which one to use. No sibling duplicates this purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This tells the agent when to use it (first, when many tools) and implies alternatives (siblings are for specific tasks). Clear and actionable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (C)

Delete a stored memory by key.

Parameters (JSON Schema)
- key (required): Memory key to delete
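Taken together with remember and recall, forget completes a simple key-value lifecycle. A toy in-memory model of that lifecycle, purely illustrative: the real tools persist state server-side, and this sketch assumes forget fails silently on a missing key, which the tool's description does not actually specify.

```python
# Toy in-memory model of the remember/recall/forget lifecycle.
store = {}

def remember(key, value):
    store[key] = value  # overwrites silently in this sketch

def recall(key=None):
    # With a key: retrieve that memory. Without: list everything stored.
    return store if key is None else store.get(key)

def forget(key):
    store.pop(key, None)  # assumed silent on missing keys

remember("target_ticker", "AAPL")
print(recall("target_ticker"))  # AAPL
forget("target_ticker")
print(recall())  # {}
```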
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. The description is minimal: it does not disclose if deletion is permanent, what happens on missing key (error vs silent fail), or whether it requires authentication. This is insufficient for a destructive operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no fluff. Every word earns its place. Front-loaded with the action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool is a destructive operation with no annotations or output schema, the description is too minimal. It should mention behavior on non-existent keys, side effects, and return value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with description for 'key'. The tool description adds 'by key' which confirms the role, but doesn't add significant new meaning beyond the schema's 'Memory key to delete'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Delete a stored memory by key' clearly states the verb (delete) and resource (stored memory). It is specific and distinct from sibling tools like 'remember' (store) and 'recall' (retrieve).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'recall' or 'remember'. It does not mention prerequisites or conditions for deletion, such as the key existing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gh_get_file (B)

Get file contents from a repository. Specify owner, repo name, and file path (e.g., 'README.md'). Returns raw content and metadata.

Parameters (JSON Schema)
- ref (optional): Branch or commit SHA (default: default branch)
- path (required): File path (e.g., "src/index.ts")
- repo (required): Repository name
- owner (required): Repository owner
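The ref default matters in practice: omitting it targets the repository's default branch. A hypothetical argument builder (the helper name is invented for illustration) making that behavior explicit:

```python
def gh_get_file_args(owner, repo, path, ref=None):
    # ref is optional; leaving it out makes the server read from the
    # repository's default branch, per the schema's stated default.
    args = {"owner": owner, "repo": repo, "path": path}
    if ref is not None:
        args["ref"] = ref
    return args

print(gh_get_file_args("octocat", "Hello-World", "README.md"))
print(gh_get_file_args("octocat", "Hello-World", "README.md", ref="main"))
```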
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description is minimal, but annotations are empty, so it carries the burden. It states 'Get file contents', implying a read operation, but doesn't disclose any behavioral traits like whether it returns raw text or base64, or any error handling. It's adequate but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence that directly states the tool's function. No unnecessary words, perfectly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (4 params, no output schema), the description is arguably complete enough. However, it could mention that it returns file contents (e.g., raw or base64) to help the agent understand the output format. Adequate but not excellent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds no parameter-specific info beyond the schema, which already describes owner, repo, path, and ref. No extra value from description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves file contents from a repository, which is a specific action on a resource. However, it doesn't differentiate from sibling tools like gh_get_repo (which gets repository metadata) or others, so it loses a point.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. For example, it doesn't mention that this tool is for file content, while gh_get_repo is for repository details, leaving the agent to infer from names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gh_get_repo (B)

Get detailed info for a specific repository. Returns description, language, stars, forks, open issues, default branch, and access level.

Parameters (JSON Schema)
- repo (required): Repository name
- owner (required): Repository owner
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the burden. It states that the tool works for private repos if the agent has access, which is a behavioral trait. However, it does not disclose other traits like rate limits, pagination, or response size.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that adds value by specifying private repo access. No wasted words, but could be slightly more structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with 2 parameters and no output schema, the description is adequate but not complete. It lacks details about response format or common use cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description does not add extra meaning beyond the schema; it only restates the tool's purpose. No additional parameter details provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool gets repository details and works for private repos. The verb 'get' and resource 'repository details' are specific. However, it does not distinguish from sibling tools like gh_get_file, but the name already clarifies the resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions it works for private repos the agent has access to, implying when to use. But no explicit guidance on when not to use or alternatives among siblings, though sibling names hint at different resources.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gh_list_issues (B)

List issues in a repository. Specify owner and repo name (e.g., owner='octocat', repo='Hello-World'). Returns titles, numbers, status, assignees, and labels.

Parameters (JSON Schema)
- repo (required): Repository name
- owner (required): Repository owner
- state (optional): Filter: open, closed, all (default: open)
- per_page (optional): Results per page (max 100)
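The state enum and per_page cap invite client-side validation before calling. A hypothetical helper sketching that; the per_page default of 30 below is an assumption (GitHub's usual default), not something the schema states.

```python
VALID_STATES = {"open", "closed", "all"}

def gh_list_issues_args(owner, repo, state="open", per_page=30):
    # state defaults to "open" per the schema; per_page is capped at 100.
    # per_page's default of 30 here is an assumption, not from the schema.
    if state not in VALID_STATES:
        raise ValueError(f"state must be one of {sorted(VALID_STATES)}")
    return {"owner": owner, "repo": repo,
            "state": state, "per_page": min(per_page, 100)}

print(gh_list_issues_args("octocat", "Hello-World", state="all", per_page=250))
```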
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries full burden. It does not disclose behavioral traits like pagination (per_page parameter), filtering (state), or read-only nature. The schema covers parameters but not behavior. However, the tool is simple and 'list' implies read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, concise and front-loaded. No wasted words, but it could benefit from a bit more context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool is straightforward (list issues with filtering/pagination), the description is minimally complete. No output schema, but the return type is implied. Lacks mention of default state filter (open).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds no additional meaning beyond what the schema provides. For a simple list tool, this is adequate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'List issues in a repository', which is a specific verb+resource. However, it does not differentiate from the sibling tool 'gh_list_pulls', which also lists items for a repository, so it loses a point.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies basic usage (list issues for a repo) but provides no guidance on when to use this vs alternatives like gh_list_pulls, or mention of required parameters (owner, repo) which are already in the schema.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gh_list_orgs (C)

List organizations you're a member of. Returns org names, URLs, and your role (owner, member, etc.).

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It only states the action, but does not disclose any behavioral traits like pagination, rate limits, or whether it returns public or private organizations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no waste, front-loaded with the action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description is too minimal. It could mention the return format or authentication context to aid the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with 0 parameters, so baseline is 3. Description does not need to add parameter info, but could mention default behavior or context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'list' and resource 'organizations you belong to'. It distinguishes from siblings like gh_list_repos and gh_list_issues by specifying the resource type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., authentication) or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gh_list_pulls (B)

List pull requests in a repository. Specify owner and repo name (e.g., owner='octocat', repo='Hello-World'). Returns titles, numbers, status, reviewers, and merge state.

Parameters (JSON Schema)
- repo (required): Repository name
- owner (required): Repository owner
- state (optional): Filter: open, closed, all (default: open)
- per_page (optional): Results per page (max 100)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries burden. It does not mention that it only lists pulls, not issues, or that it returns paginated results. However, the schema clarifies state and per_page, somewhat compensating.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words. Could benefit from a second sentence about scope or filtering.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and 4 params, the description is adequate but minimal. It does not explain return format or pagination. With good schema coverage, it's minimally complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds no additional param info beyond the schema. The description's only mention is the general purpose, not parameter specifics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and the resource 'pull requests in a repository'. It distinguishes itself from siblings like gh_list_issues and gh_list_repos by specifying 'pull requests'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like gh_list_issues or search tools. Does not mention that it lists for a specific repo, which is implicit from required params owner and repo.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gh_list_repos (A)

List all your repositories including private ones. Returns repo names, URLs, descriptions, language, stars, and last update time.

Parameters (JSON Schema)
- sort (optional): Sort by: created, updated, pushed, full_name
- per_page (optional): Results per page (max 100)
- visibility (optional): Filter: all, public, private (default: all)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so description carries full burden. It mentions private repos are included, which is helpful for auth context, but doesn't disclose pagination behavior, rate limits, or return structure. Adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, zero waste. Front-loaded with verb and resource, parenthetical clarifies private repos. Perfectly concise for the purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and simple list operation, description is sufficient for basic understanding. However, no mention of return format, ordering default, or pagination behavior. Adequate for a straightforward list but could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description does not add additional meaning beyond schema; it's a generic list. With 3 parameters already well-documented in schema, no extra value needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists repositories including private ones, with the verb 'list' and resource 'repositories'. It distinguishes itself from siblings like 'gh_get_repo' (single repo) and 'gh_list_orgs' (orgs), providing clarity on scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance is provided. The description implies it's for listing user's own repos, but doesn't address alternatives like 'gh_list_orgs' for org repos or filtering via visibility. Usage context is only implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses the dual behavior (retrieve vs list) and the cross-session persistence. Without annotations, this is sufficient for safe use. Could mention that the tool is read-only and non-destructive, but the description implies no side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with action and resource. No fluff, every part is informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with one optional parameter and no output schema, the description is complete. It explains both modes and the persistence scope. Lacks only mention of return format, but output schema absent so not expected.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single optional parameter. Description adds the behavior for omitting key (list all), which enriches understanding beyond the schema description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool retrieves a memory by key or lists all memories when key is omitted. Distinguishes itself from sibling tools like 'remember' (which stores) and 'forget' (which deletes).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says when to use it (to retrieve context saved earlier) and notes the key-omission behavior for listing all memories. No exclusion guidance is needed; it is a straightforward retrieval tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
  key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
  value (required): Value to store (any text: findings, addresses, preferences, notes)
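The store semantics implied by the description, including the 24-hour anonymous-session TTL, can be sketched as follows. The store, the overwrite-on-reuse behavior, and the constant names are assumptions for illustration, not the server's actual implementation:

```python
import time

# Hypothetical session memory; the real server's storage is opaque to clients.
_memory: dict[str, tuple[str, float]] = {}

# The description says anonymous sessions last 24 hours (an assumption that
# this maps to a per-entry TTL; persistence for authenticated users differs).
ANON_TTL_SECONDS = 24 * 60 * 60

def remember(key: str, value: str) -> None:
    """Store a key-value pair. Re-using a key overwrites the previous value;
    the description does not state this, so it is an assumption here."""
    _memory[key] = (value, time.time() + ANON_TTL_SECONDS)

def recall(key: str):
    """Return the stored value, or None if the key is missing or expired."""
    entry = _memory.get(key)
    if entry is None or entry[1] < time.time():
        return None
    return entry[0]
```

Under this sketch, a second remember call with the same key silently replaces the first value, which is one of the undisclosed behaviors the review flags.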
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses persistence behavior (persistent vs. 24-hour TTL) but does not mention memory limits, overwrite behavior, or other side effects. No contradiction arises, since no annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three concise sentences, front-loaded with the core purpose, and every sentence adds value: function, when to use, and persistence details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and simple parameters, the description is nearly complete. It covers purpose, usage, and behavioral aspects. Minor gap: no mention of memory capacity or overwrite behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the schema descriptions already define key and value well. The tool description adds use-case examples for key (e.g., "subject_property") and value (any text), which add moderate but not critical value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool stores a key-value pair in session memory, specifying verb (store), resource (session memory), and purpose (save intermediate findings, user preferences, or context). It distinguishes from sibling tools like recall (which retrieves) and forget (which deletes).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool (to save context across calls) and mentions the session-persistence difference between authenticated and anonymous users. However, it does not explicitly state when not to use it or name alternatives, such as storing data externally.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
