
Server Details

PayPal MCP Pack — read-only access to PayPal transactions, orders, invoices, and disputes.

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-paypal
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama MCP Gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade B)

Average 3.6/5 across 10 of 10 tools scored. Lowest: 2.9/5.

Server Coherence (Grade B)
Disambiguation: 3/5

The PayPal-specific tools (paypal_get_invoice, paypal_get_order, paypal_list_disputes, etc.) are clearly distinct, but the Pipeworx tools (ask_pipeworx, discover_tools) and memory tools (forget, recall, remember) belong to a different domain. The 'ask_pipeworx' tool claims to 'pick the right tool' and fill arguments, which could overlap with discover_tools and the PayPal tools themselves, creating potential ambiguity.

Naming Consistency: 3/5

PayPal tools follow a consistent paypal_verb_noun pattern, but Pipeworx tools use different naming (ask_pipeworx, discover_tools) and memory tools use single verbs (forget, recall, remember). This mix of patterns reduces consistency.

Tool Count: 4/5

10 tools is reasonable for a server that combines a generic query engine with PayPal-specific operations. The count is well-scoped, though the inclusion of memory tools (3 tools) and Pipeworx tools (2 tools) alongside 5 PayPal tools feels slightly broad but not excessive.

Completeness: 2/5

For PayPal functionality, only invoices, orders, disputes, and transactions are covered—missing key operations like creating payments, refunds, or managing subscriptions. The Pipeworx side is vague and lacks specific tools beyond discovery and a catch-all query. Memory tools are complete but trivial. The set feels incomplete for a comprehensive PayPal integration.

Available Tools

10 tools
ask_pipeworx (Grade A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema):
question (required): Your question or request in natural language

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that Pipeworx selects the best tool and fills arguments, implying autonomous decision-making. However, it does not mention limitations, potential errors, or what happens if no tool can answer. The description is transparent but could be more detailed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (3 sentences) and front-loaded with the core purpose. The examples add helpful context without being verbose. A slight improvement would be to separate examples more clearly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (it acts as a meta-tool), the description is reasonably complete. It explains its role and provides examples. However, without an output schema, it could mention the format of the answer or potential outcomes (e.g., success, error). The description leaves some uncertainty about what the agent should expect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has one parameter 'question' with 100% description coverage. The description adds value by clarifying that the question should be in natural language and providing examples of valid queries, going beyond the schema's generic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to answer natural language questions using the best available data source. It differentiates itself from siblings by emphasizing its role as a 'concierge' that selects the right tool and fills arguments, contrasting with specific tools like paypal_get_invoice. However, it could more explicitly name sibling tools it might invoke.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage guidance: ask in plain English without needing to browse tools or learn schemas. It gives three examples illustrating suitable queries. It does not explicitly state when not to use it or mention alternatives, but the context of sibling tools suggests specialized tools exist for specific tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
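Since ask_pipeworx exposes a single required `question` argument, invoking it reduces to one MCP `tools/call` request over JSON-RPC 2.0. A minimal sketch of that payload; the `make_tool_call` helper is hypothetical, not part of this server:

```python
import json

def make_tool_call(call_id, name, arguments):
    """Build a JSON-RPC 2.0 tools/call request as MCP clients send it."""
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# ask_pipeworx takes one required parameter: a natural-language question.
request = make_tool_call(1, "ask_pipeworx", {
    "question": "What is the US trade deficit with China?",
})
print(json.dumps(request, indent=2))
```

The same envelope works for every other tool on this server; only the `name` and `arguments` fields change.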

discover_tools (Grade A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema):
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
limit (optional): Maximum number of tools to return (default 20, max 50)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's behavior (returns relevant tools with names and descriptions) but does not mention potential limitations like rate limits, token costs, or whether the search is based on embeddings or keyword matching.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: two sentences, front-loaded with the core action and the key usage directive. Every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is nearly complete. It could optionally mention the return format (e.g., list of tool names and descriptions), but that is already implied.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining that the 'query' parameter should be a natural language description, and the 'limit' parameter has a default and max value, which goes beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to search the Pipeworx tool catalog by describing a need, and it distinguishes itself by directing the agent to call it first when there are 500+ tools available.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use this tool ('call this FIRST when you have 500+ tools available'), providing clear guidance on its primary use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
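A client calling discover_tools should respect the schema's constraints: `query` is required, and `limit` defaults to 20 with a maximum of 50. A small argument-builder sketch (the helper name is an assumption for illustration):

```python
def discover_tools_args(query: str, limit: int = 20) -> dict:
    """Validate arguments for discover_tools per its schema:
    'query' is required and non-empty; 'limit' defaults to 20, max 50."""
    if not query or not query.strip():
        raise ValueError("query is required and must be non-empty")
    return {"query": query, "limit": max(1, min(limit, 50))}

args = discover_tools_args("find trade data between countries")
print(args)  # {'query': 'find trade data between countries', 'limit': 20}
```

Clamping client-side avoids a round trip that the server would likely reject or silently truncate, though the schema does not say which it does.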

forget (Grade B)

Delete a stored memory by key.

Parameters (JSON Schema):
key (required): Memory key to delete

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must carry the burden. It clearly indicates the tool is destructive (deletes data), which is essential behavioral info. However, it does not disclose what happens if the key doesn't exist, whether deletion is irreversible, or any authorization requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no waste. Could be more structured but achieves clarity in minimal space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description is adequate but lacks context about side effects, error handling, or prerequisites. It covers the basic purpose but leaves some gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the description of the 'key' parameter in the schema already explains it. The tool description does not add new semantics beyond 'by key', but given high coverage, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Delete a stored memory by key' clearly states the action (delete) and the resource (memory). It distinguishes from sibling tools like 'recall' and 'remember' by specifying deletion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. No mention of prerequisites or scenarios where deletion is appropriate. The description is minimal.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

paypal_get_invoice (Grade B)

Get full details of a PayPal invoice by ID. Returns line items, amounts, due dates, and payment status.

Parameters (JSON Schema):
invoice_id (required): PayPal invoice ID (e.g., INV2-XXXX-XXXX-XXXX-XXXX)
_clientId (required): PayPal app Client ID
_clientSecret (required): PayPal app Client Secret
_sandbox (optional): Use sandbox environment (default: false)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description indicates it is a read operation (get details). Annotations are empty, so description carries burden. It doesn't mention any side effects, authentication requirements beyond schema, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, concise and front-loaded with key action. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool is simple (get by ID), the description is adequate. There is no output schema, but the return type is implied by the purpose. It could note that the invoice_id format shown is only an example.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so schema already documents parameters. Description adds no extra meaning beyond what schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description states verb 'get details' and resource 'PayPal invoice by its ID', which is clear. However, it does not differentiate from sibling tools like paypal_get_order or paypal_list_invoices.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Siblings include paypal_get_order (for orders) and paypal_list_invoices (for listing), but description does not clarify.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
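The schema's `invoice_id` example (INV2-XXXX-XXXX-XXXX-XXXX) suggests a checkable shape. A sketch that assembles the tool's arguments with a light pre-flight check; the regex is inferred from that example and is an assumption, not a documented PayPal format guarantee:

```python
import re

# Inferred from the schema's example (INV2-XXXX-XXXX-XXXX-XXXX); an
# assumption, not a documented PayPal invoice-ID format.
INVOICE_ID_PATTERN = re.compile(r"^INV2(-[A-Z0-9]{4}){4}$")

def get_invoice_args(invoice_id, client_id, client_secret, sandbox=False):
    """Assemble the arguments dict for paypal_get_invoice, sanity-checking
    the invoice ID before any network call is made."""
    if not INVOICE_ID_PATTERN.match(invoice_id):
        raise ValueError(f"unexpected invoice ID format: {invoice_id!r}")
    return {
        "invoice_id": invoice_id,
        "_clientId": client_id,
        "_clientSecret": client_secret,
        "_sandbox": sandbox,
    }

args = get_invoice_args("INV2-ABCD-1234-EFGH-5678",
                        "my-id", "my-secret", sandbox=True)
```

Rejecting a malformed ID locally gives a clearer error than whatever the upstream PayPal API would return.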

paypal_get_order (Grade A)

Get full details of a PayPal order by ID (e.g., "3JU84394D694620H"). Returns buyer info, items, amounts, and fulfillment status.

Parameters (JSON Schema):
order_id (required): PayPal order ID
_clientId (required): PayPal app Client ID
_clientSecret (required): PayPal app Client Secret
_sandbox (optional): Use sandbox environment (default: false)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It accurately states it's a read operation ('Get details'), but lacks details like auth requirements (client ID/secret are in schema), rate limits, or return format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, concise and front-loaded with the key action. No wasted words, but could be slightly improved by adding minimal context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read operation with full schema coverage and no output schema, the description is minimally complete. However, no guidance on prerequisites (e.g., sandbox vs live) or response structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in schema. Description does not add extra meaning beyond what the schema already provides for parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get details') and resource ('PayPal order by its ID'), clearly distinguishing it from sibling tools like paypal_get_invoice or paypal_list_invoices.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use (when needing order details) but provides no explicit when-not or alternatives guidance. Sibling tools like paypal_list_transactions or paypal_list_disputes exist but are not mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

paypal_list_disputes (Grade C)

List chargebacks and claims against your account. Returns dispute IDs, amounts, statuses, and reasons.

Parameters (JSON Schema):
_clientId (required): PayPal app Client ID
_clientSecret (required): PayPal app Client Secret
_sandbox (optional): Use sandbox environment (default: false)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits, but it only says 'list disputes' without mentioning read-only nature, pagination, rate limits, or authentication requirements beyond the schema. The tool requires credentials but the description doesn't clarify that listing disputes is a read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise sentence that effectively communicates the tool's purpose. It is front-loaded with the key action and resource. No unnecessary words, though it could include more detail without becoming verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool lists disputes (which can involve pagination, filtering, statuses), the description is too sparse. It does not mention output format, sorting, filtering by date/status, or any additional parameters beyond auth. Without an output schema, the description should cover what the response contains.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description does not add meaning beyond the schema; it omits explaining that _sandbox, _clientId, _clientSecret are authentication parameters. However, the schema descriptions are self-sufficient, so no additional value is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists disputes (chargebacks and claims) from PayPal, with specific verb 'list' and resource 'disputes'. It distinguishes from sibling tools like paypal_list_invoices and paypal_list_transactions by focusing on disputes, though it doesn't explicitly differentiate from all siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like paypal_list_transactions or paypal_list_invoices. It does not mention prerequisites, filtering options, or when not to use it, leaving the agent without decision-making context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

paypal_list_invoices (Grade B)

List your PayPal invoices. Returns invoice numbers, amounts, statuses, and dates. Use to track billing and outstanding payments.

Parameters (JSON Schema):
page (optional): Page number (default 1)
page_size (optional): Results per page (default 20, max 100)
_clientId (required): PayPal app Client ID
_clientSecret (required): PayPal app Client Secret
_sandbox (optional): Use sandbox environment (default: false)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so the description carries the burden. It reveals the tool is a read operation (list) and mentions pagination indirectly via 'page' and 'page_size' parameters, but it does not disclose rate limits, authentication flows, or any side effects. The description adds some value beyond the schema by summarizing the return fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: two sentences clearly stating purpose and return data. It is front-loaded with the primary action. No superfluous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a list tool with 5 parameters and no output schema, the description covers the high-level return data but does not explain pagination behavior, default sorting, or how to handle errors. It is adequate but could be more complete with additional behavioral notes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so all parameters have descriptions. The tool description adds no extra parameter context beyond the schema, so it meets the baseline of 3. No parameters are explained further in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists invoices from PayPal and specifies what data is returned (invoice numbers, amounts, statuses). It effectively distinguishes itself from sibling tools like paypal_get_invoice and paypal_list_transactions, though it could be more precise about the scope (e.g., all invoices or filtered).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this is the tool for listing invoices, but it does not provide explicit guidance on when to use it versus alternatives like paypal_list_transactions. It also lacks context on prerequisites (e.g., client credentials are required, as per schema).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
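Since the description leaves pagination behavior unstated, a client has to page defensively. A sketch of one reasonable loop, assuming a short page signals the end of results; `call_tool` is a hypothetical callable that performs the MCP tools/call and returns one page's decoded invoice list:

```python
def iter_invoices(call_tool, client_id, client_secret, page_size=20):
    """Page through paypal_list_invoices until a short page is returned.
    Assumption: a page shorter than page_size means no more results; the
    tool description does not document the termination signal."""
    page = 1
    while True:
        batch = call_tool("paypal_list_invoices", {
            "page": page,
            "page_size": min(page_size, 100),  # schema caps page_size at 100
            "_clientId": client_id,
            "_clientSecret": client_secret,
        })
        yield from batch
        if len(batch) < page_size:  # short page: assume end of results
            return
        page += 1

# Demo with a stubbed transport: two pages, the second one short.
_pages = {1: ["INV2-A", "INV2-B"], 2: ["INV2-C"]}
def _stub_call(name, args):
    return _pages.get(args["page"], [])

ids = list(iter_invoices(_stub_call, "my-id", "my-secret", page_size=2))
print(ids)  # ['INV2-A', 'INV2-B', 'INV2-C']
```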

paypal_list_transactions (Grade B)

Find PayPal transactions within a date range. Returns amount, status, payer info, and transaction IDs. Use to audit payments or track cash flow.

Parameters (JSON Schema):
start_date (required): Start date in ISO 8601 format (e.g., 2024-01-01T00:00:00Z)
end_date (required): End date in ISO 8601 format (e.g., 2024-12-31T23:59:59Z)
_clientId (required): PayPal app Client ID
_clientSecret (required): PayPal app Client Secret
_sandbox (optional): Use sandbox environment (default: false)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description must cover behavioral traits. It mentions returning transaction details but does not disclose authentication requirements (though _clientId and _clientSecret hint at it), rate limits, pagination, or whether the operation is read-only. The description is adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at two sentences, front-loading the core purpose. It is efficient with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 5 parameters, no output schema, and empty annotations. The description provides basic purpose and return fields but lacks details on output structure, error handling, or usage context. It is minimally complete but could be more helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all parameters. The description adds no extra parameter information beyond what the schema already provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists PayPal transactions within a date range and specifies the returned fields (amount, status, payer info). It is specific about the resource (transactions) and action (list), but does not differentiate from sibling tools like paypal_list_invoices or paypal_list_disputes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use the tool (listing transactions in a date range) but provides no guidance on when not to use it or alternatives. For instance, it does not mention that paypal_list_invoices might be more appropriate for invoices.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
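The schema wants both dates in the ISO 8601 "Z" form (e.g., 2024-01-01T00:00:00Z). A sketch that formats a UTC window and rejects an inverted range before calling the tool; the helper name is illustrative:

```python
from datetime import datetime, timezone

ISO_Z = "%Y-%m-%dT%H:%M:%SZ"  # the Z-suffixed form the schema shows

def transactions_args(start: datetime, end: datetime,
                      client_id: str, client_secret: str) -> dict:
    """Build paypal_list_transactions arguments from aware datetimes,
    normalizing both endpoints to UTC and validating the window."""
    if end <= start:
        raise ValueError("end_date must be after start_date")
    return {
        "start_date": start.astimezone(timezone.utc).strftime(ISO_Z),
        "end_date": end.astimezone(timezone.utc).strftime(ISO_Z),
        "_clientId": client_id,
        "_clientSecret": client_secret,
    }

args = transactions_args(
    datetime(2024, 1, 1, tzinfo=timezone.utc),
    datetime(2024, 12, 31, 23, 59, 59, tzinfo=timezone.utc),
    "my-id", "my-secret",
)
print(args["start_date"])  # 2024-01-01T00:00:00Z
```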

recall (Grade A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema):
key (optional): Memory key to retrieve (omit to list all keys)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the read operation well but does not mention potential side effects, persistence guarantees, or performance implications. For a simple retrieval tool, this is adequate but not exemplary.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the key behavior. Every word adds value, no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has a simple interface (1 optional param, no output schema). The description covers the two usage modes and mentions persistence across sessions. A bit more detail on the return format could be useful, but it's complete enough for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter. The description adds the nuance that omitting the key lists all memories, which aligns with the schema's optionality. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action: retrieve a memory by key, or list all memories when key is omitted. It distinguishes itself from 'remember' (store) and 'forget' (delete) through context and sibling tool names.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use it (retrieve context saved earlier) and implies when not to (omit key for listing). It does not explicitly exclude alternatives, but the context is clear given the sibling tool names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema):
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses persistence behavior based on authentication status, which is a key behavioral trait beyond what annotations would provide (none provided). No contradictions with annotations. It could also mention that overwriting existing keys is allowed, but the current detail is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long, concise and front-loaded with the primary purpose. The second sentence adds important context about persistence. It wastes no words. Could be slightly more structured but effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (2 parameters, no output schema, no nested objects), the description covers the essential aspects: purpose, use cases, and persistence behavior. It is complete enough for an agent to use correctly. The lack of output schema information is acceptable since the schema provides none and the tool likely returns a success message.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides 100% coverage with clear descriptions for both 'key' and 'value'. The description adds no additional parameter semantics beyond the schema, so a baseline of 3 is appropriate. The examples in the schema's 'key' description ('subject_property', 'target_ticker') are helpful but are part of the schema, not the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool stores a key-value pair in session memory. It specifies the action ('store'), the resource ('key-value pair in session memory'), and the use cases ('save intermediate findings, user preferences, or context across tool calls'). This distinguishes it from siblings like 'forget' and 'recall'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool ('save intermediate findings, user preferences, or context across tool calls') and provides context on persistence ('Authenticated users get persistent memory; anonymous sessions last 24 hours'). However, it does not explicitly state when not to use it or mention alternatives (e.g., 'recall' for retrieval, 'forget' for deletion).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
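Taken together, remember, recall, and forget behave like a small key-value store. A minimal in-memory sketch of the semantics described above; the server's actual storage and error behavior are not documented in this listing, so the missing-key cases below are assumptions:

```python
class SessionMemory:
    """In-memory sketch of the remember / recall / forget trio."""

    def __init__(self):
        self._store = {}

    def remember(self, key, value):
        self._store[key] = value          # overwrites an existing key silently

    def recall(self, key=None):
        if key is None:                   # omit key to list all stored keys
            return sorted(self._store)
        return self._store.get(key)       # assumption: missing key -> None

    def forget(self, key):
        # Assumption: deleting a missing key is a no-op rather than an error;
        # the tool description does not say which the server does.
        self._store.pop(key, None)

mem = SessionMemory()
mem.remember("target_ticker", "AAPL")
print(mem.recall("target_ticker"))  # AAPL
mem.forget("target_ticker")
print(mem.recall())                 # []
```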

