
Server Details

QuickBooks MCP Pack — query customers, invoices, and accounts via QuickBooks Online API.

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-quickbooks
GitHub Stars: 0
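
Because the transport is Streamable HTTP, any MCP client can connect directly. A minimal connection sketch using the official MCP TypeScript SDK, assuming a hypothetical endpoint URL (substitute the server's real address):

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Hypothetical endpoint; use the server's actual URL.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/mcp/quickbooks"),
);

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Enumerate the ten tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

The per-tool snippets below reuse this client instance.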

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Descriptions: B

Average 3.7/5 across 10 of 10 tools scored. Lowest: 2.9/5.

Server Coherence: B

Disambiguation: 3/5

Most tools are distinct, but ask_pipeworx and qb_query both handle user questions about QuickBooks data, creating potential overlap. The others are clearly separated by purpose.

Naming Consistency: 3/5

QuickBooks tools follow a consistent 'qb_verb_noun' pattern, but the memory tools (forget, recall, remember) and utility tools (ask_pipeworx, discover_tools) use plain verbs or different conventions, creating inconsistency.

Tool Count: 4/5

10 tools is a reasonable count for a server that combines a QuickBooks data layer (5 tools) with memory and utility functions. It feels slightly broad but not excessive.

Completeness: 3/5

The QuickBooks tools cover read operations for customers, invoices, and accounts, but lack create/update/delete operations. The memory tools are complete for their purpose, but the overall set feels incomplete for managing QuickBooks data.

Available Tools (10 tools)

ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
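
A hypothetical invocation through the client from the connection sketch above, using one of the description's own example questions:

const answer = await client.callTool({
  name: "ask_pipeworx",
  arguments: { question: "What is the US trade deficit with China?" },
});
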
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that the tool routes to other data sources, which is a key behavioral trait. However, it doesn't mention any limitations or safety aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at a handful of short sentences, front-loaded with purpose, and includes actionable examples.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (1 param, no output schema), the description is nearly complete. It could mention the return type, but the examples implicitly show answers.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with only one parameter, so baseline is 3. The description adds natural language context but no additional meaning beyond 'question'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool answers plain English questions by routing to the best data source, with examples that distinguish it from siblings like qb_query or discover_tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to use plain English and provides examples, but does not mention when not to use it or compare to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
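
A sketch of a catalog search with the same client; the query string is taken from the schema's examples and the limit is arbitrary:

const matches = await client.callTool({
  name: "discover_tools",
  arguments: { query: "find trade data between countries", limit: 5 },
});
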
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses that the tool searches a catalog and returns names and descriptions, but does not explain any side effects (e.g., no changes to data), performance characteristics, or limitations beyond the limit parameter. The description is adequate but not detailed; a 3 is appropriate.

Conciseness: 4/5

The description is concise at three sentences, front-loading the key action and adding context about when to use it. Each sentence adds value: purpose, output, usage guidance. A score of 4 is appropriate as it is efficient but could potentially be even tighter.

Completeness: 4/5

Given the tool's simplicity (search tool, 2 params, no output schema), the description is complete: it explains purpose, output, and invocation order. It does not explain return format or pagination, but that is acceptable as the tool returns names and descriptions, and the limit parameter covers pagination. A 4 is appropriate.

Parameters: 3/5

Schema coverage is 100%, so the description need not add much. The schema's 'query' description already includes concrete examples ('analyze housing market trends'), and the tool description adds only the general instruction to describe what you need. It does not discuss the 'limit' parameter either, but the schema provides its description and default. Baseline 3 is justified.

Purpose: 5/5

The description clearly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need.' It specifies the verb 'search', the resource 'tool catalog', and the mechanism 'by describing what you need'. The added context 'Returns the most relevant tools with names and descriptions' further clarifies the output. This effectively distinguishes it from siblings, which are mostly QuickBooks operations or memory tools.

Usage Guidelines: 5/5

The description explicitly states when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear guidance on invocation order (FIRST) and the context (500+ tools). No alternatives are mentioned, but the exclusivity is implied by the 'FIRST' directive, which is sufficient.

forget: C

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
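
A hypothetical deletion call; the key is illustrative and would have to exist from an earlier remember call:

await client.callTool({
  name: "forget",
  arguments: { key: "target_ticker" }, // illustrative key
});
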
Behavior: 2/5

No annotations provided, so description must disclose behavioral traits. It only states 'Delete' without indicating whether deletion is permanent, requires confirmation, or has side effects. No mention of authorization needs or data impact.

Conciseness: 5/5

Single sentence, concise, front-loaded with verb and object. No unnecessary words.

Completeness: 2/5

Tool is simple with 1 param and no output schema. Description is minimal but misses behavioral context (e.g., idempotency, error handling) that would help an agent decide to use it.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents the 'key' parameter. The description adds no further semantics beyond what the schema provides, so the baseline of 3 is appropriate.

Purpose: 4/5

Description uses specific verb 'Delete' and resource 'stored memory by key'. Clearly states what the tool does. Distinguishes from sibling tools like 'recall' (read) and 'remember' (store).

Usage Guidelines: 2/5

No guidance on when to use this tool vs alternatives. Does not mention prerequisites or context, such as whether the key must exist beforehand.

qb_get_customer: C

Retrieve a customer's complete profile including contact info, email, phone, and account balance by customer ID.

Parameters (JSON Schema)
id (required): QuickBooks Customer ID
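
A sketch of a profile lookup; the customer ID is invented for illustration:

const customer = await client.callTool({
  name: "qb_get_customer",
  arguments: { id: "42" }, // hypothetical QuickBooks Customer ID
});
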
Behavior: 2/5

No annotations provided, so description must carry full burden. It does not mention authorization requirements, rate limits, or any side effects. It only states it returns details, which is expected.

Conciseness: 4/5

A single sentence, efficient and front-loaded with the main action. The field list ('contact info, email, phone, and account balance') could be trimmed if an output schema covered it, but the description is still concise.

Completeness: 2/5

Given no output schema, the description should clarify the return format or fields. It lists some fields but not all. No guidance on error cases or what happens if the ID is not found. The tool is simple, but the description could be more complete.

Parameters: 3/5

Schema coverage is 100% for the single parameter 'id'. Description does not add extra meaning beyond the schema description 'QuickBooks Customer ID'. Baseline score of 3 applies.

Purpose: 4/5

The description clearly states the verb 'Retrieve' and the resource, a QuickBooks customer identified by ID. It distinguishes the tool from siblings like qb_list_invoices and qb_get_invoice. However, it does not explicitly differentiate from qb_query, which could also retrieve a customer.

Usage Guidelines: 2/5

No guidance on when to use this vs qb_query or other list tools. The description implies use when you have an ID, but does not state that qb_query might be an alternative for custom filtering.

qb_get_invoice: A

Retrieve a complete invoice by ID including all line items, amounts, taxes, and payment history.

Parameters (JSON Schema)
id (required): QuickBooks Invoice ID
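
A sketch of a single-invoice fetch; the ID is invented:

const invoice = await client.callTool({
  name: "qb_get_invoice",
  arguments: { id: "1045" }, // hypothetical QuickBooks Invoice ID
});
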
Behavior: 3/5

Annotations are empty, so the description must convey behavioral traits. It states that the complete invoice is returned, including line items, amounts, taxes, and payment history, which indicates the output is comprehensive. However, it does not mention any side effects, rate limits, or authentication requirements. For a read operation, this is acceptable but minimal.

Conciseness: 5/5

The description is a single short sentence with no filler words. It front-loads the key action and then adds the return value detail. Every word earns its place.

Completeness: 4/5

Given the tool's low complexity (single required parameter, no nested objects, no output schema) and empty annotations, the description is nearly complete. It could mention that the ID is the QuickBooks internal ID, but the schema already covers that. No major gaps.

Parameters: 3/5

The schema has 100% description coverage: the parameter 'id' is described as 'QuickBooks Invoice ID'. The description adds no further meaning beyond that. Baseline 3 is appropriate since schema already covers it adequately.

Purpose: 5/5

The description clearly conveys retrieval of a single QuickBooks invoice, specifying the verb ('Retrieve'), the resource ('invoice'), and the method of identification ('by ID'). It also distinguishes itself from siblings like qb_list_invoices (which lists invoices) by focusing on single-invoice retrieval.

Usage Guidelines: 3/5

The description implies use when you need details of a specific invoice, but does not explicitly state when not to use it or mention alternatives like qb_list_invoices for multiple invoices. Context signals show a sibling tool qb_list_invoices, but no guidance is given on choosing between them.

qb_list_accounts: B

Get your chart of accounts with account names, types (asset/liability/equity/etc), balances, and classifications.

Parameters (JSON Schema)
max_results (optional): Maximum number of accounts to return (default 100, max 1000)
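
A hypothetical chart-of-accounts pull that raises the cap above the default of 100:

const accounts = await client.callTool({
  name: "qb_list_accounts",
  arguments: { max_results: 500 },
});
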
Behavior: 3/5

No annotations are provided, so the description carries the burden. It describes a read operation (listing the chart of accounts) with no mention of destructive side effects or auth needs. It doesn't disclose pagination behavior beyond the max_results parameter, which is already in the schema. The description adds basic behavioral context but could be more thorough.

Conciseness: 4/5

The description is a single concise, front-loaded sentence that efficiently states the purpose and return fields. It could be slightly more compact but is well-structured.

Completeness: 3/5

Given the tool is a simple list with one optional parameter and no output schema, the description is adequate but not complete. It doesn't mention the default max_results value (which is in the schema but not in the description) or note that results are paginated. No output schema means description could clarify return format further.

Parameters: 3/5

Schema description coverage is 100% (max_results is fully described). The description adds no additional parameter meaning beyond what the schema already provides, so baseline 3 is appropriate.

Purpose: 4/5

The description clearly states it lists chart of accounts from QuickBooks and specifies the returned fields (name, type, balance, classification). However, it does not distinguish itself from sibling tools like qb_list_invoices or qb_get_customer, which are obviously different resources.

Usage Guidelines: 3/5

The description implies use when you need account listing data but provides no guidance on when not to use it or alternatives. Sibling tools like qb_query might be alternatives for custom queries, but this is not mentioned.

qb_list_invoices: A

Get recent invoices with number, customer, amount, due date, and payment status. Use qb_get_invoice for full line-item details.

Parameters (JSON Schema)
max_results (optional): Maximum number of invoices to return (default 25, max 1000)
start_position (optional): Starting position for pagination (default 1)
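
A sketch of pagination plus the drill-down the description recommends; the invoice ID is illustrative:

// Page 2: skip the first 25 invoices.
const page2 = await client.callTool({
  name: "qb_list_invoices",
  arguments: { max_results: 25, start_position: 26 },
});

// Then fetch full line items for one result (ID invented).
const detail = await client.callTool({
  name: "qb_get_invoice",
  arguments: { id: "1045" },
});
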
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It indicates a read operation (list) and mentions recency ('recent'), which adds some context. However, it does not disclose any pagination behavior beyond what the schema implies, nor potential performance impacts or authorization requirements.

Conciseness: 5/5

Two short sentences that convey the tool's purpose, its return data, and a pointer to qb_get_invoice, without any filler. Every word adds value.

Completeness: 4/5

With no output schema, the description compensates by listing the fields returned. The tool has simple parameters fully described in the schema, and the description covers the core purpose and output. Some additional context about filtering or ordering could improve completeness.

Parameters: 4/5

Schema description coverage is 100%, so the schema already documents both parameters. The description does not add parameter-specific meaning beyond the schema, but it does provide context for the return fields, which indirectly helps parameter interpretation. Given full schema coverage, a 4 is appropriate.

Purpose: 5/5

The description clearly states the tool lists recent invoices from QuickBooks and enumerates the specific fields returned (invoice number, customer, amount, due date, status), making it highly specific and distinguishable from siblings.

Usage Guidelines: 3/5

The description does point to qb_get_invoice for full line-item details, which is useful cross-tool guidance, but it does not differentiate from other list tools like qb_list_accounts or explain when not to use it.

qb_query: A

Search QuickBooks data by customer, invoice, or account using filters like name, amount, date, or status. Returns matching records with full details.

Parameters (JSON Schema)
query (required): QuickBooks SQL-like query string (e.g., "SELECT * FROM Customer MAXRESULTS 10")
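
A hypothetical query call; the string mimics the schema's SQL-like example but has not been validated against this server:

const unpaid = await client.callTool({
  name: "qb_query",
  arguments: {
    // Illustrative QuickBooks-style query; not verified against this server.
    query: "SELECT * FROM Invoice WHERE Balance > '0' MAXRESULTS 25",
  },
});
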
Behavior: 3/5

With no annotations, the description carries the full burden. It explains that the tool searches QuickBooks data, which implies read behavior. However, it does not disclose potential side effects (is it read-only?), rate limits, or error handling. The schema's example query suggests read-only access, but this is not explicit.

Conciseness: 5/5

Two sentences, front-loaded with purpose and immediately followed by the return format. No fluff; every sentence adds value.

Completeness: 4/5

Given a single required parameter, 100% schema coverage, and no output schema, the description adequately explains what the tool does and how to use it, and the schema's example shows a typical query. However, missing info on the read-only nature and error behavior prevents a 5.

Parameters: 4/5

Schema coverage is 100%, so the baseline is 3. The schema itself specifies the query language (SQL-like) and provides an example query string, and the description adds the filterable fields (name, amount, date, status) and the queryable entities (customer, invoice, account). This enriches understanding beyond the bare parameter.

Purpose: 5/5

The description clearly states that the tool searches QuickBooks customers, invoices, and accounts using filters, and the schema shows the SQL-like query syntax with a concrete example. It distinguishes itself from sibling tools (like qb_get_customer and qb_get_invoice) by offering a generic querying interface.

Usage Guidelines: 3/5

The description implies when to use this tool (when needing custom queries) but does not explicitly mention when not to use it or alternatives like the specific getter tools. There is no guidance on query syntax limits or best practices.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
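
A sketch of both modes, lookup by key and listing all keys; the key is one of the schema's examples:

// Retrieve one stored memory by key.
const note = await client.callTool({
  name: "recall",
  arguments: { key: "subject_property" },
});

// Omit the key to list all stored keys instead.
const keys = await client.callTool({ name: "recall", arguments: {} });
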
Behavior: 3/5

No annotations provided, so the description carries the full burden. It states the tool retrieves a stored memory or lists all keys, which is transparent. However, it does not mention if the operation is read-only, whether it accesses persistent storage, or any side effects. With no annotations, more detail on behavioral traits would be beneficial.

Conciseness: 4/5

The description is concise with two sentences. The first sentence states the core functionality, and the second provides usage context. No unnecessary words. Could be slightly more structured by front-loading the main action, but it's effective.

Completeness: 3/5

Given the simple tool with one optional parameter, no output schema, and no annotations, the description is fairly complete. It explains both modes of operation. However, it doesn't describe the return format (e.g., what the list of keys looks like or the structure of a retrieved memory). With no output schema, this gap could be filled.

Parameters: 3/5

Schema coverage is 100%: the one parameter 'key' is described, and its schema text already notes that omitting it lists all keys. The description restates that omit-to-list behavior and adds session context, but the schema documents the parameter adequately, so baseline 3 is appropriate.

Purpose: 4/5

The description clearly states the tool retrieves a memory by key or lists all memories when key is omitted. It specifies the resource (previously stored memory) and the action (retrieve/list). However, it does not differentiate from the 'forget' sibling tool, but the purpose is distinct enough.

Usage Guidelines: 4/5

The description says to use this tool to retrieve context saved earlier, implying when you need previous context. It doesn't explicitly say when not to use it or mention alternatives like 'ask_pipeworx' or 'discover_tools', but the context is clear for a memory retrieval tool.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
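
A hypothetical store call; the key comes from the schema's examples and the value is invented:

await client.callTool({
  name: "remember",
  arguments: {
    key: "user_preference", // key taken from the schema's examples
    value: "Group invoice summaries by customer", // invented value
  },
});
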
Behavior: 4/5

Without annotations, the description explicitly covers persistence behavior (authenticated vs. anonymous) and session lifetime, which are critical behavioral traits beyond the schema.

Conciseness: 5/5

Three concise sentences, front-loaded with the core purpose, then usage guidance, then persistence details. No wasted words.

Completeness: 4/5

For a simple key-value store with no output schema, the description covers purpose, usage, and behavioral context adequately.

Parameters: 3/5

Schema coverage is 100% and includes examples for the key parameter; the description adds no further detail beyond what the schema provides.

Purpose: 5/5

The description uses specific verb-resource ('Store a key-value pair') and distinguishes from siblings like 'recall' and 'forget', clearly indicating memory management.

Usage Guidelines: 4/5

It states when to use ('save intermediate findings, user preferences, or context across tool calls') and provides context on persistence differences between authenticated and anonymous sessions.
