
Server Details

CFPB MCP — Consumer Financial Protection Bureau complaint database (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-cfpb
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

(Diagram: MCP client → Glama gateway → MCP server)

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.9/5 across 10 of 10 tools scored. Lowest: 3.1/5.

Server Coherence (Grade: B)
Disambiguation: 3/5

The tool set mixes general-purpose tools (ask_pipeworx, discover_tools, memory tools) with CFPB-specific complaint tools. The CFPB tools are distinct, but ask_pipeworx overlaps with them since it can also answer CFPB queries, and discover_tools is a meta-tool for searching the catalog. This creates some ambiguity about which tool to use for a given task.

Naming Consistency: 3/5

CFPB-specific tools follow a consistent 'cfpb_' prefix with descriptive names (e.g., cfpb_company_complaints). However, memory tools (forget, recall, remember) and meta-tools (ask_pipeworx, discover_tools) use plain verbs without a prefix, breaking the pattern. The mix of styles is noticeable.

Tool Count: 3/5

With 10 tools the count is reasonable, but the set feels unbalanced: five CFPB-specific tools sit alongside five general-purpose ones (ask_pipeworx, discover_tools, and the three memory tools) that are not directly related to the CFPB domain. The server seems to serve dual purposes, making the count appropriate but the scope unclear.

Completeness: 4/5

The CFPB tools cover the main complaint operations: search, get by ID, company complaints, product breakdown, and top companies. This provides good coverage for consumer complaint data. However, there is no tool for submitting complaints or managing company information. The memory tools add extra functionality but are not part of the core domain.

Available Tools

10 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
  question (required): Your question or request in natural language
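
As a sketch of how a client would invoke this tool: MCP tool calls travel as JSON-RPC 2.0 `tools/call` requests, so a minimal payload might look like the following (the endpoint URL and transport framing are omitted because the page does not show them; the question text is one of the description's own examples).

```python
import json

# Hypothetical MCP tools/call payload for ask_pipeworx.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
print(json.dumps(request, indent=2))
```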
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool picks the best data source and fills arguments, indicating autonomous behavior. No annotations are provided, so the description carries the full burden; it could mention that the tool may call other tools internally or that calls may add latency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, well-structured, and front-loaded with the purpose. Every sentence adds value, including examples.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description could mention the format of the answer or potential limitations; even so, it is sufficient given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single 'question' parameter. The description adds value by explaining how to use it (plain English) and providing examples, going beyond the schema's minimal description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool answers questions in plain English by selecting the best data source, filling arguments, and returning results. This distinguishes it from sibling tools that are specific to CFPB complaints or memory operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'No need to browse tools or learn schemas' and provides examples, making it clear when to use this tool (for any plain-English question) and implying that it replaces browsing the catalog for a more specific tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cfpb_company_complaints (Grade: B)

Get recent complaints against a specific company (e.g., 'Wells Fargo'). Returns narratives, company responses, and resolution details sorted newest first.

Parameters (JSON Schema)
  company (required): Company name (e.g., "BANK OF AMERICA", "CITIBANK", "JPMORGAN CHASE")
  limit (optional): Number of results (1-100, default 25)
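
The tool presumably wraps the CFPB's public Consumer Complaint Database search API. As an illustration only, a roughly equivalent direct query might be the following; the endpoint and parameter names come from the CFPB's public API documentation, not from this page, and the sort value is an assumed mapping of the tool's newest-first ordering.

```python
import httpx

CCDB = "https://www.consumerfinance.gov/data-research/consumer-complaints/search/api/v1/"

# Schema examples suggest uppercase company names.
resp = httpx.get(CCDB, params={
    "company": "WELLS FARGO",
    "size": 25,                  # mirrors the tool's default limit
    "sort": "created_date_desc", # newest first (assumed mapping)
})
resp.raise_for_status()
print(resp.json().keys())  # inspect the envelope before relying on its shape
```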
Behavior: 3/5

No annotations are present, so the description carries the full burden. It discloses that the tool is read-only ("Get") and returns sorted data, but does not mention potential rate limits, data freshness, or whether the company parameter is case-sensitive or requires exact matching. The description adds value but lacks depth on behavioral traits.

Conciseness: 4/5

The description is two short sentences, concise and to the point. It front-loads the purpose and result type, though it could front-load the sorting behavior. No wasted words.

Completeness: 3/5

Given the tool has 2 params, no output schema, and no annotations, the description provides a minimal functional overview. It explains the input (company) and output (complaint details, response info) but omits pagination and error handling, though it does note results are 'sorted newest first'. It is adequate but not thorough.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description does not add parameter-level detail beyond the schema. It mentions 'company' implicitly but does not clarify that company names should be uppercase, as the schema example shows. The limit parameter is not discussed in the description.

Purpose: 4/5

The description clearly states the tool retrieves consumer complaints for a specific company, sorted newest first, and returns details and company response information. This distinguishes it from siblings like cfpb_search_complaints (which likely allows broader search) and cfpb_get_complaint (probably a single-complaint lookup).

Usage Guidelines: 2/5

The description does not say when to use this tool versus alternatives, nor when not to use it. It implies usage for company-specific complaints but lacks guidance on choosing between this, cfpb_search_complaints, or cfpb_top_companies.

cfpb_get_complaint (Grade: A)

Retrieve full details for a specific complaint by ID. Returns narrative, company response, resolution status, and metadata.

Parameters (JSON Schema)
  complaint_id (required): CFPB complaint ID number
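
A hypothetical `tools/call` arguments object for this tool; the ID value is made up for illustration, and the parameter's JSON type is not shown on the page, so a bare number is assumed.

```python
# Made-up complaint ID; real IDs would come from cfpb_search_complaints results.
call = {
    "name": "cfpb_get_complaint",
    "arguments": {"complaint_id": 3124567},  # type assumed, not confirmed here
}
```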
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It clearly indicates this is a read operation (no side effects) and requires a complaint ID. However, it doesn't mention rate limits, error conditions, or data freshness.

Conciseness: 5/5

The description is two concise sentences that convey all essential information with no wasted words. It front-loads the action and resource.

Completeness: 4/5

Given the tool is simple (one parameter, no output schema), the description adequately covers purpose and usage. It could state the return format more precisely, and with the output schema absent some completeness is lost.

Parameters: 4/5

The schema already has 100% description coverage for the single parameter, so the baseline is 3. The schema describes it as a 'CFPB complaint ID number', and the description reinforces its meaning by stating that retrieval is 'by ID'.

Purpose: 5/5

The description uses the specific verb 'Retrieve' and clearly identifies the resource: 'full details for a specific complaint by ID'. It distinguishes this tool from siblings like cfpb_search_complaints and cfpb_company_complaints by emphasizing a single complaint identified by ID.

Usage Guidelines: 4/5

The description implies usage when you need full details of one specific complaint, contrasting with the search tools. However, it doesn't explicitly say when not to use it or mention alternatives for batch retrieval.

cfpb_product_breakdown (Grade: B)

Get complaint counts by product category (e.g., 'Credit Card', 'Mortgage'). Filter by company or date range.

Parameters (JSON Schema)
  company (optional): Company name to filter by
  start_date (optional): Start date in YYYY-MM-DD format
  end_date (optional): End date in YYYY-MM-DD format
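
Since both dates must be YYYY-MM-DD strings, a cautious client might validate them before building the call. A small sketch follows; the `ymd` helper is illustrative, not part of the server.

```python
from datetime import datetime

def ymd(s: str) -> str:
    """Validate a YYYY-MM-DD date string, raising ValueError otherwise."""
    datetime.strptime(s, "%Y-%m-%d")
    return s

# Hypothetical tools/call arguments for a filtered breakdown.
call = {
    "name": "cfpb_product_breakdown",
    "arguments": {
        "company": "CITIBANK",           # optional filter
        "start_date": ymd("2024-01-01"),
        "end_date": ymd("2024-06-30"),
    },
}
```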
Behavior: 3/5

No annotations are provided, so the description must carry the full burden. It correctly indicates a read operation (getting counts) with optional filtering, but does not disclose return format, pagination, rate limits, or any side effects. The description is adequate but lacks detail for a non-annotated tool.

Conciseness: 4/5

The description is two short sentences, concise and front-loaded with the core purpose, and it mentions the filters efficiently. There is no redundancy, though it could be slightly more structured (e.g., listing parameters explicitly).

Completeness: 3/5

Given no output schema and no annotations, the description is minimally viable for a tool with 3 optional parameters. It states the purpose and the filtering options but lacks detail on return structure, pagination, or use cases. It is complete enough for a basic understanding but not comprehensive.

Parameters: 3/5

Schema description coverage is 100%, meaning the schema already describes each parameter well (company name, start/end date format). The description adds context by noting filtering by company or date range and grouping by product category, but does not elaborate on parameter constraints beyond what the schema provides. The baseline of 3 is appropriate.

Purpose: 4/5

The description clearly states it gets 'complaint counts by product category', a specific verb and resource. It is distinct from sibling tools like cfpb_get_complaint (retrieves individual complaints) and cfpb_search_complaints (searches complaints), but does not explicitly differentiate itself from cfpb_top_companies, which may overlap in providing aggregated counts.

Usage Guidelines: 3/5

The description mentions the company and date-range filters, implying when to use them, but does not say when NOT to use this tool or what the alternatives are. No sibling differentiation or usage constraints are given.

cfpb_search_complaints (Grade: A)

Search consumer complaints by keyword, company, product, or date range. Returns complaint narratives, company responses, and resolution status.

Parameters (JSON Schema)
  query (optional): Search term (e.g., "overdraft fees", "denied claim"). Optional if other filters provided.
  company (optional): Company name to filter by (e.g., "BANK OF AMERICA", "WELLS FARGO")
  product (optional): Product category (e.g., "Credit card", "Mortgage", "Student loan", "Vehicle loan or lease", "Checking or savings account", "Credit reporting", "Debt collection")
  start_date (optional): Start date in YYYY-MM-DD format
  end_date (optional): End date in YYYY-MM-DD format
  limit (optional): Number of results (1-100, default 25)
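
Again assuming the server fronts the public CCDB search API, a roughly equivalent direct query could look like the sketch below. The parameter names follow the CFPB's published API, and the mapping of start_date/end_date onto date_received_min/max is an assumption, not something this page confirms.

```python
import httpx

CCDB = "https://www.consumerfinance.gov/data-research/consumer-complaints/search/api/v1/"

resp = httpx.get(CCDB, params={
    "search_term": "overdraft fees",
    "product": "Checking or savings account",
    "date_received_min": "2024-01-01",  # assumed mapping of start_date
    "date_received_max": "2024-12-31",  # assumed mapping of end_date
    "size": 25,
})
resp.raise_for_status()
data = resp.json()
print(type(data))  # envelope shape not asserted here
```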
Behavior: 3/5

There are no annotations provided, so the description carries the full burden. It describes the tool as a search that returns specific data, implying it is read-only and non-destructive, which is correct. It does not disclose behavioral traits like rate limits, pagination behavior, or whether it returns raw or processed data, but it is adequate given the straightforward nature of the tool.

Conciseness: 5/5

The description is two concise sentences: the first states the action and key filters, the second clarifies the return values. No wasted words.

Completeness: 4/5

For a search tool with 6 parameters all documented in the schema and no output schema, the description provides enough context on what it does and what it returns. It could mention that results are capped by the limit parameter or whether start_date and end_date must be used together, but overall it is fairly complete.

Parameters: 3/5

The input schema has 100% coverage, with all parameters described. The tool description does not add new meaning beyond listing the filter types, but it does summarize them (keyword, company, product, date range) in a more accessible way. Given the high schema coverage, the baseline of 3 is appropriate.

Purpose: 5/5

The description uses the specific verb 'Search' and lists what is returned (complaint narratives, company responses, resolution status). It distinguishes itself from sibling tools like cfpb_get_complaint (single complaint) and cfpb_company_complaints (company-specific) by being the general search tool.

Usage Guidelines: 4/5

The description lists the filter options (keyword, company, product, date range), which gives context on when to use this tool. However, it does not explicitly state when not to use it or point to more specific sibling tools, so it misses some guidance.

cfpb_top_companies (Grade: B)

Find companies with the most complaints in a date range. Returns ranked list with company names and complaint counts.

Parameters (JSON Schema)
  product (optional): Product filter (e.g., "Mortgage", "Credit card")
  start_date (optional): Start date in YYYY-MM-DD format
  end_date (optional): End date in YYYY-MM-DD format
  limit (optional): Number of top companies to return (default 10)
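
A hypothetical arguments object for a top-5 mortgage ranking over one quarter; the values are illustrative.

```python
# Hypothetical tools/call arguments for cfpb_top_companies.
call = {
    "name": "cfpb_top_companies",
    "arguments": {
        "product": "Mortgage",
        "start_date": "2024-01-01",
        "end_date": "2024-03-31",
        "limit": 5,  # default is 10
    },
}
```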
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It states the tool returns a ranked list of companies with complaint counts for a date range, implying a read-only, aggregated query, but it does not disclose behavioral traits such as rate limits or pagination. The description adds context beyond the schema but lacks depth.

Conciseness: 4/5

The description is two sentences with the purpose front-loaded. It is concise and avoids unnecessary detail.

Completeness: 3/5

Given the tool has 4 parameters, no output schema, and no annotations, the description is adequate but not complete. It explains the purpose and the ranked return format, but does not describe error behavior or whether limit does anything beyond capping the number of companies returned.

Parameters: 3/5

The input schema has 100% description coverage, so the baseline is 3. The description adds no additional parameter-level meaning beyond what the schema provides: the date formats and product-filter examples come from the schema descriptions alone.

Purpose: 4/5

The description clearly states the action ('Find') and the resource ('companies with the most complaints in a date range'). It is distinct from siblings like cfpb_search_complaints (which searches individual complaints) and cfpb_company_complaints (per-company details), but does not explicitly contrast itself with them.

Usage Guidelines: 3/5

The description implies a top-N ranking use case for identifying the most-complained-about companies. However, it gives no explicit guidance on when to use this tool versus alternatives like cfpb_product_breakdown or cfpb_company_complaints, nor does it mention prerequisites or limitations.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
  query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
  limit (optional): Maximum number of tools to return (default 20, max 50)
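
A sketch of a catalog search call, reusing one of the schema's own example queries.

```python
# Hypothetical tools/call arguments for discover_tools.
call = {
    "name": "discover_tools",
    "arguments": {
        "query": "analyze housing market trends",
        "limit": 10,  # default 20, max 50
    },
}
```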
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It states the tool returns 'the most relevant tools with names and descriptions', but doesn't disclose whether it modifies state or requires special permissions. As a search tool it is likely read-only, but this is not explicitly stated.

Conciseness: 5/5

The description is three sentences, each providing essential information: what it does, what it returns, and when to use it. No unnecessary words, and the key action is front-loaded.

Completeness: 4/5

Given that this is a simple search tool with no output schema and only two parameters, the description covers the main aspects: purpose, input format, and usage context. It could mention whether results are ranked or note any limitations, but overall it is sufficient.

Parameters: 4/5

Schema description coverage is 100%, so the baseline is 3. The schema explains that 'limit' caps the number of tools returned (default 20, max 50), and the description adds value by showing that 'query' expects a natural-language request, reinforced by concrete examples.

Purpose: 5/5

The description clearly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need.' It specifies the verb 'search' and the resource 'tool catalog', and differentiates itself from siblings by returning tool names and descriptions for selection rather than answers.

Usage Guidelines: 5/5

The description provides explicit guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This tells the agent when to use it and implies it should run before other tools.

forget (Grade: B)

Delete a stored memory by key.

Parameters (JSON Schema)
  key (required): Memory key to delete
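
A minimal arguments object, with the key name patterned on the sibling remember tool's schema examples. Since the behavior for a non-existent key is undocumented, a client should not assume the call is idempotent.

```python
# Hypothetical tools/call arguments for forget.
call = {"name": "forget", "arguments": {"key": "subject_property"}}
# Deleting a key that does not exist may error or succeed silently;
# the description does not say which.
```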
Behavior: 2/5

No annotations are provided, so the description must cover the behavioral aspects of what is a destructive operation. It does not mention idempotency (e.g., what happens when deleting a non-existent key), error behavior, or authorization needs.

Conciseness: 5/5

The description is a single sentence with no wasted words, front-loaded with the action and resource.

Completeness: 3/5

Given the tool's simplicity (one required parameter, no output schema, no annotations), the description is adequate but could be improved by noting the behavior for missing keys or whether deletion is idempotent.

Parameters: 3/5

The schema covers the single parameter with 100% description coverage. The description adds no semantic value beyond what the schema provides, so the baseline of 3 is appropriate.

Purpose: 4/5

The description clearly states the action ('Delete') and the resource ('a stored memory by key'). It distinguishes the tool from siblings like 'recall' and 'remember' by focusing on deletion.

Usage Guidelines: 2/5

There is no guidance on when to use this tool versus alternatives like 'remember' (create) or 'recall' (retrieve), and the description does not mention any prerequisites or restrictions.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
  key (optional): Memory key to retrieve (omit to list all keys)
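
The two call shapes the description implies: a targeted lookup versus a full listing. The key name is illustrative, taken from the remember tool's schema examples.

```python
# Hypothetical tools/call arguments for recall.
lookup = {"name": "recall", "arguments": {"key": "subject_property"}}
list_all = {"name": "recall", "arguments": {}}  # omit key to list all memories
```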
Behavior: 3/5

The description discloses the dual behavior (retrieve by key vs. list all) and that memories persist across sessions. No annotations are provided, so the description carries the burden; it adequately covers the basic behaviors but doesn't detail edge cases (e.g., a non-existent key) or performance.

Conciseness: 5/5

Two concise sentences that front-load the core functionality and add usage context. Every word is necessary; no redundancy.

Completeness: 4/5

Given the simple tool (one optional parameter, no output schema), the description is complete enough. It explains the retrieval and listing behaviors and cross-session persistence. It could briefly mention the output format, but that is not essential.

Parameters: 3/5

The input schema has 100% coverage with a clear description for the 'key' parameter, so the baseline of 3 is appropriate. The description reinforces the nuance that omitting the key lists all memories, which the schema's parameter description also notes.

Purpose: 5/5

The description clearly states the tool retrieves a memory by key or lists all memories when the key is omitted. It distinguishes itself from siblings like 'remember' (store) and 'forget' (delete), using the specific verbs 'retrieve' and 'list' with the resource 'memory'.

Usage Guidelines: 4/5

The description explicitly states when to use it (to retrieve context saved earlier) and when to omit the key (to list all). It does not mention when not to use it or compare itself directly with siblings, but the context is clear.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
  key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
  value (required): Value to store (any text: findings, addresses, preferences, notes)
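
A sketch of storing an intermediate finding, with the key name taken from the schema's examples and the value made up for illustration.

```python
# Hypothetical tools/call arguments for remember.
call = {
    "name": "remember",
    "arguments": {
        "key": "target_ticker",
        "value": "AAPL",  # any text; anonymous sessions persist 24 hours
    },
}
```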
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It discloses the behavioral traits: key-value storage, session memory, and persistence duration (persistent for authenticated users, 24 hours for anonymous sessions). This is sufficient for a simple store operation.

Conciseness: 5/5

Three sentences with no waste: the first defines the action, the second explains the usage context, the third adds the behavioral nuance about persistence. The core purpose is front-loaded.

Completeness: 5/5

Given the low complexity (two string parameters, no output schema), the description is complete. It explains the storage mechanism, usage, and persistence without needing additional return-value details.

Parameters: 3/5

Schema description coverage is 100%, with clear examples for key and value. The description adds context about what values can be stored (findings, addresses, preferences) but does not significantly extend the schema's meaning beyond its examples.

Purpose: 5/5

The description explicitly states the tool stores a key-value pair in session memory, with specific use cases like saving intermediate findings, user preferences, or context. It clearly distinguishes itself from siblings such as 'forget' and 'recall'.

Usage Guidelines: 4/5

The description explains when to use the tool (to save context across tool calls) and mentions the persistence difference between authenticated users and anonymous sessions. However, it does not explicitly state when not to use it or compare itself to alternatives like 'recall'.
