
Server Details

ATTOM MCP — Premium real estate data from ATTOM Data Solutions

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-attom
GitHub Stars: 0

Tool Descriptions: B

Average 3.7/5 across 13 of 13 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have clear, distinct purposes (property detail, sales history, school search, etc.). However, 'ask_pipeworx' overlaps with all other tools by design, potentially confusing the agent on when to use it vs. specific tools.

Naming Consistency: 3/5

Tools follow a consistent 'attom_' prefix for domain tools, but 'ask_pipeworx', 'discover_tools', 'forget', 'recall', 'remember' break the pattern. The mix of attom_* and generic memory/query tools reduces consistency.

Tool Count: 4/5

13 tools is reasonable for a property data and memory-augmented assistant. The memory tools add value, though 'ask_pipeworx' and 'discover_tools' could be considered auxiliary. Still well-scoped.

Completeness: 3/5

Covers property detail, assessment, valuation, sales, trends, and schools. Missing features like property listing data, tax history deeper than assessment, or neighborhood stats. Gaps exist but core is present.

Available Tools (13)
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema):
- question (required): Your question or request in natural language
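As a minimal sketch, assuming the server accepts standard MCP JSON-RPC tools/call requests, an invocation of ask_pipeworx might look like the following; the question text reuses one of the description's own examples:

```python
import json

# Sketch of an MCP tools/call request for ask_pipeworx.
# The payload shape follows the MCP JSON-RPC convention; the
# question value is one of the description's own examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            "question": "What is the US trade deficit with China?",
        },
    },
}
print(json.dumps(request, indent=2))
```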
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes high-level behavior: selects tool, fills arguments, returns result. With no annotations, description carries burden. Could clarify if it calls external APIs or uses local data, and any limitations (e.g., timeouts, data freshness). Currently adequate but not detailed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, each serving a purpose: stating intent, explaining the mechanism, setting expectations, and providing examples. Could be more concise by merging the first two sentences, but still efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given single parameter with full schema coverage and no output schema, description explains enough to use the tool. However, lacks information on response format or error handling, which could be important for an AI agent. Completeness is adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the parameter 'question' is described as 'Your question or request in natural language'. The description adds no new semantic detail beyond that. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool answers natural language questions by selecting the best data source and filling arguments. Distinguishes from siblings like 'discover_tools' and data-specific tools by offering a unified query interface.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage guidance: use natural language, no need to browse tools or learn schemas. Includes examples. Does not mention when not to use or specific alternatives, but context signals show no sibling with similar purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

attom_assessment: A

Check property tax assessment details. Returns assessed value, market value, tax amount, tax year, and historical trends.

Parameters (JSON Schema):
- _apiKey (required): ATTOM API key
- address1 (required): Street address (e.g., "123 Main St")
- address2 (required): City, state ZIP (e.g., "Denver, CO 80202")
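A minimal sketch of the request such a call would carry, assuming the standard MCP tools/call shape; "YOUR_ATTOM_API_KEY" is a placeholder and the address values mirror the schema's own examples:

```python
import json

# Sketch of a tools/call request for attom_assessment.
# "YOUR_ATTOM_API_KEY" is a placeholder, not a real key; the
# address fields reuse the examples given in the schema.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "attom_assessment",
        "arguments": {
            "_apiKey": "YOUR_ATTOM_API_KEY",
            "address1": "123 Main St",
            "address2": "Denver, CO 80202",
        },
    },
}
# All three parameters are required by the schema.
assert set(request["params"]["arguments"]) == {"_apiKey", "address1", "address2"}
print(json.dumps(request, indent=2))
```

The sibling tools attom_avm, attom_property_detail, attom_rental_avm, and attom_sales_history share this same three-argument shape; only the "name" field changes.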
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description must convey behavioral traits. It states the tool retrieves data (read-only) and lists what's returned. However, it does not disclose any limitations, prerequisites (beyond API key), rate limits, or data freshness. With no annotations, a 3 is adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences that list the returned items, concise and front-loaded with the core purpose. No superfluous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description covers the core purpose and return data points. However, it lacks details on error handling, data coverage, or typical use cases. For a simple retrieval tool, it is mostly adequate but could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description does not add meaning beyond the schema (e.g., address format). The tool has 3 parameters, all documented in the schema, and the description does not elaborate on them.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks property tax assessment details, listing specific data points (assessed value, market value, tax amount, tax year, historical trends). It uses a specific verb ("Check") and resource ("property tax assessment details"), distinguishing it from siblings like attom_property_detail or attom_sales_history.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (property tax queries) but provides no explicit guidance on when to use this tool versus siblings (e.g., attom_property_detail for broader property info). No alternative tools or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

attom_avm: A

Estimate property market value. Returns estimated value, confidence score, and low/high range for valuation analysis.

Parameters (JSON Schema):
- _apiKey (required): ATTOM API key
- address1 (required): Street address (e.g., "123 Main St")
- address2 (required): City, state ZIP (e.g., "Denver, CO 80202")
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It describes the output (value, confidence, range) but does not disclose any side effects, rate limits, or authentication requirements beyond the API key parameter. Since the parameter schema already requires an API key, this is not a gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences, front-loaded with verb and resource, no wasted words. Clearly communicates purpose and outputs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description provides key outputs but lacks details on edge cases, error conditions, or property identification requirements. Adequate for a simple valuation tool with clear parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all parameters. Description does not add additional parameter semantics beyond what schema provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool estimates a property's market value and enumerates the outputs: estimated value, confidence score, low/high range. It distinguishes itself from sibling tools like 'attom_rental_avm' and 'attom_assessment' by specifying the valuation (AVM) focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when needing an AVM estimate for a property, but does not explicitly state when not to use it or compare with alternatives like attom_rental_avm for rental properties or attom_assessment for tax assessment.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

attom_property_detail: A

Get detailed property specs by address. Returns lot size, square footage, bedrooms, bathrooms, year built, construction type, and heating/cooling systems.

Parameters (JSON Schema):
- _apiKey (required): ATTOM API key
- address1 (required): Street address (e.g., "123 Main St")
- address2 (required): City, state ZIP (e.g., "Denver, CO 80202")
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It clearly states it retrieves data (read operation) and lists the data categories, but does not disclose rate limits, required API key usage (though schema indicates it), or any side effects. It is accurate and consistent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences that efficiently list the key property characteristics. It is front-loaded with the action and resource, and every phrase adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple property detail lookup with good schema coverage, the description adequately explains what the tool returns. However, it could mention the output format (e.g., JSON) or clarify that the address parameters must be exact. Slight room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already describes each parameter. The description adds no additional semantics beyond summarizing what the tool returns, which matches the baseline for full coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('detailed property specs by address'), listing concrete attributes (lot size, square footage, etc.). It clearly distinguishes from siblings like attom_property_search (which likely searches) and attom_assessment (which focuses on assessment data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when you need property details by address, but does not explicitly state when not to use it (e.g., for sales history use attom_sales_history) or provide alternatives. No guidance on prerequisites or context beyond the address parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

attom_rental_avm: A

Estimate rental property income. Returns estimated monthly rent, rental yield percentage, and rental value range.

Parameters (JSON Schema):
- _apiKey (required): ATTOM API key
- address1 (required): Street address (e.g., "123 Main St")
- address2 (required): City, state ZIP (e.g., "Denver, CO 80202")
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden of behavioral disclosure. It describes the output (estimated monthly rent, rental yield, rental value range) but does not disclose potential behaviors such as whether the tool is read-only, any prerequisites (e.g., property must exist in ATTOM database), or error conditions. The lack of annotation increases the need for such detail, but the description is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences that clearly convey the purpose and key outputs. Every word is meaningful and there is no redundancy. It is appropriately concise and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has three simple parameters, 100% schema coverage, and no output schema, the description is adequate but not thorough. It names the outputs but does not explain the format or possible variations. Since there is no output schema, the description could be more explicit about return values. Still, it covers the essential purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description does not add additional meaning beyond what the schema provides; it only lists the output types. The schema already explains address1 and address2 clearly. No extra parameter semantics are offered.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: estimating rental income with specific outputs (estimated monthly rent, rental yield, rental value range). It uses a specific verb ('Estimate') and resource ('rental property income'), and effectively distinguishes it from siblings like 'attom_avm' (sale-value AVM) and 'attom_assessment' (property assessment).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (for rental property valuation) but provides no explicit guidance on when not to use it or alternatives. While siblings are listed, the description does not mention them or contrast with this tool, so an agent would need to infer usage context from the sibling names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

attom_sales_history: A

Get past sales for a property. Returns sale dates, prices, deed types, and buyer/seller details from recent transactions.

Parameters (JSON Schema):
- _apiKey (required): ATTOM API key
- address1 (required): Street address (e.g., "123 Main St")
- address2 (required): City, state ZIP (e.g., "Denver, CO 80202")
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the full burden. It discloses the included fields and that coverage is limited to recent transactions, but does not mention rate limits, pagination, or what happens if no data exists. The tool is read-only, but no explicit statement of non-destructiveness is made.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences that efficiently convey the purpose and key data fields. Every word adds value, and it is front-loaded with the main action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 parameters (all required, simple strings), no output schema, and no annotations, the description provides a reasonable overview. However, it omits details like pagination, error behavior, and output format, which could be important for an agent using the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are fully described in the schema. The description adds no extra parameter-level meaning beyond what the schema already provides, but it contextualizes the parameters as inputs for a sales history query.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves past sales for a property, listing the included data fields (sale dates, prices, deed types, buyer/seller details). This distinguishes it from sibling tools like attom_assessment (assessments) and attom_property_detail (general property info).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for historical sales data but does not explicitly state when to use this tool versus alternatives like attom_sales_trend (which likely aggregates trends). No when-not-to-use guidance or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

attom_sales_trend: A

Analyze market sales trends by ZIP code. Returns average/median sale price, sales volume, and price changes over time.

Parameters (JSON Schema):
- geoid (required): ZIP code prefixed with "ZI" (e.g., "ZI80202")
- _apiKey (required): ATTOM API key
- endYear (required): End year (e.g., "2024")
- interval (required): Time interval: monthly, quarterly, or yearly
- startYear (required): Start year (e.g., "2020")
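A minimal sketch of the arguments this tool expects, illustrating the "ZI" prefix the geoid parameter requires; build_geoid is a hypothetical helper, and the API key, year range, and interval values are illustrative:

```python
# Sketch of the arguments attom_sales_trend expects. build_geoid is a
# hypothetical helper showing the "ZI" prefix the geoid parameter
# requires; the key, years, and interval are illustrative values.
def build_geoid(zip_code: str) -> str:
    """Prefix a 5-digit ZIP code with "ZI", as the schema requires."""
    return f"ZI{zip_code}"

arguments = {
    "geoid": build_geoid("80202"),     # -> "ZI80202"
    "_apiKey": "YOUR_ATTOM_API_KEY",   # placeholder, not a real key
    "startYear": "2020",               # years are strings per the schema
    "endYear": "2024",
    "interval": "yearly",              # monthly, quarterly, or yearly
}
print(arguments["geoid"])
```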
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It states it returns trends over time with specified metrics, which is useful. However, it does not mention data freshness, pagination, rate limits, or any side effects. Since the tool is a read operation, the absence of destructive warnings is acceptable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences, efficient and front-loaded with the main purpose, using 18 words to convey the key information. Minor improvement could include more structure (e.g., bullet points), but it is clear and concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 required parameters and no output schema, the description is adequate but not complete. It explains what the tool returns (trends, metrics) but does not detail the output format or whether additional filtering is possible. The tool is simple enough, so a 3 is reasonable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds context that the tool returns trend data over time, but does not explain parameter semantics beyond what the schema provides. For example, it doesn't clarify how 'interval' affects the output granularity or how 'startYear' and 'endYear' define the date range.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'market sales trends by ZIP code' and specifies the metrics: average/median sale price, volume, and price changes over time. It uses specific verbs and resource, differentiating it from sibling tools like attom_sales_history (which likely returns raw sales history) and attom_assessment (which covers property assessment data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for ZIP code market trend analysis but does not explicitly state when to use this tool versus alternatives like attom_sales_history or attom_avm. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema):
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
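A minimal sketch of a discover_tools call, assuming the standard MCP argument shape; the query text reuses one of the schema's own examples:

```python
# Sketch of discover_tools arguments: a natural-language query plus
# an optional result limit (default 20, max 50 per the schema). The
# query text is one of the schema's own examples.
arguments = {
    "query": "analyze housing market trends",
    "limit": 5,
}
# The schema caps limit at 50 and defaults it to 20 when omitted.
assert 1 <= arguments["limit"] <= 50
print(arguments)
```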
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description states it returns 'most relevant tools with names and descriptions' but doesn't explain details like whether it uses vector search or keyword matching. With no annotations provided, the description carries the full burden; it gives a high-level overview but lacks depth on behavior beyond basic return structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: three sentences, each purposeful. First sentence defines action, second explains output, third gives usage context. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description covers purpose, usage, and output format ('names and descriptions'). It's complete enough for a simple search tool. Could mention pagination or error cases but not strictly necessary for core function.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema already provides detailed descriptions for both parameters (query: 'Natural language description', limit: 'Maximum number'). The description adds value by reinforcing that query is natural language. The score of 4 reflects that the schema itself is already strong, even though the description adds little beyond it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb ('Search'), resource ('Pipeworx tool catalog'), and purpose ('finding right tools'). Explicitly distinguishes from siblings by positioning as a discovery/disambiguation tool to be called first when many tools exist.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear when-to-use guidance and implies it's a prerequisite before other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: C

Delete a stored memory by key.

Parameters (JSON Schema):
- key (required): Memory key to delete
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It does not mention side effects (e.g., irreversible deletion), error behavior for missing keys, or return value. The description is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no wasted words. It front-loads the action and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 param, no output schema, no annotations), the description is minimal but still missing context about error handling, permanence, and confirmation of deletion.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description does not add meaning beyond the schema's description of 'key' as a 'Memory key to delete'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (delete) and the resource (stored memory) and specifies the identifier (key). It effectively distinguishes from sibling tools like 'recall' and 'remember'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lacks guidance on when to use this tool versus alternatives. There is no mention of prerequisites or consequences, such as whether the key must exist or whether deletion is permanent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
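One way to close these gaps is with MCP tool annotations alongside a fuller description. Below is a minimal sketch of what an annotated `forget` tool definition could look like; the annotation field names follow the MCP `ToolAnnotations` shape, but the description wording and the claimed missing-key behavior are illustrative assumptions, not documented behavior of this server.

```python
import json

# Illustrative tool definition (not the server's actual one): the
# annotations make the destructive behavior machine-readable, and the
# description covers permanence and missing-key behavior — the gaps
# the evaluation above calls out.
forget_tool = {
    "name": "forget",
    "description": (
        "Permanently delete a stored memory by key. Deletion cannot be "
        "undone. (Assumed here: deleting a missing key is a no-op.)"
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "key": {"type": "string", "description": "Memory key to delete"},
        },
        "required": ["key"],
    },
    "annotations": {
        "destructiveHint": True,   # signals an irreversible side effect
        "idempotentHint": True,    # repeated calls add no further effect
    },
}

print(json.dumps(forget_tool, indent=2))
```

With annotations present, a client can warn or require confirmation before invoking the tool, instead of relying on the agent to infer destructiveness from prose.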

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
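The two modes described above differ only in whether `key` is present in the call arguments. As a sketch, here is how an agent's JSON-RPC `tools/call` requests for this tool might look; the payload shape follows the MCP `tools/call` method, while the request IDs and example key are hypothetical.

```python
import json

def recall_request(req_id, key=None):
    """Build a JSON-RPC `tools/call` request for the `recall` tool.
    Omitting `key` switches to list-all mode, per the description."""
    args = {} if key is None else {"key": key}
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": "recall", "arguments": args},
    }

# Targeted retrieval vs. listing all stored keys:
print(json.dumps(recall_request(1, "subject_property")))
print(json.dumps(recall_request(2)))
```

Because the list-all behavior lives only in the tool description (not the schema), an agent that reads just the schema would miss that an empty `arguments` object is a valid and useful call.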
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It describes the core behavior (retrieve by key or list all) but does not disclose side effects, permissions, or limitations, such as whether memories persist across sessions. The description is adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading the primary action and then clarifying the optional behavior. Every sentence is necessary, with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single optional parameter, no output schema), the description is nearly complete. It covers both retrieval and listing modes. The only missing aspect is a hint about the return format, but that is acceptable without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single optional parameter 'key'. The description adds context beyond the schema by explaining the behavior when key is omitted (list all), which is not in the schema's parameter description. This adds meaningful value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a memory by key or lists all memories when key is omitted. It specifies the verb 'retrieve' and resource 'memory', distinguishing it from sibling tools like 'remember' and 'forget'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates when to use (to retrieve context saved earlier) and how to use (omit key to list all). While it doesn't explicitly state when not to use or provide alternatives, the context is clear enough for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
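Both parameters above are required, so a `remember` call always carries a full key-value pair. A sketch of the corresponding JSON-RPC `tools/call` payload follows; the shape matches the MCP `tools/call` method, while the key, value, and request ID are hypothetical examples.

```python
import json

def remember_request(req_id, key, value):
    """Build a JSON-RPC `tools/call` request for the `remember` tool.
    Both arguments are required; the value is free-form text."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {
            "name": "remember",
            "arguments": {"key": key, "value": value},
        },
    }

# Save an intermediate finding for later recall by key:
req = remember_request(1, "subject_property", "123 Main St, Springfield")
print(json.dumps(req))
```

Whether a second call with the same key overwrites the stored value is exactly the behavior the evaluation below flags as undocumented.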
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses memory persistence behavior: authenticated users get persistent memory, anonymous sessions last 24 hours. Since annotations are absent, the description carries the burden and does well, though it does not mention storage limits or data overwriting behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no wasted words. Each sentence adds distinct value: what it does, when to use it, and persistence behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and simple key-value storage, the description adequately explains behavior and usage. It does not need to explain return values since no output schema exists, but it could mention whether the tool reports success or failure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description notes that the value can be any text (findings, addresses, etc.), but the schema's own description ('Value to store (any text — findings, addresses, preferences, notes)') already provides similar detail, so little is added beyond the schema. Still, the description reinforces the purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Store a key-value pair in your session memory'. The verb 'store' and resource 'key-value pair' are specific. Distinct from siblings like 'forget' and 'recall', which serve complementary purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says to use this tool for saving intermediate findings, user preferences, or context across tool calls. However, it does not explicitly state when not to use it or suggest alternatives for similar tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
