
Server Details

Altos Research MCP — Real estate market intelligence

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-altos
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.9/5 across 11 of 11 tools scored.

Server Coherence: A
Disambiguation: 4/5

The Altos real estate tools are distinct (listings, trends, stats, files), but 'ask_pipeworx' overlaps with the entire set by acting as a natural-language wrapper, and 'discover_tools' is meta rather than domain-specific. This causes some confusion.

Naming Consistency: 3/5

Altos tools follow a consistent 'altos_<noun>_<suffix>' pattern, but 'ask_pipeworx', 'discover_tools', 'forget', 'recall', and 'remember' break the pattern entirely, creating a mix of styles.

Tool Count: 4/5

11 tools is reasonable for a real estate data server plus memory utilities. The memory tools are justified for context, but 'ask_pipeworx' and 'discover_tools' add redundancy.

Completeness: 4/5

The real estate surface covers listings, stats, trends, and files, but lacks search by criteria (e.g., price range) or property details beyond basic attributes. Memory tools are complete but not part of the core domain.

Available Tools

11 tools
altos_active_listings (A)

Search active property listings in a region (e.g., "Denver, CO"). Returns address, price, beds, baths, square footage, and listing status.

Parameters (JSON Schema)
- date (optional): Date (must be a Friday, YYYY-MM-DD). Defaults to most recent Friday.
- limit (optional): Max rows to return (default 100).
- region (required): Region code (e.g., "ca_los-angeles", "ca_94105").
- _altosKey (required): Altos Research API key.
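
For illustration, a minimal Python sketch of calling this tool over the Streamable HTTP transport, assuming the official MCP Python SDK; the endpoint URL and API key below are placeholders, not values taken from this page:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder; use the server's real endpoint

async def main() -> None:
    # Open a Streamable HTTP connection, then an MCP session over it.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "altos_active_listings",
                {
                    "region": "ca_94105",           # required region code
                    "limit": 25,                    # optional; default 100
                    # "date" omitted: defaults to the most recent Friday
                    "_altosKey": "YOUR_ALTOS_KEY",  # placeholder API key
                },
            )
            for block in result.content:  # content blocks, typically text
                print(block)

asyncio.run(main())
```
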
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses that data includes address, price, beds, baths, sqft, but does not mention that data is as of a Friday, or any auth or rate limit info. Basic transparency but incomplete.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single clear sentence, no fluff. Could be slightly more structured (e.g., bullet points) but very concise and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description doesn't specify return structure or pagination. It lists some fields but not all possible ones. Adequate for a simple listing tool but lacks detail on ordering, defaults, or edge cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are already documented. Description adds context about the type of data returned but does not elaborate on parameter usage beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it returns active listing-level data for a region with specific fields (address, price, beds, baths, sqft). The verb 'Search' and resource 'active listings' are specific. Distinguishes from siblings like altos_inventory_trend and altos_market_stats, which are trend/statistical tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage for retrieving current listing details by region, but no explicit guidance on when to use this vs. alternatives like altos_new_listings or altos_pending_sales. No exclusions or alternatives mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

altos_inventory_trend (A)

Track weekly inventory changes for a region (e.g., "Austin, TX"). Returns trends in inventory, new listings, days on market, median price, and price reductions.

Parameters (JSON Schema)
- weeks (optional): Number of weeks to look back (default 12, max 52).
- region (required): Region code (e.g., "us_national", "ca_los-angeles").
- _altosKey (required): Altos Research API key.
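
A sketch of a trend query, reusing a session opened as in the altos_active_listings example above (the API key remains a placeholder):

```python
from mcp import ClientSession

async def inventory_trend(session: ClientSession) -> None:
    # Look back 26 weeks instead of the 12-week default (max 52).
    result = await session.call_tool(
        "altos_inventory_trend",
        {"region": "us_national", "weeks": 26, "_altosKey": "YOUR_ALTOS_KEY"},
    )
    print(result.content)
```
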
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool tracks multiple metrics over weeks, but does not mention side effects, API key requirements (only in schema), rate limits, or output format. The description adds value beyond the schema by naming the metrics, but behavioral details are limited.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence listing key metrics, which is concise. It could be slightly improved by front-loading the most critical information, but it is not verbose. Every part serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no output schema and moderate complexity (multiple metrics), the description provides a good overview of what data is returned. However, it lacks information on the data format (e.g., time series) or how results are structured, which would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description's parameter info is supplemental. It explains the weeks parameter implicitly through 'over multiple weeks' and the trend concept, but does not clarify the exact role of region or _altosKey beyond schema descriptions. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Track' and names the tracked metrics (inventory, new listings, days on market, median price, price reductions), clearly distinguishing it from sibling tools like altos_active_listings or altos_new_listings, which focus on single snapshots.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for multi-week trend analysis, but does not explicitly state when to use this vs. other listing tools or provide exclusions. The sibling context suggests differentiation by trend vs. snapshot, but the description lacks direct guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

altos_list_files (A)

Browse downloadable regional real estate data files. Returns a catalog with file names, formats, and descriptions.

Parameters (JSON Schema)
- type (optional): Data type: "stats", "listings", "listings-new", "pendings" (default: "stats").
- region (optional): Region code (default: "us_national").
- _altosKey (required): Altos Research API key.
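
A sketch of browsing the file catalog with the same session setup; the type and region values are illustrative:

```python
from mcp import ClientSession

async def list_files(session: ClientSession) -> None:
    # Both "type" and "region" are optional; omitted, they default to
    # "stats" and "us_national" respectively.
    result = await session.call_tool(
        "altos_list_files",
        {"type": "pendings", "region": "ca_los-angeles", "_altosKey": "YOUR_ALTOS_KEY"},
    )
    print(result.content)
```
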
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so description carries full burden. It describes the tool as listing/cataloging files, implying read-only behavior. No mention of pagination, rate limits, or what happens if region is invalid. Adequate but minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no filler. Every word is meaningful. Front-loaded with verb and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description could mention what is returned (e.g., list of file names). However, it is complete enough for a simple listing tool with well-named siblings. Slight lack of detail on return format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description does not add parameter details beyond schema. It mentions 'region' in the description but does not elaborate on format or defaults already in schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists available data files for a region, distinguishing it from siblings that focus on specific data (e.g., altos_active_listings). However, it could be more explicit that 'Browse' is a read-only operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when needing to discover downloadable files, but no explicit guidance on when to use this vs. siblings. Sibling names suggest this is a discovery step before using other tools, but this is not stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

altos_market_stats (A)

Get current market snapshot for a region (e.g., "San Francisco, CA"). Returns inventory count, new listings, median price, days on market, and market action index.

Parameters (JSON Schema)
- date (optional): Date (must be a Friday, YYYY-MM-DD). Defaults to most recent Friday.
- region (required): Region code (e.g., "us_national", "ca_los-angeles", "ca_94105").
- quartile (optional): Price quartile: "ALL", "FIRST", "SECOND", "THIRD", "FOURTH". Default: ALL.
- res_type (optional): Residential type filter: "single_family" or "multi_family". Default: single_family.
- _altosKey (required): Altos Research API key.
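
A sketch showing the two filter parameters, again assuming a connected session:

```python
from mcp import ClientSession

async def market_stats(session: ClientSession) -> None:
    # Snapshot for the top price quartile of multi-family homes; omitting
    # "date" uses the most recent Friday.
    result = await session.call_tool(
        "altos_market_stats",
        {
            "region": "ca_los-angeles",
            "quartile": "FOURTH",         # default is ALL
            "res_type": "multi_family",   # default is single_family
            "_altosKey": "YOUR_ALTOS_KEY",
        },
    )
    print(result.content)
```
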
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description indicates the tool returns aggregated market statistics, which implies a read operation. No annotations are provided, so the description carries the full burden. It does not mention rate limits, authentication beyond the API key, or any behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that front-loads the tool's purpose and lists key metrics. It is concise and has no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has no output schema and the input schema fully documents parameters, the description adequately explains the return value (aggregated market statistics with specific metrics). It covers the essential information for an agent to use the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already has 100% description coverage for all 5 parameters. The description adds no extra meaning beyond what the schema provides. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and identifies the resource ('aggregated market statistics for a region') and the exact metrics provided (inventory, new listings, median price, days on market, market action index). This clearly distinguishes it from sibling tools like altos_active_listings and altos_new_listings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description states what the tool does but does not specify when to use it versus the other Altos tools. It does not provide explicit guidance on prerequisites or use cases beyond the metrics listed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

altos_new_listings (A)

Get freshly listed properties (under one week on market) for a region (e.g., "Boston, MA"). Returns address, price, beds, baths, and listing date.

Parameters (JSON Schema)
- date (optional): Date (must be a Friday, YYYY-MM-DD). Defaults to most recent Friday.
- limit (optional): Max rows to return (default 100).
- region (required): Region code (e.g., "ca_los-angeles", "ca_94105").
- _altosKey (required): Altos Research API key.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so description carries the burden. It indicates the tool returns listings filtered by listing date (less than a week), which adds behavioral context. However, it does not disclose pagination, rate limits, or whether data is cached.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that efficiently conveys the purpose. No wasted words, and the key filter is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool is a simple listing retrieval with a filter, the description covers the core behavior. However, without an output schema, some details about the returned data format would be helpful. The tool is part of a suite, but context is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description does not add meaning beyond what the schema provides for parameters like 'date' or 'limit'. No additional semantic detail about 'region' format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get', the resource 'new listings', and the specific filter 'on market less than a week'. It distinguishes itself from sibling tools like 'altos_active_listings' and 'altos_pending_sales' by emphasizing freshness.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (for recently listed properties) but does not explicitly state when not to use it or mention alternatives among siblings. No guidance on region code format or date constraints beyond input schema.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

altos_pending_sales (A)

Find properties under contract in a region (e.g., "Miami, FL"). Returns address, price, beds, baths, and days pending.

Parameters (JSON Schema)
- date (optional): Date (must be a Friday, YYYY-MM-DD). Defaults to most recent Friday.
- limit (optional): Max rows to return (default 100).
- region (required): Region code (e.g., "ca_los-angeles", "ca_94105").
- _altosKey (required): Altos Research API key.
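
altos_new_listings and altos_pending_sales take the same parameters as altos_active_listings, so one argument dict serves both; a sketch (session and key placeholders as in the earlier examples):

```python
from mcp import ClientSession

async def fresh_and_pending(session: ClientSession, region: str = "ca_94105") -> None:
    # Identical argument shape; only the tool name differs.
    args = {"region": region, "limit": 50, "_altosKey": "YOUR_ALTOS_KEY"}
    new = await session.call_tool("altos_new_listings", args)
    pending = await session.call_tool("altos_pending_sales", args)
    print(new.content, pending.content)
```
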
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It discloses the status of properties (under contract) but does not mention behavioral traits like whether it is read-only, rate limits, or authentication requirements beyond the API key.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that concisely conveys the tool's purpose with no superfluous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no output schema), the description is mostly complete. It explains what the tool returns but does not mention pagination, default behavior for missing date, or how region codes are formatted.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, so baseline is 3. The description adds no additional meaning beyond the schema, so no higher score is warranted.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves pending sales for a region, defines 'under contract' as having accepted offers but not yet closed, and distinguishes it from sibling tools like altos_active_listings (active listings) and altos_new_listings (new listings).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly guides usage by specifying it is for properties under contract, but does not explicitly state when not to use it or mention alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ask_pipeworx (A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
- question (required): Your question or request in natural language.
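
Since the tool takes a single free-text parameter, a call is one line of arguments; a sketch with an illustrative question, assuming a connected session:

```python
from mcp import ClientSession

async def ask(session: ClientSession) -> None:
    # Pipeworx routes the question to a concrete data tool and fills its arguments.
    result = await session.call_tool(
        "ask_pipeworx",
        {"question": "How has housing inventory in Denver changed this year?"},
    )
    print(result.content)
```
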
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description explains that the tool dynamically selects the best data source and fills arguments, which is key behavioral info beyond the schema. No annotations are provided, so the description bears the full burden, and it does so adequately by indicating the autonomous decision-making nature.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (3 sentences) and front-loaded with the core purpose. Examples are helpful but add length. Could be slightly tighter by removing 'No need to browse tools or learn schemas' as it's implied, but overall efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple input (one string parameter), no output schema, and no annotations, the description is nearly complete. It explains what the tool does, how it works, and provides examples. Lacks mention of response format or potential limitations, but still adequate for a straightforward tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema coverage is 100%, so the baseline is 3. The description adds value by explaining that the 'question' parameter should be a natural language request and gives concrete examples, making the parameter's usage clearer than the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: answering plain English questions by selecting the best data source and returning results. It provides specific examples that illustrate the breadth of possible queries, distinguishing it from sibling tools like 'discover_tools' which likely list available tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description tells users when to use this tool (for any natural language question) and emphasizes not needing to browse tools or learn schemas. However, it does not explicitly mention when not to use it or provide alternatives for specific sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
- limit (optional): Maximum number of tools to return (default 20, max 50).
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries").
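
A sketch of narrowing a large catalog before picking a tool (query text is illustrative; session as before):

```python
from mcp import ClientSession

async def discover(session: ClientSession) -> None:
    # Returns the most relevant tool names and descriptions for the query.
    result = await session.call_tool(
        "discover_tools",
        {"query": "analyze housing market trends", "limit": 5},  # limit defaults to 20
    )
    print(result.content)
```
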
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states that the tool returns 'the most relevant tools with names and descriptions', which is transparent about output. However, it does not disclose details like whether it uses semantic search or how ranking works, which could be useful but not critical.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no filler. First sentence states purpose, second describes return value, third gives explicit when-to-use instruction. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 params, no output schema, no nested objects), the description fully covers purpose, usage, and parameter semantics. No gaps are present.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by explaining that query is a 'Natural language description' and gives examples, and that limit controls result count with defaults. This goes beyond the schema's generic descriptions, earning a 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Search' and the resource 'Pipeworx tool catalog', and distinguishes the tool as the one to call FIRST when needing to find tools among 500+ options. This effectively differentiates it from sibling tools which are specific data tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly instructs to 'Call this FIRST when you have 500+ tools available and need to find the right ones', providing clear when-to-use guidance. It also implies not to use this for executing specific data tasks, as sibling tools like altos_* and ask_pipeworx handle those.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (A)

Delete a stored memory by key.

Parameters (JSON Schema)
- key (required): Memory key to delete.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description states it deletes a memory, which implies mutation. No annotations provided, so description carries burden. Could mention irreversibility or side effects, but the single action is straightforward.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One sentence, no fluff. Front-loaded with action and object.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple delete operation with one param and no output schema, the description is nearly complete. Could mention if deletion is permanent or requires confirmation, but not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and parameter description ('Memory key to delete') is clear. Description adds no additional context beyond schema, so baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the action ('Delete'), resource ('stored memory'), and identifier ('by key'). It distinguishes from siblings like 'recall' (retrieval) and 'remember' (storage).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage: use when you need to delete a memory. No explicit guidance on when not to use or comparison with alternatives, but given simplicity, it's adequate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys).
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the tool's behavior (retrieve by key or list all) but doesn't mention any side effects, persistence details, or scope (session vs. cross-session), though it hints at cross-session with 'previous sessions'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. Front-loaded with the primary action and clearly explains the optional behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one optional parameter, no output schema), the description is nearly complete. It could mention the return format or that memory is persistent, but this is minor.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single optional parameter. The description adds meaning beyond the schema by explaining that omitting the key lists all memories, which is not clear from the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a memory by key or lists all memories when key is omitted. It distinguishes itself from siblings like 'remember' (store) and 'forget' (delete).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to omit the key (to list all memories) and mentions retrieving context saved earlier, but doesn't explicitly state when not to use it or point to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference").
- value (required): Value to store (any text — findings, addresses, preferences, notes).
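
The three memory tools (remember, recall, forget) compose into a simple round trip; a sketch assuming a connected session as in the earlier examples:

```python
from mcp import ClientSession

async def memory_round_trip(session: ClientSession) -> None:
    # Store a value, read it back, list all keys, then delete it.
    await session.call_tool(
        "remember",
        {"key": "subject_property", "value": "123 Main St, Denver, CO"},
    )
    stored = await session.call_tool("recall", {"key": "subject_property"})
    print(stored.content)
    all_keys = await session.call_tool("recall", {})  # omit key to list all keys
    print(all_keys.content)
    await session.call_tool("forget", {"key": "subject_property"})
```
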
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses persistence differences between authenticated (persistent) and anonymous (24-hour) sessions, which is useful. However, lacks details on idempotency, overwrite behavior, or any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states purpose, second gives usage guidance. No wasted words. Front-loaded with action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Simple tool with only two required parameters and no output schema. Description is sufficient for an agent to understand when to use it (saving context) and the retention policy. Lacks info on key naming conventions beyond examples, but not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds no additional meaning beyond the schema; key and value are self-explanatory. No need for extra elaboration.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Store a key-value pair in your session memory' with a specific verb and resource. Differentiates from sibling tools like 'recall' (retrieve) and 'forget' (delete) by focusing on writing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use this to save intermediate findings, user preferences, or context across tool calls.' Provides clear usage context but does not explicitly say when not to use it or mention alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
