Server Details

Housing Intel MCP — Meta-pack that chains FRED, BLS, ATTOM, and HUD APIs

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-housing-intel
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.9/5 across 11 of 11 tools scored.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes: the housing tools clearly target affordability, employment, market snapshots, property reports, rental analysis, and signal scanning. However, 'ask_pipeworx' overlaps with the rest by claiming to route queries to the best data source, so an agent could call it in place of a specialized tool like housing_market_snapshot.

Naming Consistency: 4/5

Housing tools follow a consistent 'housing_<topic>' pattern (e.g., housing_affordability_check, housing_market_snapshot). However, 'ask_pipeworx', 'discover_tools', 'forget', 'recall', and 'remember' deviate from this pattern, mixing verb phrases and generic terms.

Tool Count: 4/5

With 11 tools, the count is reasonable for a domain that combines housing market data, property analysis, and memory utilities. It is slightly above the typical sweet spot, but each tool seems justified. The generic tools (ask_pipeworx, discover_tools) add to the count without extending the housing domain.

Completeness: 4/5

The housing tools cover the key aspects: affordability, employment, market snapshot, property details, rental analysis, and signal scanning. Explicit tools for historical trends or market comparison are missing, but the signal scan and the ask_pipeworx meta-tool partially compensate. The memory utilities are a bonus.

Available Tools

11 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
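
For concreteness, a sketch of a call to this tool using the standard MCP tools/call JSON-RPC shape; the question value is one of the description's own examples:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
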
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly states that the tool picks the appropriate downstream tool and fills in its arguments, implying it makes decisions autonomously. It does not disclose limits (e.g., what happens if no data source can answer) or potential latency, but for a query tool the key behavior (auto-routing) is well described.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, each adding value: purpose, behavior, examples. It is front-loaded with the core action. The examples earn their place by illustrating scope. Minor improvement: could be slightly tighter (e.g., combine first two sentences), but overall efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single parameter, no output schema), the description is nearly complete. It covers purpose, usage, and input format. It could mention that the tool may invoke other tools (implicit from 'picks the right tool') but does not explain error handling or response format. For a query tool, this is acceptable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds significant meaning: it explains that the parameter 'question' should be a natural language request, not a structured query. It provides examples that illustrate acceptable input formats, going beyond the schema's minimal 'Your question or request in natural language'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a clear verb+resource pattern: 'Ask a question... get an answer from the best available data source.' It explains the tool's role as an intelligent router, which distinguishes it from siblings like discover_tools (which lists tools) or housing_* tools (which are domain-specific).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use: 'No need to browse tools or learn schemas — just describe what you need.' It gives concrete examples ('What is the US trade deficit with China?'), implicitly distinguishing from direct tool calls. The context signals show many sibling tools are housing-specific, so this tool is the general-purpose question-answering alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
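
A hypothetical payload (same tools/call envelope as in the ask_pipeworx example above); the query reuses one of the schema's own examples:

{
  "name": "discover_tools",
  "arguments": {
    "query": "analyze housing market trends",
    "limit": 10
  }
}
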
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'Returns the most relevant tools with names and descriptions', which is basic. However, it does not disclose search semantics (e.g., whether it uses semantic search or keyword matching), pagination behavior, or any side effects. A score of 3 is appropriate as it adds some value but lacks rich behavioral detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no wasted words. Each sentence serves a purpose: stating what the tool does, what it returns, and when to use it. Efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (search with 2 params, no output schema), the description is sufficiently complete. It explains purpose, usage, and behavior. Minor gap: it does not mention how results are ordered (relevance?) or what happens on no results, but overall adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both parameters already described in the schema. The description adds a natural language example for 'query' and mentions default/max for 'limit', which is helpful but does not add significant new meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Search the Pipeworx tool catalog' and 'returns the most relevant tools with names and descriptions', with a specific verb and resource. It distinguishes from siblings by being the only search tool for the tool catalog, while sibling tools are domain-specific housing tools or memory operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task', providing clear when-to-use guidance. Also implies it's for discovery before using other tools, differentiating from sibling tools which are for specific tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: A)

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
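
A hypothetical payload; the key is borrowed from the example keys in the remember tool's schema further down:

{
  "name": "forget",
  "arguments": {
    "key": "subject_property"
  }
}
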
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must carry the full burden. It states the action is deletion (destructive), which is clear. However, it doesn't disclose side effects (e.g., whether deletion is permanent, if cascading deletion occurs, or any authorization requirements).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single six-word sentence with no filler that immediately conveys the core action. It is front-loaded and earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 required param, no output schema, no annotations), the description is adequate but minimal. It lacks context on return value or error conditions, which could be helpful for a delete operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter 'key' described as 'Memory key to delete'. The description adds no further detail beyond the schema, but since coverage is high, the baseline is 3; the description is concise and aligned, earning a 4 for clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a clear verb ('Delete') and resource ('stored memory by key'), exactly matching the tool name 'forget'. It distinguishes from siblings like 'remember' (create) and 'recall' (retrieve).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use (when you want to delete a memory) but does not specify when not to use or provide alternatives. For example, no guidance on whether the key must exist or what happens if it doesn't.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

housing_affordability_check (Grade: A)

Check housing affordability in a market. Returns mortgage rate, median price, monthly payment, required income, and HUD limits. Optionally specify metro (e.g., "Denver").

Parameters (JSON Schema)
_hudKey (optional): HUD API token (needed for income limits)
_fredKey (required): FRED API key
zip_code (optional): ZIP code for more specific HUD data
metro_name (optional): Metro name for metro-level FHFA HPI (e.g., "Denver", "Savannah")
state_code (required): Two-letter state code for HUD income limits (e.g., "CO")
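
A hypothetical payload with placeholder API keys; the metro and state values come from the schema's own examples:

{
  "name": "housing_affordability_check",
  "arguments": {
    "_fredKey": "YOUR_FRED_API_KEY",
    "_hudKey": "YOUR_HUD_API_TOKEN",
    "state_code": "CO",
    "metro_name": "Denver"
  }
}
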
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries full burden. It transparently lists all data sources and conditions (national vs. metro-level, optional API keys). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description packs the enumeration of returned metrics into a single sentence, which is efficient, and it front-loads the purpose. It runs slightly long due to the enumeration but earns its length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the moderate complexity (5 parameters, no output schema), the description covers the key outputs and optional inputs. It could mention that _hudKey is optional, but that is already in the schema. Overall adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already describes each parameter. The description adds context by grouping outputs (e.g., 'metro-level FHFA HPI if metro_name provided') but does not add new meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Check') and lists concrete resources (mortgage rate, median home price, HPI, earnings, payment, income limits). It clearly distinguishes itself from sibling tools like housing_market_snapshot or housing_rental_analysis by enumerating the metrics covered.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by listing what metrics are returned and conditionally mentions metro_name for HPI. However, it does not explicitly state when not to use this tool or suggest alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

housing_employment_outlook (Grade: B)

Assess labor market health for housing demand. Returns employment, construction jobs, residential building employment, unemployment rate, and job openings.

Parameters (JSON Schema)
_fredKey (optional): FRED API key (accepted for consistency but not used — BLS is free)
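
Since the only parameter is accepted but unused, a hypothetical call can pass an empty arguments object:

{
  "name": "housing_employment_outlook",
  "arguments": {}
}
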
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It states the tool uses BLS data (no key needed), which is helpful. However, it does not disclose any limitations (e.g., data frequency, delay, or what happens if no data found). A neutral score is appropriate as it adds some context but misses behavioral specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with key information, and efficiently lists the indicators returned. No wasted words, though it could name the data source (BLS) directly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no output schema and is relatively simple, the description provides enough context for an agent to understand its inputs and data source. However, it lacks information on output format or how to interpret results, leaving some ambiguity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers the only parameter (_fredKey) with full description, so baseline is 3. The description adds context that FRED key is accepted but not used because BLS is free, which explains the parameter's presence and behavior. This is adequate but not exceptional.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides labor market indicators relevant to housing, listing specific metrics and the data source. It distinguishes from siblings like housing_market_snapshot (broader market data) and housing_signal_scan (signals), but could be more precise about its distinct purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies it should be used for obtaining labor market context for housing analysis, but does not explicitly state when to use it vs. alternatives. No exclusion criteria or sibling comparisons are provided, leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

housing_market_snapshot (Grade: A)

Get national housing market overview: mortgage rates, housing starts, Case-Shiller index, unemployment, construction employment. Optionally add metro-level prices (e.g., "Denver", "Atlanta").

Parameters (JSON Schema)
_fredKey (required): FRED API key (https://fred.stlouisfed.org/docs/api/api_key.html)
metro_name (optional): Metro area name for metro-level FHFA HPI (e.g., "Denver", "Atlanta"). Supports top 50 US metros. National data is always included.
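
A hypothetical payload with a placeholder key; the metro value comes from the schema's own examples:

{
  "name": "housing_market_snapshot",
  "arguments": {
    "_fredKey": "YOUR_FRED_API_KEY",
    "metro_name": "Atlanta"
  }
}
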
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool combines data from two sources (FRED and BLS) and notes a key difference in authentication key naming compared to the standalone attom pack. Since annotations are empty, the description carries full burden, and it provides useful behavioral context without contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately concise, front-loading the main purpose. It contains a few sentences that could be tightened (e.g., the note about _attomKey), but overall it efficiently conveys the key information without being verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 params, no output schema), the description covers the inputs well and explains the data sources. It lacks details on output format or return values, but without an output schema, the description is still fairly complete for an agent to understand what the tool does.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds value by explaining the effect of metro_name (triggers FHFA HPI) and that metro_name supports top 50 US metros. This goes beyond the schema's generic 'Metro area name' description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'national housing market snapshot', listing specific data points included. It distinguishes from siblings by mentioning that it combines FRED and BLS data, and contrasts with other tools like housing_affordability_check.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains that when metro_name is provided, additional metro-level HPI is included, and that national data is always included. It does not explicitly say when not to use this tool or name alternatives, but the context of sibling tools implies distinct use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

housing_property_report (Grade: A)

Analyze a property by address and zip code. Returns valuation estimate, sales history, tax assessment, and detailed characteristics.

Parameters (JSON Schema)
address1 (required): Street address (e.g., "4529 Winona Court")
address2 (required): City, state ZIP (e.g., "Denver, CO 80212")
_attomKey (required): ATTOM API key (https://api.gateway.attomdata.com)
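
A hypothetical payload with a placeholder key, reusing the address examples from the schema:

{
  "name": "housing_property_report",
  "arguments": {
    "address1": "4529 Winona Court",
    "address2": "Denver, CO 80212",
    "_attomKey": "YOUR_ATTOM_API_KEY"
  }
}
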
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries full burden. It discloses the meta-pack nature and key naming convention, but does not mention that it aggregates multiple API calls (performance implications), rate limits, or whether data is real-time vs cached. Annotations would have helped here.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long and front-loads the purpose. The note about _attomKey is a concise, valuable caveat. Could be slightly more concise by removing the example URL, but overall efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 3 required parameters, no output schema, and no annotations, the description adequately explains the tool's purpose and a critical usage detail. However, it lacks information about what the output contains, which would help agents decide if the response meets their needs. The schema covers all parameters, so the description meets minimum viability but has room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already has 100% coverage with descriptions for each parameter. The description adds no additional parameter semantics beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'complete property analysis combining ATTOM data' and lists specific data types (property details, AVM, sales history, tax assessment). This distinguishes it from siblings like housing_market_snapshot or housing_affordability_check, which focus on different aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when a comprehensive property report is needed, and the note about _attomKey vs _apiKey provides important usage context. However, it does not explicitly state when to use alternatives (e.g., if only a specific data type is needed) or when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

housing_rental_analysis (Grade: B)

Evaluate rental investment potential by address and zip code. Returns estimated rent, fair market rents, and CPI rent trends.

Parameters (JSON Schema)
_hudKey (optional): HUD API token (needed for fair market rents)
address1 (required): Street address (e.g., "4529 Winona Court")
address2 (required): City, state ZIP (e.g., "Denver, CO 80212")
_attomKey (required): ATTOM API key
state_code (required): Two-letter state code for HUD FMR lookup (e.g., "CO")
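
A hypothetical payload with placeholder keys, reusing the schema's example address and state:

{
  "name": "housing_rental_analysis",
  "arguments": {
    "address1": "4529 Winona Court",
    "address2": "Denver, CO 80212",
    "state_code": "CO",
    "_attomKey": "YOUR_ATTOM_API_KEY",
    "_hudKey": "YOUR_HUD_API_TOKEN"
  }
}
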
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must disclose behavioral traits. It correctly notes that the HUD key is optional and that ATTOM uses a different key parameter. However, it does not mention any side effects, rate limits, or whether the tool modifies data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences that convey the core functionality without redundancy. It is front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the moderate complexity (5 parameters, no output schema), the description covers the main data sources but omits details like return format, error conditions, or typical response structure. The schema is well-documented, but the description could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already describes all parameters. The description adds value by explaining the purpose of _hudKey (optional) and _attomKey (for ATTOM), and by noting that state_code is used for HUD FMR lookup. This matches the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool provides rental market analysis including estimated rent, fair market rents, and CPI rent trends, clearly identifying the data sources (ATTOM, HUD, BLS). However, it does not differentiate itself from siblings like housing_market_snapshot or housing_affordability_check, which might overlap in purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions that HUD data requires a key and that ATTOM uses a specific parameter (_attomKey), but provides no guidance on when to use this tool vs. alternatives. There is no explicit when-not-to-use or comparison to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

housing_signal_scan (Grade: A)

Scan 45+ housing indicators for anomalies and reversals. Flags unusual moves across rates, starts, sales, prices, wages, unemployment, and rent.

Parameters (JSON Schema)
_fredKey (required): FRED API key
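
A hypothetical payload; the key value is a placeholder:

{
  "name": "housing_signal_scan",
  "arguments": {
    "_fredKey": "YOUR_FRED_API_KEY"
  }
}
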
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so description carries full burden. It discloses that the tool checks 45+ indicators and returns flagged anomalies, which is moderately transparent. However, it doesn't mention latency, rate limits, or what happens on API failure (e.g., if _fredKey is invalid). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded with the core purpose. It lists covered indicators efficiently. One minor issue: the list of indicators could be slightly shortened or referenced, but overall it's well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only one required parameter and no output schema, the description adequately explains the scope (45+ indicators, categories) and the output nature ('returns flagged anomalies'). It is complete enough for an agent to decide to invoke it for anomaly detection in housing data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter _fredKey, which is described as 'FRED API key'. The description adds no further parameter info, so baseline 3 applies. The description mentions coverage of indicators but does not elaborate on parameter usage or formats.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool does a 'comprehensive housing market signal scan' covering 45+ indicators, lists specific categories, and says it 'returns flagged anomalies'. This is specific verb+resource, and it distinguishes itself from sibling tools like housing_market_snapshot which likely provide a snapshot without anomaly detection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for detecting market signals or anomalies, but provides no explicit guidance on when to use this vs. alternatives like housing_affordability_check or housing_market_snapshot. No exclusions or when-not-to-use are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
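
Two hypothetical payloads: one retrieving a specific key (borrowed from the remember tool's schema examples), one omitting key to list everything stored:

{
  "name": "recall",
  "arguments": { "key": "subject_property" }
}

{
  "name": "recall",
  "arguments": {}
}
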
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It correctly presents the tool as a retrieval operation (no side effects implied). It lacks details on behavior when a key is not found, or on performance with many memories. Minimal but sufficient for a simple read tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, clear and front-loaded with the main action. Every sentence adds value: the first explains the functionality, the second gives usage context. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a simple tool with one optional parameter, the description covers the core behavior. It could mention the return format (e.g., string or object), but that is not essential. Adequate for an agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with 'key' described in the schema. The description adds nuance: 'omit to list all keys' clarifies behavior. It offers no extra semantics beyond the schema, but that is adequate since the schema already documents the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that it retrieves a memory by key or lists all memories, with the verb 'Retrieve' and the resource 'stored memory'. It distinguishes itself from the siblings 'remember' (store) and 'forget' (delete). It is slightly less precise because it doesn't specify whether 'omit key' means an empty or an absent key.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It states when to use the tool: 'to retrieve context you saved earlier'. It implicitly rules out storing (use 'remember') and deleting (use 'forget'). It could be more explicit about alternatives, but the context signals show clear siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
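
A hypothetical payload built entirely from the schema's own examples:

{
  "name": "remember",
  "arguments": {
    "key": "subject_property",
    "value": "4529 Winona Court, Denver, CO 80212"
  }
}
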
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the burden. It discloses persistence behavior (persistent for authenticated users vs. 24 hours for anonymous sessions) but does not mention side effects, storage limits, overwrite behavior, or privacy implications. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with purpose, then usage context. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple key-value storage with no output schema and no nested objects, the description covers the essential aspects: what, why, and the persistence nuance. There are minor gaps around storage limits and overwrite behavior, but it is complete for this tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, with descriptions for both parameters. The description adds context about what to store (findings, preferences, notes) but does not add significant meaning beyond the schema. The baseline of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb ('Store'), resource ('key-value pair in session memory'), and purpose ('save intermediate findings, user preferences, or context across tool calls'). Distinguishes from siblings like 'forget' and 'recall' by explicitly mentioning memory storage.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It describes when to use the tool (to save context across calls) and mentions persistence behavior for authenticated vs. anonymous sessions. It does not explicitly name alternatives or state when not to use it, but the context is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
