Attom
Server Details
ATTOM MCP — Premium real estate data from ATTOM Data Solutions
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-attom
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 13 of 13 tools scored. Lowest: 2.9/5.
Most tools have clear, distinct purposes (property detail, sales history, school search, etc.). However, 'ask_pipeworx' overlaps with all other tools by design, which can confuse the agent about when to use it versus a specific tool.
Tools follow a consistent 'attom_' prefix for domain tools, but 'ask_pipeworx', 'discover_tools', 'forget', 'recall', 'remember' break the pattern. The mix of attom_* and generic memory/query tools reduces consistency.
13 tools is reasonable for a property data and memory-augmented assistant. The memory tools add value, though 'ask_pipeworx' and 'discover_tools' could be considered auxiliary. Still well-scoped.
Covers property detail, assessment, valuation, sales, trends, and schools. It lacks property listing data, tax history deeper than a single assessment, and neighborhood statistics. Gaps exist, but the core is present.
Available Tools
13 tools

ask_pipeworx (A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes the high-level behavior: selects a tool, fills the arguments, returns the result. With no annotations, the description carries the full burden. It could clarify whether it calls external APIs or uses local data, and any limitations (e.g., timeouts, data freshness). Currently adequate but not detailed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each serving a purpose: states purpose, explains mechanism, provides examples. Could be more concise by merging first two sentences, but still efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a single parameter with full schema coverage and no output schema, the description explains enough to use the tool. However, it lacks information on response format and error handling, which could be important for an AI agent. Completeness is adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the parameter 'question' is described as 'Your question or request in natural language'. The description adds no new semantic detail beyond that. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool answers natural language questions by selecting the best data source and filling arguments. Distinguishes from siblings like 'discover_tools' and data-specific tools by offering a unified query interface.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage guidance: use natural language, no need to browse tools or learn schemas. Includes examples. Does not mention when not to use or specific alternatives, but context signals show no sibling with similar purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
attom_assessment (A)
Check property tax assessment details. Returns assessed value, market value, tax amount, tax year, and historical trends.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | ATTOM API key | |
| address1 | Yes | Street address (e.g., "123 Main St") | |
| address2 | Yes | City, state ZIP (e.g., "Denver, CO 80202") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must convey behavioral traits. It states the tool retrieves data (read-only) and lists what's returned. However, it does not disclose any limitations, prerequisites (beyond API key), rate limits, or data freshness. With no annotations, a 3 is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences: the first states the core purpose, the second lists the returned fields. Concise, front-loaded, no superfluous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description covers the core purpose and return data points. However, it lacks details on error handling, data coverage, or typical use cases. For a simple retrieval tool, it is mostly adequate but could be more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description does not add meaning beyond the schema (e.g., address format). The tool has 3 parameters, all documented in the schema, and the description does not elaborate on them.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks property tax assessment details, listing specific data points (assessed value, market value, tax amount, tax year, historical trends). It uses a specific verb ("Check") and resource ("property tax assessment details"), distinguishing it from siblings like attom_property_detail or attom_sales_history.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (property tax queries) but provides no explicit guidance on when to use this tool versus siblings (e.g., attom_property_detail for broader property info). No alternative tools or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
attom_avm (A)
Estimate property market value. Returns estimated value, confidence score, and low/high range for valuation analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | ATTOM API key | |
| address1 | Yes | Street address (e.g., "123 Main St") | |
| address2 | Yes | City, state ZIP (e.g., "Denver, CO 80202") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the output (value, confidence, range) but does not disclose side effects, rate limits, or authentication requirements beyond the API key parameter. Since the schema already surfaces the API key requirement, this is not a gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, front-loaded with verb and resource, no wasted words. Clearly communicates purpose and outputs.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description provides key outputs but lacks details on edge cases, error conditions, or property identification requirements. Adequate for a simple valuation tool with clear parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all parameters. Description does not add additional parameter semantics beyond what schema provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Estimate property market value' and enumerates the outputs: estimated value, confidence score, and low/high range. It distinguishes itself from sibling tools like 'attom_rental_avm' and 'attom_assessment' by specifying the valuation focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when needing an AVM estimate for a property, but does not explicitly state when not to use it or compare with alternatives like attom_rental_avm for rental properties or attom_assessment for tax assessment.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
attom_property_detail (A)
Get detailed property specs by address. Returns lot size, square footage, bedrooms, bathrooms, year built, construction type, and heating/cooling systems.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | ATTOM API key | |
| address1 | Yes | Street address (e.g., "123 Main St") | |
| address2 | Yes | City, state ZIP (e.g., "Denver, CO 80202") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It clearly states that it retrieves data (a read operation) and lists the data categories, but does not disclose rate limits, API key usage (though the schema indicates it), or any side effects. It is accurate and consistent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences that efficiently list the key property characteristics. It is front-loaded with the action and resource, and every phrase adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple property detail lookup with good schema coverage, the description adequately explains what the tool returns. However, it could mention the output format (e.g., JSON) or clarify that the address parameters must be exact. Slight room for improvement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already describes each parameter. The description adds no additional semantics beyond summarizing what the tool returns, which matches the baseline for full coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('detailed property specs by address'), listing concrete attributes (lot size, square footage, etc.). It clearly distinguishes itself from siblings like attom_property_search (which searches by location) and attom_assessment (which focuses on assessment data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when you need property details by address, but does not explicitly state when not to use it (e.g., for sales history use attom_sales_history) or provide alternatives. No guidance on prerequisites or context beyond the address parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
attom_property_search (B)
Search properties by location using postal code (e.g., '10001') or latitude/longitude with radius. Returns matching addresses and property IDs.
| Name | Required | Description | Default |
|---|---|---|---|
| radius | No | Search radius in miles (use with latitude/longitude) | |
| _apiKey | Yes | ATTOM API key | |
| maxBeds | No | Maximum number of bedrooms | |
| minBeds | No | Minimum number of bedrooms | |
| latitude | No | Latitude for radius search (use with longitude and radius) | |
| longitude | No | Longitude for radius search (use with latitude and radius) | |
| postalCode | No | ZIP/postal code to search in | |
| maxYearBuilt | No | Maximum year built | |
| minYearBuilt | No | Minimum year built | |
| propertyType | No | Property type filter (e.g., "SFR", "CONDO", "APARTMENT") | |
| maxBathsTotal | No | Maximum total bathrooms | |
| minBathsTotal | No | Minimum total bathrooms | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool searches by location and can filter by various criteria, but does not mention side effects, authentication needs beyond the API key parameter, or rate limits. It is a query tool, so behavioral transparency is adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with a clear purpose, front-loaded with the search methods and returned data. It could briefly note which API it queries, but it is efficient as written.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description provides the core functionality but lacks detail on how multiple filters combine, whether all are optional except API key, and what the output looks like (no output schema). Given 12 parameters, it is minimally complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds minimal value beyond the schema: it explains the primary search methods (postal code or lat/lon+radius) but does not clarify the relationship between parameters (e.g., that postalCode excludes lat/lon/radius).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool searches properties by location with optional filters, and gives examples of location types (postal code or lat/lon + radius). It distinguishes from sibling tools like attom_property_detail which is for a single property, but does not explicitly name those alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use: when you need to search properties by location with optional filters. However, it does not explicitly state when not to use it or mention alternatives like attom_property_detail for a single property search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
attom_rental_avm (A)
Estimate rental property income. Returns estimated monthly rent, rental yield percentage, and rental value range.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | ATTOM API key | |
| address1 | Yes | Street address (e.g., "123 Main St") | |
| address2 | Yes | City, state ZIP (e.g., "Denver, CO 80202") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure. It describes the output (estimated monthly rent, rental yield, rental value range) but does not disclose potential behaviors such as whether the tool is read-only, any prerequisites (e.g., property must exist in ATTOM database), or error conditions. The lack of annotation increases the need for such detail, but the description is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences that clearly convey the purpose and key outputs. Every word is meaningful and there is no redundancy. It is appropriately concise and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has three simple parameters, 100% schema coverage, and no output schema, the description is adequate but not thorough. It names the outputs but does not explain the format or possible variations. Since there is no output schema, the description could be more explicit about return values. Still, it covers the essential purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description does not add additional meaning beyond what the schema provides; it only lists the output types. The schema already explains address1 and address2 clearly. No extra parameter semantics are offered.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: estimating rental income, with specific outputs (estimated monthly rent, rental yield, rental value range). It uses a specific verb ('Estimate') and resource ('rental property income'), and effectively distinguishes the tool from siblings like 'attom_avm' (sale value) and 'attom_assessment' (tax assessment).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (for rental property valuation) but provides no explicit guidance on when not to use it or alternatives. While siblings are listed, the description does not mention them or contrast with this tool, so an agent would need to infer usage context from the sibling names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
attom_sales_history (A)
Get past sales for a property. Returns sale dates, prices, deed types, and buyer/seller details from recent transactions.
| Name | Required | Description | Default |
|---|---|---|---|
| _apiKey | Yes | ATTOM API key | |
| address1 | Yes | Street address (e.g., "123 Main St") | |
| address2 | Yes | City, state ZIP (e.g., "Denver, CO 80202") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It discloses the data scope (recent transactions) and the included fields, but does not mention rate limits, pagination, or what happens if no data exists. The tool is read-only, but no explicit statement of non-destructiveness is made.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences that efficiently convey purpose, scope, and key data fields. Every word adds value, and it is front-loaded with the main action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters (all required, simple strings), no output schema, and no annotations, the description provides a reasonable overview. However, it omits details like pagination, error behavior, and output format, which could be important for an agent using the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are fully described in the schema. The description adds no extra parameter-level meaning beyond what the schema already provides, but it contextualizes the parameters as inputs for a sales history query.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves past sales for a property, listing the included data fields (sale dates, prices, deed types, buyer/seller details) from recent transactions. This distinguishes it from sibling tools like attom_assessment (assessments) and attom_property_detail (general property info).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for historical sales data but does not explicitly state when to use this tool versus alternatives like attom_sales_trend (which likely aggregates trends). No when-not-to-use guidance or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
attom_sales_trend (A)
Analyze market sales trends by ZIP code. Returns average/median sale price, sales volume, and price changes over time.
| Name | Required | Description | Default |
|---|---|---|---|
| geoid | Yes | ZIP code prefixed with "ZI" (e.g., "ZI80202") | |
| _apiKey | Yes | ATTOM API key | |
| endYear | Yes | End year (e.g., "2024") | |
| interval | Yes | Time interval: monthly, quarterly, or yearly | |
| startYear | Yes | Start year (e.g., "2020") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It states it returns trends over time with specified metrics, which is useful. However, it does not mention data freshness, pagination, rate limits, or any side effects. Since the tool is a read operation, the absence of destructive warnings is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences (18 words in total), efficient and front-loaded with the main purpose. It could gain a little more structure, but it is clear and concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 required parameters and no output schema, the description is adequate but not complete. It explains what the tool returns (trends, metrics) but does not detail the output format or whether additional filtering is possible. The tool is simple enough, so a 3 is reasonable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds context that the tool returns trend data over time, but does not explain parameter semantics beyond what the schema provides. For example, it doesn't clarify how 'interval' affects the output granularity or how 'startYear' and 'endYear' define the date range.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool analyzes 'market sales trends by ZIP code' and specifies the metrics: average/median sale price, volume, and price changes over time. It uses a specific verb ('Analyze') and resource, differentiating it from sibling tools like attom_sales_history (which returns raw sales records) and attom_assessment (which covers property assessment data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for ZIP code market trend analysis but does not explicitly state when to use this tool versus alternatives like attom_sales_history or attom_avm. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
attom_school_search (B)
Find schools near a location. Returns school name, type (public/private), grade levels, distance, and performance rankings.
| Name | Required | Description | Default |
|---|---|---|---|
| radius | No | Search radius in miles (default 5, max 20) | |
| _apiKey | Yes | ATTOM API key | |
| latitude | Yes | Latitude of the search center | |
| longitude | Yes | Longitude of the search center | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the burden of behavioral disclosure. It implies a read-only search operation, which is consistent with typical school search tools. However, it does not disclose rate limits, data freshness, or what happens if no schools are found. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences that front-load the core purpose and enumerate the returned fields. Every word contributes value, with no redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters, no output schema, and no annotations, the description is adequate for a simple search tool. It lists the returned fields but does not explain the response format or what happens when no schools are found. For a search tool with no output schema, this is a gap, but not a severe one.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all 4 parameters. The description lists returned attributes (name, type, etc.), but these are outputs, not parameters; it adds no new semantic meaning beyond what the schema provides. Baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Find') and the resource ('schools near a location'), listing the returned attributes (name, type, grade levels, distance, rankings). It distinguishes itself from sibling tools like attom_property_search or attom_assessment, which focus on property data, by explicitly mentioning schools and education-related fields.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as attom_property_detail for property-specific data or other location-based tools. It does not mention prerequisites, limitations, or exclusion criteria. The agent must infer context from the sibling list alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description states it returns 'most relevant tools with names and descriptions' but doesn't explain details like whether it uses vector search or keyword matching. With no annotations provided, the description carries the full burden; it gives a high-level overview but lacks depth on behavior beyond basic return structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise: three sentences, each purposeful. First sentence defines action, second explains output, third gives usage context. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers purpose, usage, and output format ('names and descriptions'). It's complete enough for a simple search tool. Could mention pagination or error cases but not strictly necessary for core function.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already provides detailed descriptions for both parameters (query: 'Natural language description...', limit: 'Maximum number of tools to return...'). The description adds value by reinforcing that the query is natural language; the score reflects that it adds little beyond a schema that is already strong.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb ('Search'), resource ('Pipeworx tool catalog'), and purpose ('finding right tools'). Explicitly distinguishes from siblings by positioning as a discovery/disambiguation tool to be called first when many tools exist.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear when-to-use guidance and implies it's a prerequisite before other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It does not mention side effects (e.g., irreversible deletion), error behavior for missing keys, or return value. The description is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words. It front-loads the action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no annotations), the description is minimal and omits context about error handling, permanence, and confirmation of deletion.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add meaning beyond the schema's description of 'key' as a 'Memory key to delete'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (delete) and the resource (stored memory) and specifies the identifier (key). It effectively distinguishes from sibling tools like 'recall' and 'remember'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks guidance on when to use this tool versus alternatives. There is no mention of prerequisites or consequences, such as whether the key must exist or if deletion is permanent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It describes the core behavior (retrieve by key or list all) but does not disclose side effects, permissions, or limitations like whether memories persist across sessions. The description is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading the primary action and then clarifying the optional behavior. Every sentence is necessary and no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single optional parameter, no output schema), the description is nearly complete. It covers both retrieval and listing modes. The only missing aspect is a hint about the return format, but that is acceptable without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single optional parameter 'key'. The description adds context beyond the schema by explaining the behavior when key is omitted (list all), which is not in the schema's parameter description. This adds meaningful value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a memory by key or lists all memories when key is omitted. It specifies the verb 'retrieve' and resource 'memory', distinguishing it from sibling tools like 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates when to use (to retrieve context saved earlier) and how to use (omit key to list all). While it doesn't explicitly state when not to use or provide alternatives, the context is clear enough for an AI agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses memory persistence behavior: authenticated users get persistent memory, anonymous sessions last 24 hours. Since annotations are absent, the description carries the burden and does well, though it does not mention storage limits or data overwriting behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, no wasted words. Each adds distinct value: what the tool does, when to use it, and persistence behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and simple key-value storage, the description adequately explains behavior and usage. It does not need to document return values since no output schema exists, but it could mention whether the tool reports success or errors.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description's note that the value can be any text (findings, addresses, etc.) largely mirrors the schema's own wording ('Value to store (any text — findings, addresses, preferences, notes)'), so it adds little new parameter semantics, though it does reinforce the purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Store a key-value pair in your session memory'. The verb 'store' and resource 'key-value pair' are specific. Distinct from siblings like 'forget' and 'recall', which serve complementary purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to use this tool for saving intermediate findings, user preferences, or context across tool calls. However, it does not explicitly state when not to use it or suggest alternatives for similar tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.