ReadyPermit Property Intelligence

Server Details

Property intelligence MCP: zoning, buildability, flood zones & environmental risk for U.S. addresses

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.5/5 across 7 of 7 tools scored.

Server Coherence: A

Disambiguation: 4/5

Tools are mostly distinct with clear use-case separation. The comprehensive 'analyze_property' overlaps with specific tools (lookup_zoning, check_flood_zone, check_environmental_risks) in domain coverage, but the descriptions effectively distinguish between 'full reports' versus 'specific deep-dives' and include explicit USE WHEN clauses to guide selection.

Naming Consistency: 5/5

Excellent consistent snake_case formatting throughout with clear verb_noun patterns. Verbs are semantically precise: 'analyze' for comprehensive reports, 'check' for risk verification, 'calculate' for mathematical operations, 'lookup' for record retrieval, and 'search' for finding multiple items.

Tool Count: 5/5

Seven tools represents an ideal scope for property intelligence work. The set includes comprehensive analysis, quick screening, three specialized data lookups (zoning, flood, environmental), a calculation utility, and market comps—each earning its place without redundancy.

Completeness: 4/5

Strong coverage of the property due diligence workflow, addressing regulatory (zoning, flood), physical (environmental, buildable envelope), and financial (comps) dimensions. Minor gaps might include permit history or utility availability checks, but core buildability assessment is well-covered.

Available Tools (7 tools)
analyze_property: A
Read-only, Idempotent

Get complete property intelligence for any U.S. address — zoning, buildability, flood risk, environmental hazards, and lot data from 20+ government sources (FEMA, EPA, USGS, Census, Regrid). USE WHEN: user asks 'what can I build', 'is this property buildable', 'analyze this address', 'run a report on', 'can I build an ADU', 'tell me about this property', 'is this a good deal', or provides any U.S. street address. RETURNS: Buildability™ Score (0-100), zoning code, permitted uses, FEMA flood zone, setbacks, FAR, environmental risks, lot size, structure info, owner, and AI recommendation. Takes ~20 seconds. Replaces $2,000-$4,500 zoning consultant work.

Parameters (JSON Schema)
- address (required): Full U.S. property address (e.g., '123 Main St, Denver, CO 80202'). Accepts any format — will be geocoded.
- persona (optional): User persona for tailored analysis. Changes tone and emphasis: investor=deal metrics, developer=feasibility, homeowner=plain English, lender=collateral risk, broker=disclosure items.
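As a concrete illustration, an invocation of this tool over MCP's JSON-RPC wire format might look like the following sketch. Transport details (the Streamable HTTP session, headers) are omitted, and the argument values are purely illustrative:

```python
import json

# A "tools/call" request as defined by the MCP specification.
# Only the payload is shown; session setup is out of scope here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_property",
        "arguments": {
            "address": "123 Main St, Denver, CO 80202",  # required
            "persona": "investor",                       # optional
        },
    },
}
print(json.dumps(request, indent=2))
```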
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true. The description adds valuable operational context beyond annotations: explicit timing ('Takes ~20 seconds'), data provenance ('FEMA, EPA, USGS, Census, Regrid'), and return value structure ('Buildability™ Score', 'AI recommendation'). Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: purpose → triggers → returns → timing → value proposition. Information-dense with minimal waste, though the consultant cost comparison ('Replaces $2,000-$4,500') is slightly marketing-oriented. Front-loaded with the core value proposition in the first sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description comprehensively documents return values ('Buildability™ Score (0-100), zoning code, permitted uses, FEMA flood zone...'), includes performance expectations ('~20 seconds'), and explains persona-driven behavior changes. Sufficient for a complex multi-source aggregation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents both 'address' (with format examples) and 'persona' (with enum values explained). The description references addresses in the USE WHEN section but does not add semantic meaning to parameters beyond what the schema already provides. Baseline 3 is appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Get') and resource ('complete property intelligence'), clearly defining scope ('any U.S. address') and distinguishing from siblings like check_flood_zone or lookup_zoning by emphasizing comprehensiveness ('20+ government sources'). It explicitly covers zoning, buildability, flood risk, and environmental hazards.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains an explicit 'USE WHEN' section listing specific trigger phrases ('what can I build', 'is this property buildable', 'run a report on', etc.) that signal when to select this tool over narrower siblings like calculate_buildable_envelope or check_environmental_risks. Also clarifies input trigger ('provides any U.S. street address').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_buildable_envelope: A
Read-only, Idempotent

Calculate the maximum buildable area (building envelope) for a lot given zoning constraints. USE WHEN: user asks 'how much can I build', 'max square footage', 'what's the buildable area', 'calculate the envelope', 'how big can my house be', or has specific lot dimensions and zoning rules they want to model. RETURNS: max buildable square feet, max number of stories, envelope dimensions (length × width × height), usable footprint, and coverage math. Takes lot area, setbacks, FAR, height limit, and coverage as inputs — a pure calculation tool, does not query data.

Parameters (JSON Schema)
- lot_area_sqft (required): Total lot area in square feet
- front_setback_ft (optional): Front setback in feet (distance from front property line)
- side_setback_ft (optional): Side setback in feet (each side)
- rear_setback_ft (optional): Rear setback in feet
- max_far (optional): Maximum floor area ratio (e.g., 0.5, 2.0). FAR = building area / lot area.
- max_height_ft (optional): Maximum building height in feet
- max_lot_coverage (optional): Maximum lot coverage as decimal (e.g., 0.45 for 45%)
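The envelope math this tool's description outlines can be sketched as below. This is an illustrative reconstruction, not the server's actual formula: it assumes a square lot (to derive side lengths from area alone) and a nominal 10 ft per story, both of which are invented simplifications.

```python
import math

def buildable_envelope(lot_area_sqft, front_setback_ft=0.0, rear_setback_ft=0.0,
                       side_setback_ft=0.0, max_far=None, max_height_ft=None,
                       max_lot_coverage=None, ft_per_story=10.0):
    # Assume a square lot so dimensions can be derived from area alone.
    side = math.sqrt(lot_area_sqft)
    depth = max(side - front_setback_ft - rear_setback_ft, 0.0)
    width = max(side - 2 * side_setback_ft, 0.0)
    footprint = depth * width
    # Lot coverage caps the footprint independently of setbacks.
    if max_lot_coverage is not None:
        footprint = min(footprint, lot_area_sqft * max_lot_coverage)
    # Height limit bounds the story count (assumed ft_per_story per floor).
    stories = 1
    if max_height_ft is not None:
        stories = max(int(max_height_ft // ft_per_story), 1)
    # FAR caps total floor area relative to lot area.
    buildable = footprint * stories
    if max_far is not None:
        buildable = min(buildable, lot_area_sqft * max_far)
    return {"footprint_sqft": round(footprint, 1),
            "max_stories": stories,
            "max_buildable_sqft": round(buildable, 1)}
```

For a 10,000 sqft lot with 20 ft front/rear setbacks, 5 ft side setbacks, FAR 0.5, a 30 ft height limit, and 45% coverage, the coverage cap (4,500 sqft) binds the footprint and the FAR cap (5,000 sqft) binds the total buildable area.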
Behavior: 4/5

Annotations cover safety (readOnlyHint, destructiveHint), so the description adds value by disclosing return structure (max square feet, stories, envelope dimensions) and clarifying the tool's nature as a local calculation engine without external data dependencies. No contradictions with annotations.

Conciseness: 5/5

Excellent structure with clear functional headers (USE WHEN, RETURNS) and a concluding input summary. Every sentence serves a distinct purpose—defining scope, triggering conditions, output format, and input requirements. No redundant or filler content despite length.

Completeness: 5/5

Given the lack of output schema, the description compensates by explicitly documenting return values (dimensions, coverage math). It adequately covers the 7-parameter input space and clarifies the tool's relationship to sibling query tools, providing sufficient context for agent selection and invocation.

Parameters: 3/5

With 100% schema description coverage, the schema carries the parameter documentation burden. The description lists input categories ('lot area, setbacks, FAR, height limit, and coverage') which map to parameters but do not add semantic depth beyond the schema's technical definitions (e.g., it does not explain the interaction between setbacks and coverage).

Purpose: 5/5

The description opens with a specific verb (Calculate) + resource (maximum buildable area/building envelope) + context (zoning constraints). It explicitly distinguishes from siblings by stating it is a 'pure calculation tool, does not query data'—contrasting with lookup_zoning or search_comparable_sales.

Usage Guidelines: 5/5

Contains an explicit 'USE WHEN' section listing specific user query patterns ('how much can I build', 'max square footage', etc.). Implicitly directs users to alternatives by clarifying it 'does not query data,' suggesting lookup_zoning for code lookups, and notes that required inputs (lot dimensions, zoning rules) must be provided.

check_environmental_risks: A
Read-only, Idempotent

Run full environmental risk screening for a property using EPA, USGS, NOAA, and USDA data. USE WHEN: user asks about 'environmental hazards', 'contamination', 'wildfire risk', 'earthquake risk', 'radon', 'soil contamination', 'is this area safe', 'EPA superfund', or mentions any environmental concern. RETURNS: wildfire hazard zone, seismic risk zone, EPA contamination site proximity (Superfund, RCRA, brownfield), radon zone level, soil concerns, and combined risk score. Accepts a street address OR coordinates.

Parameters (JSON Schema)
- address (optional): Full U.S. property address (preferred — gives best results)
- latitude (optional): Property latitude (use if no address available)
- longitude (optional): Property longitude (use if no address available)
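The address-OR-coordinates contract above suggests a simple client-side pre-flight check before invoking the tool. This helper is an assumption for illustration; only its parameter names mirror the schema:

```python
def validate_location(address=None, latitude=None, longitude=None):
    """Enforce the input contract: an address (preferred), or else
    both coordinates. Purely illustrative client-side logic."""
    if address:
        return {"address": address}  # preferred — gives best results
    if latitude is not None and longitude is not None:
        return {"latitude": latitude, "longitude": longitude}
    raise ValueError("Provide an address, or both latitude and longitude.")
```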
Behavior: 4/5

Annotations cover safety profile (readOnly, idempotent), so the description adds value by disclosing external data sources (EPA, USGS, NOAA, USDA) and detailing return values (wildfire zone, seismic risk, etc.) since no output schema exists. Could improve by mentioning geographic limitations (US-only) or data freshness.

Conciseness: 5/5

Efficiently structured with clear section headers (USE WHEN, RETURNS) and zero waste. Every sentence conveys essential information: purpose, trigger conditions, output specification, and prerequisites. Appropriate length for the complexity.

Completeness: 4/5

Given the lack of output schema, the description compensates effectively with a detailed RETURNS clause listing all risk categories and the combined score. Prerequisites are documented. Minor gap: does not mention that data is US-specific or potential latency from external agency queries.

Parameters: 4/5

Schema has 100% description coverage (baseline 3), but the description elevates this by explaining the coordinate requirement workflow—specifically instructing to use analyze_property or lookup_zoning first if only an address is available, adding crucial semantic context for parameter acquisition.

Purpose: 5/5

The description uses a specific verb+resource structure ('Run full environmental risk screening for a property') and explicitly names the data sources (EPA, USGS, NOAA, USDA) and risk types (wildfire, seismic, Superfund, radon) that distinguish it from siblings like check_flood_zone (flood-only) and analyze_property (general property data).

Usage Guidelines: 5/5

Contains an explicit 'USE WHEN' section listing trigger phrases ('environmental hazards', 'EPA superfund', etc.) and explicitly states prerequisites: 'Requires coordinates — use analyze_property or lookup_zoning first to get them if you only have an address,' providing clear workflow guidance for sibling tool orchestration.

check_flood_zone: A
Read-only, Idempotent

Check FEMA National Flood Hazard Layer for any U.S. property. USE WHEN: user asks 'is this in a flood zone', 'do I need flood insurance', 'is this property flood-safe', 'FEMA flood map', 'is this in a 100-year flood plain', or mentions flood risk. RETURNS: FEMA zone code (X = low risk, A/AE = 100-year, V/VE = coastal high risk), flood insurance requirement (mandatory/optional), base flood elevation if applicable, and annual flood risk probability. Uses the official FEMA API.

Parameters (JSON Schema)
- address (required): Full U.S. property address
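The zone-code semantics in the RETURNS clause can be captured in a small lookup. Only the codes named in the description are mapped here; FEMA defines additional zones (e.g., D, AO, AH) that this sketch deliberately ignores:

```python
# (risk label, flood insurance typically mandatory) per zone code,
# mirroring the description: X = low risk, A/AE = 100-year,
# V/VE = coastal high risk.
FLOOD_ZONE_RISK = {
    "X":  ("low risk", False),
    "A":  ("100-year", True),
    "AE": ("100-year", True),
    "V":  ("coastal high risk", True),
    "VE": ("coastal high risk", True),
}

def classify_flood_zone(code):
    risk, insurance_required = FLOOD_ZONE_RISK.get(
        code.upper(), ("unknown", False))
    return {"zone": code.upper(), "risk": risk,
            "insurance_required": insurance_required}
```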
Behavior: 4/5

Annotations cover read-only/idempotent/destructive hints. The description adds critical context: 'Uses the official FEMA API' (confirms an external dependency), and it explains return value semantics (X=low risk, A/AE=100-year, V/VE=coastal) which annotations don't provide. No contradictions with annotations.

Conciseness: 5/5

Perfectly structured with clear section headers (USE WHEN, RETURNS). Four sentences each earning their place: purpose, trigger conditions, return value documentation, and data source authority. No redundancy or verbosity.

Completeness: 5/5

For a single-parameter lookup tool with no output schema, the description comprehensively compensates by detailing exact return fields (zone codes, insurance requirements, base flood elevation, annual risk probability). Covers all necessary context for an agent to use this effectively.

Parameters: 3/5

With 100% schema description coverage for the single 'address' parameter, the schema carries the semantic burden. The description mentions 'U.S. property', aligning with the schema's 'Full U.S. property address', but adds no additional syntax guidance or format examples beyond the schema.

Purpose: 5/5

Opens with the specific verb 'Check' + exact resource 'FEMA National Flood Hazard Layer' + scope 'any U.S. property'. Clearly distinguishes from sibling 'check_environmental_risks' by focusing specifically on FEMA flood data rather than general environmental risks.

Usage Guidelines: 4/5

An excellent 'USE WHEN:' section lists six specific query patterns (e.g., 'is this in a flood zone', 'do I need flood insurance') that trigger this tool. Lacks explicit mention of when NOT to use it or naming of sibling alternatives like 'check_environmental_risks' for broader environmental queries.

get_buildability_score: A
Read-only, Idempotent

Get a quick Buildability™ Score (0-100) for a property without running the full analysis. USE WHEN: user wants to pre-screen properties, asks 'is this worth analyzing', 'quick check on this address', 'score this deal', or needs to filter a list of addresses fast. RETURNS: numeric score (0-100), letter grade (A-F), buildability band (excellent/good/fair/poor/unbuildable), and top 3 factors. Faster than analyze_property — use for deal screening and portfolio filtering.

Parameters (JSON Schema)
- address (required): Full U.S. property address
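A hypothetical mapping from the 0-100 score to the letter grade and band the description lists. The server's real cutoffs are not published, so every threshold below is invented purely for illustration:

```python
# (minimum score, letter grade, buildability band) — invented cutoffs,
# ordered from best to worst so the first match wins.
BANDS = [
    (80, "A", "excellent"),
    (65, "B", "good"),
    (50, "C", "fair"),
    (35, "D", "poor"),
    (0,  "F", "unbuildable"),
]

def score_to_band(score):
    for cutoff, grade, band in BANDS:
        if score >= cutoff:
            return grade, band
    return "F", "unbuildable"  # guard for out-of-range input
```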
Behavior: 4/5

Annotations cover safety profile (readOnly, idempotent, non-destructive). The description adds crucial behavioral context: performance characteristics ('quick', 'faster than analyze_property') and, critically, details of the return structure (score, letter grade, buildability band, top 3 factors), which compensates for the missing output schema. Does not mention rate limits or caching behavior, preventing a 5.

Conciseness: 5/5

Excellent structure with clear section markers ('USE WHEN:', 'RETURNS:'). Every sentence delivers value: purpose, usage triggers, output specification, and performance comparison. No filler or redundancy despite containing multiple distinct information types.

Completeness: 5/5

Comprehensive for a tool lacking an output schema. The description fully specifies the return values (numeric ranges, enum values for grades/bands, count of factors). Sibling differentiation is explicit. Combined with strong annotations covering safety and idempotency, no critical information is missing.

Parameters: 3/5

With 100% schema description coverage ('Full U.S. property address'), the baseline is 3. The description mentions 'this address' in the usage examples, providing minimal additional context, but does not need to compensate for schema gaps. No additional parameter semantics (format examples, validation rules) are provided beyond the schema.

Purpose: 5/5

The description opens with a specific verb ('Get') and resource ('Buildability™ Score'), explicitly scopes it to 'quick' screening 'without running the full analysis', and differentiates from sibling 'analyze_property' by stating it's 'Faster than analyze_property'. Clearly distinguishes this as the lightweight pre-screening option.

Usage Guidelines: 5/5

Contains an explicit 'USE WHEN:' section listing specific user intents ('pre-screen properties', 'is this worth analyzing', 'quick check', 'filter a list of addresses fast'). Explicitly names the alternative tool ('analyze_property') and states when to prefer this one ('deal screening and portfolio filtering').

lookup_zoning: A
Read-only, Idempotent

Look up the zoning designation and development standards for any U.S. property address. USE WHEN: user asks 'what zone is this', 'what's the zoning', 'what can I build on this lot', 'is this residential or commercial', 'can I build a duplex here', or any question about permitted uses. RETURNS: zone code (e.g., R-1, C-2, MU-3), zone description, permitted uses list, conditional uses, overlay districts, max FAR, max height, setbacks (front/side/rear), lot coverage, and parking requirements.

Parameters (JSON Schema)
- address (required): Full U.S. property address
Behavior: 4/5

Annotations cover the safety profile (readOnly, idempotent, non-destructive), but the description adds valuable behavioral context by detailing the specific data returned in the 'RETURNS:' section (zone codes, FAR, setbacks, parking requirements). This compensates for the missing output schema. Minor gap: it doesn't mention coverage limitations or the external data dependencies implied by openWorldHint.

Conciseness: 4/5

Well-structured with three distinct sections (purpose, usage triggers, return values) that front-load critical information. The enumerated examples in 'USE WHEN' and 'RETURNS' are slightly verbose but serve a functional purpose for pattern matching. No filler content present.

Completeness: 5/5

Given the absence of an output schema, the description comprehensively documents return values, including specific zoning fields (FAR, setbacks, lot coverage) that would otherwise be unknown. Combined with clear annotations and a single well-documented parameter, this provides complete contextual information for invocation.

Parameters: 3/5

With 100% schema description coverage ('Full U.S. property address'), the baseline is 3. The description reinforces this by specifying 'any U.S. property address' but doesn't add syntax details, format examples, or constraints beyond what the schema already provides.

Purpose: 5/5

The description opens with a specific verb ('Look up') and clear resource ('zoning designation and development standards'), explicitly defining the tool's scope. It further distinguishes itself from siblings like check_flood_zone or search_comparable_sales by focusing on zoning codes, permitted uses, and development standards rather than environmental risks or sales data.

Usage Guidelines: 5/5

Contains an explicit 'USE WHEN:' section that lists specific query patterns ('what zone is this', 'what can I build', 'can I build a duplex here'), providing clear signals for when to select this tool over alternatives like analyze_property or get_buildability_score. This is exemplary guidance that leaves little ambiguity for the agent.

search_comparable_sales: A
Read-only

Find recent comparable property sales and rental comps near a property. USE WHEN: user asks 'what are comps in this area', 'recent sales near here', 'what did similar houses sell for', 'price per square foot', 'market value estimate', 'rental comps', or needs comparable sales data. RETURNS: subject property AVM, list of recent sales with price, sqft, price/sqft, distance, beds/baths, and rental comps with rent amounts. Also includes local market stats. Useful for investor deal evaluation, CMA, and market analysis.

Parameters (JSON Schema)
- address (optional): Full U.S. property address (preferred — gives best results)
- latitude (optional): Center latitude for search (use if no address available)
- longitude (optional): Center longitude for search (use if no address available)
- radius_miles (optional): Search radius in miles (default: 0.5, max: 5)
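A client might clamp the search radius to the schema's documented bounds (default 0.5 miles, maximum 5) before calling. This is a sketch under that assumption, not server-side behavior:

```python
def normalize_radius(radius_miles=None, default=0.5, max_radius=5.0):
    """Apply the schema's default when the radius is omitted, and
    clamp supplied values into the documented [0, 5] mile range."""
    if radius_miles is None:
        return default
    return min(max(float(radius_miles), 0.0), max_radius)
```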
Behavior: 4/5

Annotations declare readOnlyHint=true and openWorldHint=true. The description adds critical behavioral details: the 0.5-mile default radius and a comprehensive return value structure (price, sqft, price/sqft, distance), compensating for the missing output schema. No contradictions with annotations.

Conciseness: 5/5

Four distinct sections (purpose, USE WHEN, RETURNS, use case) with zero filler. Front-loaded with an action verb. Every sentence provides unique signal not redundant with structured fields. Efficient use of space.

Completeness: 5/5

No output schema exists, but the description fully documents return values (list structure and all fields). Combined with annotations and 100% parameter coverage, it provides a complete picture for a 4-parameter real estate tool. No gaps remain.

Parameters: 4/5

Schema has 100% description coverage (baseline 3). The description adds value by specifying the default 0.5-mile radius behavior, which clarifies the optional radius_miles parameter's semantics beyond the schema's 'default: 0.5' notation.

Purpose: 5/5

The description opens with the specific verb 'Find' + resource 'comparable property sales' + scope 'near a property'. It distinguishes itself from siblings (e.g., analyze_property, lookup_zoning) by focusing specifically on comparable sales data and market comparables.

Usage Guidelines: 4/5

Provides an explicit 'USE WHEN' section with six specific trigger phrases (e.g., 'what are comps in this area', 'price per square foot'). Includes use case context ('investor deal evaluation'). Lacks explicit 'when not to use' guidance or named sibling alternatives, preventing a 5.
