ReadyPermit — Property Zoning & Buildability Intelligence
Server Details
Zoning, ADU eligibility, flood zone, setbacks, and buildability intelligence for U.S. parcels.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 7 of 7 tools scored.
Most tools have distinct purposes, but there is some overlap between analyze_property and get_buildability_score, as both provide buildability scores and can be used for property assessment. The descriptions clarify that get_buildability_score is a faster, screening-focused tool, which helps reduce confusion, but an agent might still hesitate between them for certain queries.
All tool names follow a consistent verb_noun pattern (e.g., analyze_property, calculate_buildable_envelope, check_environmental_risks), with clear and descriptive naming. There are no deviations in style or convention, making the set highly predictable and easy to navigate.
With 7 tools, the count is well-scoped for the server's purpose of property zoning and buildability intelligence. Each tool addresses a specific aspect of property analysis, such as zoning, environmental risks, flood zones, and comparable sales, ensuring comprehensive coverage without being overwhelming.
The tool set provides complete coverage for property intelligence, including analysis, zoning lookup, environmental and flood risk checks, buildability calculations, and market comps. There are no obvious gaps; agents can handle a wide range of queries from initial screening to detailed reports without dead ends.
Available Tools
7 tools

analyze_property (A) · Read-only · Idempotent
Get complete property intelligence for any U.S. address — zoning, buildability, flood risk, environmental hazards, and lot data from 20+ government sources (FEMA, EPA, USGS, Census, Regrid). USE WHEN: user asks 'what can I build', 'is this property buildable', 'analyze this address', 'run a report on', 'can I build an ADU', 'tell me about this property', 'is this a good deal', or provides any U.S. street address. RETURNS: Buildability™ Score (0-100), zoning code, permitted uses, FEMA flood zone, setbacks, FAR, environmental risks, lot size, structure info, owner, and AI recommendation. Takes ~20 seconds. Replaces $2,000-$4,500 zoning consultant work.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Full U.S. property address (e.g., '123 Main St, Denver, CO 80202'). Accepts any format — will be geocoded. | |
| persona | No | User persona for tailored analysis (optional). Changes tone and emphasis: investor=deal metrics, developer=feasibility, homeowner=plain English, lender=collateral risk, broker=disclosure items. | |
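Under the MCP `tools/call` convention, a request to this tool might look like the sketch below. The JSON-RPC envelope follows the MCP spec; the address and persona values are purely illustrative.

```python
import json

# Sketch of an MCP tools/call request for analyze_property.
# Address and persona are illustrative values, not real data.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_property",
        "arguments": {
            "address": "123 Main St, Denver, CO 80202",
            "persona": "investor",  # optional: tailors tone and emphasis
        },
    },
}
payload = json.dumps(request)
```

A client would send this payload over the Streamable HTTP transport and read the tool result from the JSON-RPC response.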
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies a processing time ('Takes ~20 seconds'), mentions data sources ('20+ government sources'), and notes cost savings. Annotations cover safety (readOnlyHint, non-destructive), but the description enriches this with practical details without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose, followed by usage guidelines and returns. It uses bullet-like lists for clarity but includes some marketing language ('Buildability™ Score', cost comparison) that slightly reduces efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity and lack of output schema, the description provides a comprehensive overview of returns (e.g., Buildability Score, zoning code, AI recommendation) and context (data sources, processing time). It adequately compensates for missing structured output details, though more on error handling could improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already documents both parameters well. The description does not add meaning beyond the schema; it focuses on overall tool function rather than parameter details. This meets the baseline of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get complete property intelligence') and resources ('any U.S. address'), listing detailed outputs like zoning, flood risk, and environmental hazards. It distinguishes from siblings by emphasizing comprehensive analysis versus more specific tools like 'check_flood_zone' or 'lookup_zoning'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage guidelines with a 'USE WHEN:' section listing specific user queries (e.g., 'what can I build', 'analyze this address') and scenarios. It implies alternatives by noting it 'Replaces $2,000-$4,500 zoning consultant work,' suggesting it's a comprehensive option over piecemeal tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calculate_buildable_envelope (A) · Read-only · Idempotent
Calculate the maximum buildable area (building envelope) for a lot given zoning constraints. USE WHEN: user asks 'how much can I build', 'max square footage', 'what's the buildable area', 'calculate the envelope', 'how big can my house be', or has specific lot dimensions and zoning rules they want to model. RETURNS: max buildable square feet, max number of stories, envelope dimensions (length × width × height), usable footprint, and coverage math. Takes lot area, setbacks, FAR, height limit, and coverage as inputs — a pure calculation tool, does not query data.
| Name | Required | Description | Default |
|---|---|---|---|
| max_far | No | Maximum floor area ratio (e.g., 0.5, 2.0). FAR = building area / lot area. | |
| lot_area_sqft | Yes | Total lot area in square feet | |
| max_height_ft | No | Maximum building height in feet | |
| rear_setback_ft | No | Rear setback in feet | |
| side_setback_ft | No | Side setback in feet (each side) | |
| front_setback_ft | No | Front setback in feet (distance from front property line) | |
| max_lot_coverage | No | Maximum lot coverage as decimal (e.g., 0.45 for 45%) | |
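The coverage math the description mentions can be sketched as follows. This is an illustrative reimplementation, not the server's code: it assumes a square lot (to derive dimensions from area alone) and a hypothetical 10 ft per story.

```python
import math

def buildable_envelope(lot_area_sqft, front_setback_ft=0.0, rear_setback_ft=0.0,
                       side_setback_ft=0.0, max_far=None, max_lot_coverage=None,
                       max_height_ft=None, ft_per_story=10.0):
    """Illustrative envelope math; assumes a square lot and 10 ft per story."""
    side = math.sqrt(lot_area_sqft)           # derive lot dimensions from area
    depth = max(side - front_setback_ft - rear_setback_ft, 0.0)
    width = max(side - 2 * side_setback_ft, 0.0)
    footprint = depth * width                 # usable footprint after setbacks
    if max_lot_coverage is not None:          # coverage cap, e.g. 0.45 = 45%
        footprint = min(footprint, lot_area_sqft * max_lot_coverage)
    stories = 1
    if max_height_ft is not None:
        stories = max(int(max_height_ft // ft_per_story), 1)
    buildable = footprint * stories
    if max_far is not None:                   # FAR cap: building area / lot area
        buildable = min(buildable, lot_area_sqft * max_far)
    return {"footprint_sqft": footprint, "stories": stories,
            "max_buildable_sqft": buildable}
```

For example, a 10,000 sqft lot with 20/10/5 ft setbacks, 45% coverage, a 30 ft height limit, and FAR 0.5 yields a 4,500 sqft footprint over 3 stories, FAR-capped at 5,000 buildable sqft.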
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations. While annotations already indicate read-only, non-destructive, and idempotent operations, the description clarifies it's a 'pure calculation tool' that 'does not query data,' which helps the agent understand its computational nature. However, it doesn't mention potential computational limits or error conditions that might be relevant.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear sections (purpose, usage guidelines, returns, inputs, behavioral note). Every sentence adds value: the first states the core function, the 'USE WHEN:' provides actionable triggers, 'RETURNS:' outlines outputs, and the final sentence clarifies behavioral constraints. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, calculation-focused), the description provides good context: clear purpose, usage triggers, output details, and behavioral notes. With annotations covering safety (read-only, non-destructive) and no output schema, the description appropriately focuses on when to use it and what it returns. A minor gap is lack of error handling or validation details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents all 7 parameters. The description mentions the parameters generically ('Takes lot area, setbacks, FAR, height limit, and coverage as inputs') but doesn't add specific meaning or usage guidance beyond what's in the schema descriptions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('calculate the maximum buildable area') and resources ('for a lot given zoning constraints'). It distinguishes itself from siblings by specifying it's a 'pure calculation tool, does not query data' unlike data lookup tools like 'lookup_zoning' or 'check_flood_zone'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines with a 'USE WHEN:' section listing specific user queries ('how much can I build', 'max square footage', etc.) and scenarios ('has specific lot dimensions and zoning rules they want to model'). It also implicitly distinguishes from alternatives by stating it doesn't query data, unlike sibling tools that might provide zoning or environmental data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_environmental_risks (A) · Read-only · Idempotent
Run full environmental risk screening for a property using EPA, USGS, NOAA, and USDA data. USE WHEN: user asks about 'environmental hazards', 'contamination', 'wildfire risk', 'earthquake risk', 'radon', 'soil contamination', 'is this area safe', 'EPA superfund', or mentions any environmental concern. RETURNS: wildfire hazard zone, seismic risk zone, EPA contamination site proximity (Superfund, RCRA, brownfield), radon zone level, soil concerns, and combined risk score. Accepts a street address OR coordinates.
| Name | Required | Description | Default |
|---|---|---|---|
| address | No | Full U.S. property address (preferred — gives best results) | |
| latitude | No | Property latitude (use if no address available) | |
| longitude | No | Property longitude (use if no address available) | |
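The address-or-coordinates choice described above can be encoded in a small helper; this is an illustrative client-side sketch, not part of the server.

```python
def environmental_risk_args(address=None, latitude=None, longitude=None):
    """Build arguments for check_environmental_risks: address is preferred,
    coordinates are the fallback. Illustrative helper only."""
    if address:
        return {"address": address}
    if latitude is not None and longitude is not None:
        return {"latitude": latitude, "longitude": longitude}
    raise ValueError("provide an address, or both latitude and longitude")
```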
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations: it specifies the data sources (EPA, USGS, NOAA, USDA), the types of risks returned (wildfire, seismic, contamination, radon, soil), and that it accepts either address or coordinates. However, it doesn't mention rate limits, authentication needs, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with purpose, usage guidelines, returns, and input acceptance in separate clauses. It's front-loaded with the core purpose. Some redundancy exists (e.g., listing data sources and risk types could be slightly condensed), but overall it's efficient and informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multiple data sources, various risk types) and lack of output schema, the description does a good job explaining what it returns (wildfire hazard zone, seismic risk, etc.). However, it doesn't detail the format of the 'combined risk score' or potential limitations (e.g., data availability, accuracy). With annotations covering safety and idempotency, it's mostly complete but has minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for address, latitude, and longitude. The description adds minimal semantic value beyond the schema, only noting 'Accepts a street address OR coordinates,' which is already implied by the schema. Baseline 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Run full environmental risk screening for a property using EPA, USGS, NOAA, and USDA data.' It specifies the verb ('Run full environmental risk screening'), resource ('property'), and data sources, distinguishing it from sibling tools like 'check_flood_zone' or 'analyze_property' by focusing on comprehensive environmental hazards.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes an explicit 'USE WHEN:' section listing specific user queries (e.g., 'environmental hazards', 'wildfire risk', 'EPA superfund'), providing clear guidance on when to invoke this tool. It also distinguishes from siblings by focusing on environmental risks rather than flood zones, zoning, or sales analysis.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_flood_zone (A) · Read-only · Idempotent
Check FEMA National Flood Hazard Layer for any U.S. property. USE WHEN: user asks 'is this in a flood zone', 'do I need flood insurance', 'is this property flood-safe', 'FEMA flood map', 'is this in a 100-year flood plain', or mentions flood risk. RETURNS: FEMA zone code (X = low risk, A/AE = 100-year, V/VE = coastal high risk), flood insurance requirement (mandatory/optional), base flood elevation if applicable, and annual flood risk probability. Uses the official FEMA API.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Full U.S. property address | |
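The zone-to-insurance mapping the description spells out can be sketched like this. Only the codes the description names are included; the real FEMA NFHL defines more zones (AH, AO, AR, A99, ...), so treat this as a simplified illustration.

```python
# Zone codes named in the tool description; simplified relative to the
# full FEMA National Flood Hazard Layer code list.
HIGH_RISK_ZONES = {"A", "AE", "V", "VE"}

def flood_insurance_mandatory(zone_code):
    """True when the FEMA zone implies mandatory flood insurance
    (per the description: A/AE = 100-year, V/VE = coastal high risk)."""
    return zone_code.upper() in HIGH_RISK_ZONES
```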
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations: it specifies the data source ('official FEMA API'), details return values (FEMA zone codes, insurance requirements, base flood elevation, risk probability), and implies external API usage. Annotations cover safety (readOnlyHint=true, destructiveHint=false) and idempotency, but the description enriches this with practical behavioral insights without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, usage guidelines, returns, source) and avoids redundancy. However, it could be slightly more concise by integrating the 'USE WHEN' examples more seamlessly, but overall it is efficient and front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity and lack of output schema, the description comprehensively covers purpose, usage, return values, and data source. Annotations provide safety and idempotency hints, and the description fills gaps by detailing outputs and context, making it complete for effective agent use without needing an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not add parameter-specific details beyond the input schema, which has 100% coverage and clearly documents the 'address' parameter. The baseline score of 3 is appropriate as the schema fully handles parameter semantics, and the description's focus is on usage and output rather than input elaboration.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the action ('Check FEMA National Flood Hazard Layer') and the target resource ('any U.S. property'), making the purpose clear and specific. It distinguishes itself from sibling tools like 'check_environmental_risks' by focusing solely on flood zone data from FEMA.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage triggers with a 'USE WHEN:' section listing specific user queries (e.g., 'is this in a flood zone', 'do I need flood insurance'), which clearly indicates when to use this tool. It implicitly distinguishes from alternatives by not overlapping with sibling tools like 'analyze_property' or 'check_environmental_risks' in scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_buildability_score (A) · Read-only · Idempotent
Get a quick Buildability™ Score (0-100) for a property without running the full analysis. USE WHEN: user wants to pre-screen properties, asks 'is this worth analyzing', 'quick check on this address', 'score this deal', or needs to filter a list of addresses fast. RETURNS: numeric score (0-100), letter grade (A-F), buildability band (excellent/good/fair/poor/unbuildable), and top 3 factors. Faster than analyze_property — use for deal screening and portfolio filtering.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Full U.S. property address | |
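The score/grade/band triple the tool returns suggests a threshold mapping like the sketch below. The cutoffs here are hypothetical — the server does not publish its thresholds — but the grade letters and band names come from the description.

```python
def score_to_grade(score):
    """Map a 0-100 score to (letter grade, band). Cutoffs are hypothetical;
    grades A-F and band names come from the tool description."""
    bands = [(90, "A", "excellent"), (80, "B", "good"),
             (70, "C", "fair"), (60, "D", "poor")]
    for cutoff, grade, band in bands:
        if score >= cutoff:
            return grade, band
    return "F", "unbuildable"
```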
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key behavioral traits (readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: true), but the description adds valuable context beyond annotations: it specifies the tool is 'faster than analyze_property' and suitable for 'quick screening', which helps the agent understand performance characteristics and appropriate use cases not captured in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and concise, with three focused sentences that each serve a distinct purpose: stating the tool's purpose, providing usage guidelines with examples, and clarifying returns and comparison to siblings. There is no wasted text, and key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no output schema), the description is complete: it explains the purpose, usage guidelines, behavioral context (speed), and return values (score, grade, band, factors). With annotations covering safety and idempotency, and the description adding performance and use-case details, it provides sufficient context for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'address' clearly documented as 'Full U.S. property address'. The description does not add any additional parameter semantics beyond what the schema provides, so it meets the baseline score of 3 for high schema coverage without compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get a quick Buildability™ Score (0-100) for a property without running the full analysis.' It specifies the verb ('Get'), resource ('Buildability™ Score'), and scope ('quick' vs 'full analysis'), and distinguishes it from sibling tools like analyze_property by emphasizing speed and screening use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines with a dedicated 'USE WHEN:' section listing scenarios (e.g., 'user wants to pre-screen properties', 'quick check on this address'), and explicitly contrasts it with analyze_property ('Faster than analyze_property — use for deal screening and portfolio filtering'), clearly indicating when to use this tool versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lookup_zoning (A) · Read-only · Idempotent
Look up the zoning designation and development standards for any U.S. property address. USE WHEN: user asks 'what zone is this', 'what's the zoning', 'what can I build on this lot', 'is this residential or commercial', 'can I build a duplex here', or any question about permitted uses. RETURNS: zone code (e.g., R-1, C-2, MU-3), zone description, permitted uses list, conditional uses, overlay districts, max FAR, max height, setbacks (front/side/rear), lot coverage, and parking requirements.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Full U.S. property address | |
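An agent consuming the permitted-uses and conditional-uses lists might classify a proposed use as in the sketch below. This is illustrative only; real zoning determinations also involve overlay districts, variances, and local interpretation.

```python
def classify_use(proposed_use, permitted_uses, conditional_uses):
    """Classify a proposed use against a lookup_zoning result.
    Illustrative helper; not the server's logic."""
    use = proposed_use.lower()
    if use in (u.lower() for u in permitted_uses):
        return "permitted"
    if use in (u.lower() for u in conditional_uses):
        return "conditional"
    return "not permitted"
```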
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond this: it specifies the geographic scope ('U.S. property address') and details the comprehensive return data (e.g., zone code, permitted uses, setbacks, parking requirements), which helps the agent understand the tool's output behavior and limitations. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by usage guidelines and return details. Every sentence adds value: the first states the purpose, the second provides usage examples, and the third enumerates return data. There is no wasted text, making it efficient for an agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (zoning lookup with multiple return aspects), the description is complete: it explains the purpose, usage context, and detailed return values. Annotations cover safety and idempotency, and while there's no output schema, the description thoroughly lists return data (e.g., zone code, permitted uses, setbacks), compensating adequately for the lack of structured output documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'address' documented as 'Full U.S. property address.' The description doesn't add further parameter details beyond what's in the schema, but it reinforces the scope ('U.S. property address') and implies the tool's functionality relies on this single input. Baseline score of 3 is appropriate as the schema adequately covers the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'Look up the zoning designation and development standards for any U.S. property address.' It specifies the verb ('look up'), resource ('zoning designation and development standards'), and scope ('any U.S. property address'), clearly distinguishing it from sibling tools like 'check_flood_zone' or 'search_comparable_sales' that focus on different property aspects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes an explicit 'USE WHEN' section with multiple example queries (e.g., 'what zone is this', 'what can I build on this lot'), providing clear guidance on when to use this tool. It implicitly distinguishes from alternatives by focusing on zoning-specific questions, though it doesn't explicitly name sibling tools as alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_comparable_sales (A) · Read-only
Find recent comparable property sales and rental comps near a property. USE WHEN: user asks 'what are comps in this area', 'recent sales near here', 'what did similar houses sell for', 'price per square foot', 'market value estimate', 'rental comps', or needs comparable sales data. RETURNS: subject property AVM, list of recent sales with price, sqft, price/sqft, distance, beds/baths, and rental comps with rent amounts. Also includes local market stats. Useful for investor deal evaluation, CMA, and market analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| address | No | Full U.S. property address (preferred — gives best results) | |
| latitude | No | Center latitude for search (use if no address available) | |
| longitude | No | Center longitude for search (use if no address available) | |
| radius_miles | No | Search radius in miles (max: 5) | 0.5 |
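The documented radius default and ceiling, plus a typical price-per-square-foot aggregation over comps, can be sketched as follows. The comp record shape used here is hypothetical — the tool's actual output schema is not published.

```python
import statistics

def clamp_radius(radius_miles=None):
    """Apply the documented default (0.5 mi) and ceiling (5 mi)."""
    r = 0.5 if radius_miles is None else radius_miles
    return min(max(r, 0.0), 5.0)

def median_price_per_sqft(comps):
    """Median $/sqft across comps shaped like {'price': ..., 'sqft': ...}.
    Hypothetical record shape; the tool's output schema is not published."""
    return statistics.median(c["price"] / c["sqft"] for c in comps)
```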
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds behavioral context beyond annotations by specifying what the tool returns (e.g., 'subject property AVM, list of recent sales with price, sqft, price/sqft, distance, beds/baths, and rental comps with rent amounts, local market stats'), which is valuable since annotations only indicate it's read-only, non-destructive, open-world, and non-idempotent. No contradictions with annotations are present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and concise, with front-loaded purpose, followed by usage guidelines and return details in a single paragraph. Every sentence adds value without redundancy, making it efficient for an AI agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (market data search with multiple outputs) and rich annotations, the description is mostly complete, detailing purpose, usage, and returns. However, without an output schema, it could benefit from more specifics on data formats or limitations, but it adequately covers core functionality for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not add meaning beyond the input schema, which has 100% coverage with clear descriptions for all parameters (address, latitude, longitude, radius_miles). Since the schema fully documents parameters, the baseline score of 3 is appropriate, as the description focuses on usage and outputs instead.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('find recent comparable property sales and rental comps') and resources ('near a property'), distinguishing it from siblings like 'analyze_property' or 'check_flood_zone' by focusing on market data rather than property analysis or risk assessment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage guidelines with a 'USE WHEN:' section listing specific user queries (e.g., 'what are comps in this area', 'recent sales near here') and use cases ('investor deal evaluation, CMA, and market analysis'), clearly indicating when to use this tool without mentioning alternatives, which is sufficient given the distinct sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.