Glama

Server Details

US land due-diligence MCP server. Returns structured reports covering solar potential, groundwater depth, flood zones, building codes, and county regulations for any US property address. Mode-aware across off-grid, rural residential, recreational, and investment use cases. 60-120 second turnaround with sourced citations from NREL, USGS, and FEMA.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.4/5 across 5 of 5 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clear, distinct purpose: full analysis, comparison, quick scoring, solar estimation, and state-level data. No overlap or ambiguity among the five tools.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun snake_case pattern (e.g., analyze_land, get_state_land_profile), making it easy to predict tool functionality from the name.

Tool Count: 5/5

Five tools cover the core land analysis workflows without being excessive or insufficient. The number matches the server's focused scope perfectly.

Completeness: 4/5

The tool set covers the main use cases: individual analysis, comparison, quick screening, solar potential, and state context. Minor gaps exist (e.g., dedicated water access or climate tools), but these are partially addressed by the state profile tool.

Available Tools

5 tools
analyze_land: A

Generates a comprehensive land analysis report for a US property through one of four analytical lenses: off_grid, rural_residential, recreational, or investment. Call this when the user asks for a full analysis of a specific property. If the user's intent is unclear, ask which mode to use before calling. Returns a report ID and poll URL — the final structured report (scores, confidence ratings, narrative summary, source citations) is delivered asynchronously via polling or webhook.

Parameters (JSON Schema)

lat (optional): Latitude (skip geocoding if provided).
lng (optional): Longitude (skip geocoding if provided).
mode (required): Analysis lens: off_grid, rural_residential, recreational, or investment.
state (required): 2-letter US state code (e.g. "NM").
county (optional): County name (recommended for better regulation research).
acreage (optional): Total acreage of the parcel.
address (required): Full street address of the US property (e.g. "123 Cabin Rd, Taos, NM").

Output Schema (JSON Schema)

mode (required): The analysis lens that was applied (echoed from the request).
status (required): Report status. Initially "authorized" or "processing"; transitions to "completed" or "failed" once analysis finishes.
poll_url (required): Absolute URL to GET the report. Returns 202 while processing, 200 with full body once completed.
report_id (required): Unique ID for the report. Use this with the poll URL to retrieve the final structured report.
estimated_completion_seconds (required): Approximate seconds until the report is ready. Use as a hint for when to first poll.
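The polling contract described in the output schema (poll_url returns 202 while processing and 200 once completed) can be sketched as a small client-side helper. This is an illustrative sketch, not official client code: the `fetch` callable is injected so any HTTP library (or a test stub) can be used, and the retry timings are assumptions.

```python
import time

def poll_report(fetch, poll_url, first_delay=0, interval=10, max_wait=180):
    """Poll a report URL until it returns HTTP 200 or max_wait elapses.

    `fetch` is any callable mapping a URL to (status_code, body), e.g. a thin
    wrapper around requests.get. Pass estimated_completion_seconds from the
    analyze_land response as first_delay to avoid pointless early polls.
    """
    time.sleep(first_delay)
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        status, body = fetch(poll_url)
        if status == 200:      # report completed: body holds the full report
            return body
        if status != 202:      # anything other than "still processing" is an error
            raise RuntimeError(f"polling failed with HTTP {status}")
        time.sleep(interval)   # 202: still processing, wait and retry
    raise TimeoutError("report not ready within max_wait seconds")
```

In practice `fetch` would wrap an HTTP GET and return something like `(resp.status_code, resp.json() if resp.status_code == 200 else None)`.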
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already show readOnlyHint=false and openWorldHint=true. Description adds key async behavior: returns report ID and poll URL with final report delivered via polling or webhook, which is crucial for agent planning.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences are front-loaded with purpose, usage, and async behavior. No redundant information; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Describes asynchronous delivery and report components. Lacks guidance on optional parameters (county, acreage) but schema covers them. Adequate for a tool with output schema and annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, schema already explains parameters. Description reinforces mode importance but adds minimal new meaning beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb 'generates' and resource 'comprehensive land analysis report' with four specific modes, distinguishing it from sibling tools like compare_properties or get_land_quick_score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to call ('when user asks for a full analysis') and what to do if intent is unclear ('ask which mode'). Does not list alternatives but sibling tool names imply other choices.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_properties: A

Compare 2–5 US properties side by side using the same analysis mode. Call this when the user is evaluating multiple parcels or listings and wants a comparative view. Returns a comparison table with scores, highlights, and recommendations per property.

Parameters (JSON Schema)

mode (required): Analysis lens to apply to every property.
properties (required): Array of 2–5 properties to compare.

Output Schema (JSON Schema)

status (required): Batch status. Initially "processing"; transitions to "completed" once all per-property reports terminate.
batch_id (required): Unique batch ID grouping the report jobs created by this call.
report_ids (required): Per-property report IDs in the same order as the input properties array. Poll each individually or wait for the batch.completed webhook.
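Since the per-property report IDs come back in input order, a caller that is not using the batch.completed webhook can loop until every per-property job reaches a terminal status. A hedged sketch: the `fetch_status` helper is hypothetical (something the caller supplies), and the status strings ("processing", "completed", "failed") follow the per-report lifecycle described for analyze_land.

```python
import time

def wait_for_batch(fetch_status, report_ids, interval=5, max_rounds=60):
    """Poll each per-property report until none is still "processing".

    fetch_status(report_id) -> "processing" | "completed" | "failed"
    Returns final statuses in the same order as report_ids, which matches
    the order of the input properties array.
    """
    statuses = {rid: "processing" for rid in report_ids}
    for _ in range(max_rounds):
        for rid in report_ids:
            if statuses[rid] == "processing":
                statuses[rid] = fetch_status(rid)
        if all(s != "processing" for s in statuses.values()):
            return [statuses[rid] for rid in report_ids]
        time.sleep(interval)  # at least one report still processing
    raise TimeoutError("batch did not finish within the polling budget")
```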
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description indicates the tool returns a comparison table with scores, highlights, and recommendations. Annotations mark it as non-readonly but with no destructive hint. The description does not disclose whether state changes occur, but given the open world hint and output format, it is likely safe. A 4 is appropriate as it adds useful context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two concise, front-loaded sentences. Every sentence adds value: first defines the action and constraints, second provides usage guidance and output expectation. No extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity and the presence of an output schema, the description covers the key aspects: purpose, input constraints (2-5 properties, same mode), usage context, and output nature (comparison table with scores, highlights, recommendations). No major gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds no additional parameter-specific details (e.g., explaining mode options or property format), making it adequate but not exceptional. Baseline 3 is suitable.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Compare') and resource ('US properties'), clearly states the range (2–5) and the context ('same analysis mode'), and distinguishes it from siblings like 'analyze_land' which handles single properties.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to call this tool: 'when the user is evaluating multiple parcels or listings and wants a comparative view.' This provides clear usage context and implies alternatives (single-property tools) without needing further elaboration.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_land_quick_score: A

Get a fast suitability score (0-100) for a US property without generating a full report. Call this when the user wants a quick go/no-go assessment or an initial screening before committing to a full analysis. Returns a single score with confidence level and one-sentence rationale.

Parameters (JSON Schema)

mode (required): Analysis lens: off_grid, rural_residential, recreational, or investment.
state (required): 2-letter US state code (e.g. "NM").
address (required): Full street address of the US property.

Output Schema (JSON Schema)

score (required): Overall suitability score 0-100. Null while still processing.
status (required): "completed" when the score is ready; "processing" if the poll timed out and the caller should retry the report later.
summary (required): One-sentence rationale for the score. Null while still processing or if no summary was generated.
report_id (required): Unique ID for the underlying quick-mode report.
confidence (required): Aggregate confidence level derived from per-category confidences. Null while still processing.
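Because score, confidence, and summary are all null while the report is still processing, callers should guard before acting on the score. A minimal sketch; the 60-point go/no-go cutoff is an illustrative assumption, not part of the API.

```python
def quick_score_decision(resp, threshold=60):
    """Turn a get_land_quick_score response body into a screening decision.

    Returns None when the underlying quick-mode report is still processing
    (the caller should retry later via report_id), otherwise "go"/"no-go".
    The threshold is an arbitrary example cutoff.
    """
    if resp["status"] == "processing" or resp["score"] is None:
        return None
    return "go" if resp["score"] >= threshold else "no-go"
```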
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are present (readOnlyHint=false, openWorldHint=true) and description adds context about returning a single score with confidence and rationale. No contradictions. However, description could be more explicit about any side effects given openWorldHint=true.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise with two sentences, front-loading the purpose and usage guidance without unnecessary detail.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists, the description sufficiently explains the return structure (score, confidence, rationale) and, combined with good annotations, provides a complete picture for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with clear parameter descriptions. The tool description adds little beyond what the schema already provides, so a baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it provides a fast suitability score (0-100) for a US property without generating a full report, distinguishing it from sibling tools like analyze_land.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells the agent to use this when a quick go/no-go or initial screening is needed before committing to a full analysis, providing clear context for when to invoke it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_solar_potential: A (Read-only)

Estimate solar energy production potential for a US address using NREL PVWatts data. Call this when the user asks about solar power viability, off-grid energy, or panel sizing. Returns estimated annual production, system sizing recommendation, and cost bracket.

Parameters (JSON Schema)

lat (optional): Latitude of the location.
lng (optional): Longitude of the location.
address (optional): US street address (used for geocoding fallback).
system_size_kw (optional): System size in kilowatts (default 5).

Output Schema (JSON Schema)

latitude (required): Latitude used for the calculation. Will be the resolved geocoded value if address was provided instead of lat/lng.
longitude (required): Longitude used for the calculation. Will be the resolved geocoded value if address was provided instead of lat/lng.
annual_kwh (required): Estimated annual AC electricity production in kilowatt-hours, from NREL PVWatts.
cost_bracket (required): Typical installed-cost range (USD) for a system of this size. Indicative only; varies by region and installer.
system_size_kw (required): System nameplate capacity in kilowatts (echoed from the request, default 5).
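As a sanity check on annual_kwh, a common back-of-the-envelope estimate (a rule of thumb, not the PVWatts model this tool actually uses) multiplies nameplate capacity by daily peak sun hours, days per year, and a performance ratio covering system losses:

```python
def rough_annual_kwh(system_size_kw, peak_sun_hours=5.0, performance_ratio=0.8):
    """Rule-of-thumb annual AC output in kWh: kW x sun-hours/day x 365 x losses.

    peak_sun_hours and performance_ratio are illustrative defaults; real
    values vary by location and installation.
    """
    return system_size_kw * peak_sun_hours * 365 * performance_ratio
```

For the 5 kW default at 5 peak sun hours this gives 7,300 kWh/year, which should land in the same ballpark as the tool's PVWatts-derived annual_kwh for a sunny site.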
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and openWorldHint=true. The description adds useful context: it uses NREL PVWatts data, returns estimated annual production, system sizing recommendation, and cost bracket. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences: purpose and data source, usage cue, and return values. No wasted words, information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists and all parameters are documented in the input schema, the description covers the tool's purpose, usage context, and key outputs comprehensively. No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all four parameters sufficiently. The description adds minimal additional meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool estimates solar energy production potential for a US address using NREL PVWatts data, distinguishing it from sibling tools like analyze_land or compare_properties which focus on general land analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to call this tool when the user asks about solar power viability, off-grid energy, or panel sizing. However, it does not provide when-not-to-use guidance or mention sibling alternatives directly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_state_land_profile: A (Read-only)

Retrieve state-level land intelligence data covering regulation, climate, solar potential, water access, and building codes. Call this when the user wants general context about a US state before drilling into a specific property. Returns structured multi-mode profiles.

Parameters (JSON Schema)

mode (optional): Optional mode filter. If omitted, returns all 4 modes.
state_code (required): 2-letter US state code (e.g. "NM").

Output Schema (JSON Schema)

modes (required): Per-mode profile data, keyed by mode name (off_grid, rural_residential, recreational, investment). Contains the requested mode if a filter was provided, otherwise all four.
state_code (required): 2-letter US state code echoed from the request.
shared_facts (optional): Cross-mode state-level facts (statute citations, agency names, etc.) that apply across all modes.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint and openWorldHint. The description adds value beyond these by detailing the data domains covered (regulation, climate, etc.) and noting the return of 'structured multi-mode profiles.' No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences that are front-loaded and free of unnecessary words. Every sentence contributes a distinct piece of information: what it does, when to use it, and what it returns.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema, the description sufficiently covers purpose, usage context, and data domains. It doesn't need to detail return structure as that is handled by the output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema coverage is 100% with both parameters described. The description adds that it returns 'structured multi-mode profiles,' which vaguely relates to the mode parameter, but does not provide significant new meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the verb 'Retrieve' with the resource 'state-level land intelligence data' and lists specific domains (regulation, climate, solar potential, water access, building codes). It clearly distinguishes from sibling tools by stating it provides 'general context about a US state before drilling into a specific property.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to call the tool: 'when the user wants general context about a US state before drilling into a specific property.' This implies exclusion of property-specific queries, but does not name alternative sibling tools directly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
