AcreLens
Server Details
US land due-diligence MCP server. Returns structured reports covering solar potential, groundwater depth, flood zones, building codes, and county regulations for any US property address. Mode-aware across off-grid, rural residential, recreational, and investment use cases. 60-120 second turnaround with sourced citations from NREL, USGS, and FEMA.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 5 of 5 tools scored.
Each tool has a clear, distinct purpose: full analysis, comparison, quick scoring, solar estimation, and state-level data. No overlap or ambiguity among the five tools.
All tool names follow a consistent verb_noun snake_case pattern (e.g., analyze_land, get_state_land_profile), making it easy to predict tool functionality from the name.
Five tools cover the core land analysis workflows without being excessive or insufficient. The number matches the server's focused scope perfectly.
The tool set covers the main use cases: individual analysis, comparison, quick screening, solar potential, and state context. Minor gaps exist (e.g., dedicated water access or climate tools), but these are partially addressed by the state profile tool.
Available Tools
5 tools
analyze_land
Generates a comprehensive land analysis report for a US property through one of four analytical lenses: off_grid, rural_residential, recreational, or investment. Call this when the user asks for a full analysis of a specific property. If the user's intent is unclear, ask which mode to use before calling. Returns a report ID and poll URL — the final structured report (scores, confidence ratings, narrative summary, source citations) is delivered asynchronously via polling or webhook.
| Name | Required | Description | Default |
|---|---|---|---|
| lat | No | Latitude (skip geocoding if provided). | |
| lng | No | Longitude (skip geocoding if provided). | |
| mode | Yes | Analysis lens: off_grid, rural_residential, recreational, or investment. | |
| state | Yes | 2-letter US state code (e.g. "NM"). | |
| county | No | County name (recommended for better regulation research). | |
| acreage | No | Total acreage of the parcel. | |
| address | Yes | Full street address of the US property (e.g. "123 Cabin Rd, Taos, NM"). |
Output Schema
| Name | Required | Description |
|---|---|---|
| mode | Yes | The analysis lens that was applied (echoed from the request). |
| status | Yes | Report status. Initially "authorized" or "processing"; transitions to "completed" or "failed" once analysis finishes. |
| poll_url | Yes | Absolute URL to GET the report. Returns 202 while processing, 200 with full body once completed. |
| report_id | Yes | Unique ID for the report. Use this with the poll URL to retrieve the final structured report. |
| estimated_completion_seconds | Yes | Approximate seconds until the report is ready. Use as a hint for when to first poll. |
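Because the final report is delivered asynchronously, a client needs a polling loop around poll_url. Below is a minimal sketch in Python that assumes only the 202/200 contract described in the schema above; the wait_for_report helper, the requests dependency, and the backoff choices are illustrative, not part of the server's API.

```python
import time

import requests


def wait_for_report(tool_result: dict, timeout: float = 300.0) -> dict:
    """Poll an analyze_land report until it completes or times out.

    tool_result is the JSON body returned by the analyze_land call,
    containing report_id, poll_url, status, and
    estimated_completion_seconds.
    """
    # First poll after the server's own estimate, then retry every 10s.
    time.sleep(tool_result.get("estimated_completion_seconds", 60))
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = requests.get(tool_result["poll_url"], timeout=30)
        if resp.status_code == 200:   # completed: full structured report
            return resp.json()
        if resp.status_code != 202:   # 202 means "still processing"
            resp.raise_for_status()   # anything else is an error
        time.sleep(10)
    raise TimeoutError(
        f"report {tool_result['report_id']} not ready within {timeout}s"
    )
```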
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already show readOnlyHint=false and openWorldHint=true. Description adds key async behavior: returns report ID and poll URL with final report delivered via polling or webhook, which is crucial for agent planning.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences are front-loaded with purpose, usage, and async behavior. No redundant information; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Describes asynchronous delivery and report components. Lacks guidance on the optional parameters (county, acreage), but the schema covers them. Adequate for a tool with an output schema and annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already explains the parameters. The description reinforces the importance of mode but adds little new meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'generates' and resource 'comprehensive land analysis report' with four specific modes, distinguishing it from sibling tools like compare_properties or get_land_quick_score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to call ('when user asks for a full analysis') and what to do if intent is unclear ('ask which mode'). Does not list alternatives but sibling tool names imply other choices.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_properties
Compare 2–5 US properties side by side using the same analysis mode. Call this when the user is evaluating multiple parcels or listings and wants a comparative view. Returns a comparison table with scores, highlights, and recommendations per property.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | Yes | Analysis lens to apply to every property. | |
| properties | Yes | Array of 2–5 properties to compare. |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | Yes | Batch status. Initially "processing"; transitions to "completed" once all per-property reports terminate. |
| batch_id | Yes | Unique batch ID grouping the report jobs created by this call. |
| report_ids | Yes | Per-property report IDs in the same order as the input properties array. Poll each individually or wait for the batch.completed webhook. |
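Because report_ids mirror the order of the input properties array, pairing results back to parcels is a simple zip. A small illustration follows; the shape of each property dict is an assumption, since the schema only describes the array as 2–5 properties.

```python
def pair_reports(properties: list[dict], batch_result: dict) -> list[tuple[dict, str]]:
    """Pair each input property with its report job.

    Per the output schema, report_ids come back in the same order as
    the input properties array, so zip preserves the mapping. Each
    report can then be polled individually, or the caller can wait
    for the batch.completed webhook.
    """
    report_ids = batch_result["report_ids"]
    assert len(properties) == len(report_ids), "one report per property"
    return list(zip(properties, report_ids))
```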
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description indicates the tool returns a comparison table with scores, highlights, and recommendations. Annotations mark it as non-readonly with no destructive hint. The description does not disclose whether state changes occur, but given the open-world hint and output format, it is likely safe. A score of 4 is appropriate, as the description adds useful context beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two concise, front-loaded sentences. Every sentence adds value: first defines the action and constraints, second provides usage guidance and output expectation. No extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity and the presence of an output schema, the description covers the key aspects: purpose, input constraints (2-5 properties, same mode), usage context, and output nature (comparison table with scores, highlights, recommendations). No major gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well documented in the schema. The description adds no additional parameter-specific details (e.g., explaining mode options or the property format), making it adequate but not exceptional. A baseline score of 3 is suitable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Compare') and resource ('US properties'), clearly states the range (2–5) and the context ('same analysis mode'), and distinguishes it from siblings like 'analyze_land' which handles single properties.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to call this tool: 'when the user is evaluating multiple parcels or listings and wants a comparative view.' This provides clear usage context and implies alternatives (single-property tools) without needing further elaboration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_land_quick_score
Get a fast suitability score (0-100) for a US property without generating a full report. Call this when the user wants a quick go/no-go assessment or an initial screening before committing to a full analysis. Returns a single score with confidence level and one-sentence rationale.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | Yes | Analysis lens: off_grid, rural_residential, recreational, or investment. | |
| state | Yes | 2-letter US state code (e.g. "NM"). | |
| address | Yes | Full street address of the US property. |
Output Schema
| Name | Required | Description |
|---|---|---|
| score | Yes | Overall suitability score 0-100. Null while still processing. |
| status | Yes | "completed" when the score is ready; "processing" if the poll timed out and the caller should retry the report later. |
| summary | Yes | One-sentence rationale for the score. Null while still processing or if no summary was generated. |
| report_id | Yes | Unique ID for the underlying quick-mode report. |
| confidence | Yes | Aggregate confidence level derived from per-category confidences. Null while still processing. |
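The status field doubles as a retry signal: "processing" means the server's internal poll timed out and the caller should try again. A hedged sketch of that loop follows; call_quick_score stands in for however your MCP client invokes the tool, and retry-by-reinvocation is an assumption rather than a documented contract.

```python
import time


def screen_parcel(call_quick_score, address: str, state: str, mode: str,
                  retries: int = 3) -> dict | None:
    """Run get_land_quick_score, retrying while the score is processing.

    call_quick_score is a stand-in for the MCP client's tool invocation;
    retrying by re-invoking the tool is an assumption, not documented API.
    """
    result = None
    for _ in range(retries):
        result = call_quick_score(address=address, state=state, mode=mode)
        if result["status"] == "completed":
            return result     # score, confidence, and summary are populated
        time.sleep(15)        # "processing": try again shortly
    return result             # still processing; report_id remains pollable
```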
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are present (readOnlyHint=false, openWorldHint=true) and the description adds context about returning a single score with confidence and rationale. No contradictions. However, the description could be more explicit about side effects given openWorldHint=true.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise with two sentences, front-loading the purpose and usage guidance without unnecessary detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description sufficiently explains the return structure (score, confidence, rationale) and, combined with good annotations, provides a complete picture for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage with clear parameter descriptions. The tool description adds little beyond what the schema already provides, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it provides a fast suitability score (0-100) for a US property without generating a full report, distinguishing it from sibling tools like analyze_land.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells the agent to use this when a quick go/no-go or initial screening is needed before committing to a full analysis, providing clear context for when to invoke it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_solar_potential (read-only)
Estimate solar energy production potential for a US address using NREL PVWatts data. Call this when the user asks about solar power viability, off-grid energy, or panel sizing. Returns estimated annual production, system sizing recommendation, and cost bracket.
| Name | Required | Description | Default |
|---|---|---|---|
| lat | No | Latitude of the location. | |
| lng | No | Longitude of the location. | |
| address | No | US street address (used for geocoding fallback). | |
| system_size_kw | No | System size in kilowatts. | 5 |
Output Schema
| Name | Required | Description |
|---|---|---|
| latitude | Yes | Latitude used for the calculation. Will be the resolved geocoded value if address was provided instead of lat/lng. |
| longitude | Yes | Longitude used for the calculation. Will be the resolved geocoded value if address was provided instead of lat/lng. |
| annual_kwh | Yes | Estimated annual AC electricity production in kilowatt-hours, from NREL PVWatts. |
| cost_bracket | Yes | Typical installed-cost range (USD) for a system of this size. Indicative only; varies by region and installer. |
| system_size_kw | Yes | System nameplate capacity in kilowatts (echoed from the request, default 5). |
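The output fields compose directly into a one-line summary for the user. A small formatting sketch is below; it assumes cost_bracket is a human-readable string, and the per-day figure is derived client-side rather than returned by the server.

```python
def summarize_solar(result: dict) -> str:
    """Format a get_solar_potential result as a one-line summary.

    Field names follow the output schema above; cost_bracket is assumed
    to be a display-ready USD range string.
    """
    daily_kwh = result["annual_kwh"] / 365  # derived, not a server field
    return (
        f"{result['system_size_kw']} kW system at "
        f"({result['latitude']:.4f}, {result['longitude']:.4f}): "
        f"~{result['annual_kwh']:,.0f} kWh/yr (~{daily_kwh:.1f} kWh/day), "
        f"typical installed cost {result['cost_bracket']}"
    )
```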
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and openWorldHint=true. The description adds useful context: it uses NREL PVWatts data, returns estimated annual production, system sizing recommendation, and cost bracket. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences: purpose and data source, usage cue, and return values. No wasted words, information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists and all parameters are documented in the input schema, the description covers the tool's purpose, usage context, and key outputs comprehensively. No gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all four parameters sufficiently. The description adds minimal additional meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool estimates solar energy production potential for a US address using NREL PVWatts data, distinguishing it from sibling tools like analyze_land or compare_properties which focus on general land analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says to call this tool when the user asks about solar power viability, off-grid energy, or panel sizing. However, it does not provide when-not-to-use guidance or mention sibling alternatives directly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_state_land_profile (read-only)
Retrieve state-level land intelligence data covering regulation, climate, solar potential, water access, and building codes. Call this when the user wants general context about a US state before drilling into a specific property. Returns structured multi-mode profiles.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Optional mode filter. If omitted, returns all 4 modes. | |
| state_code | Yes | 2-letter US state code (e.g. "NM"). |
Output Schema
| Name | Required | Description |
|---|---|---|
| modes | Yes | Per-mode profile data, keyed by mode name (off_grid, rural_residential, recreational, investment). Contains the requested mode if a filter was provided, otherwise all four. |
| state_code | Yes | 2-letter US state code echoed from the request. |
| shared_facts | No | Cross-mode state-level facts (statute citations, agency names, etc.) that apply across all modes. |
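Since modes is keyed by mode name and shared_facts is optional, a client should read the result defensively. A brief sketch, with the function name and return shape as illustrative choices:

```python
def read_state_profile(result: dict, mode: str = "off_grid") -> dict:
    """Extract one mode's profile from a get_state_land_profile result.

    modes contains only the requested mode when a filter was passed,
    otherwise all four (off_grid, rural_residential, recreational,
    investment). shared_facts is optional cross-mode data.
    """
    profile = result["modes"].get(mode)
    if profile is None:
        raise KeyError(f"mode {mode!r} not in profile for {result['state_code']}")
    return {
        "state": result["state_code"],
        "profile": profile,
        "shared_facts": result.get("shared_facts", {}),  # may be absent
    }
```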
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint and openWorldHint. The description adds value beyond these by detailing the data domains covered (regulation, climate, etc.) and noting the return of 'structured multi-mode profiles.' No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are front-loaded and free of unnecessary words. Every sentence contributes a distinct piece of information: what it does, when to use it, and what it returns.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema, the description sufficiently covers purpose, usage context, and data domains. It doesn't need to detail return structure as that is handled by the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100% with both parameters described. The description adds that it returns 'structured multi-mode profiles,' which vaguely relates to the mode parameter, but does not provide significant new meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the verb 'Retrieve' with the resource 'state-level land intelligence data' and lists specific domains (regulation, climate, solar potential, water access, building codes). It clearly distinguishes from sibling tools by stating it provides 'general context about a US state before drilling into a specific property.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to call the tool: 'when the user wants general context about a US state before drilling into a specific property.' This implies exclusion of property-specific queries, but does not name alternative sibling tools directly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.