GeoClear — US Address Intelligence

Server Details

US address intelligence: 120M+ addresses, verify, suggest, reverse geocode, coverage stats.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server
Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 8 of 8 tools scored. Lowest: 3.4/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but there is some overlap between 'full_risk_assessment' and other risk-focused tools like 'climate_risk_decision' and 'hmda_compliance_check', which could cause confusion about when to use each. However, the descriptions clarify that 'full_risk_assessment' is a comprehensive bundle, reducing ambiguity.

Naming Consistency: 4/5

Tool names follow a consistent snake_case pattern and generally use clear verb_noun structures, such as 'verify_address' and 'suggest_address'. Minor deviations include 'get_coverage' using 'get' instead of a more descriptive verb like 'list', but overall naming is predictable and readable.

Tool Count: 5/5

With 8 tools, the count is well-scoped for the server's purpose of US address intelligence. Each tool serves a specific function, from risk assessments to geocoding, without redundancy, making the set manageable and focused.

Completeness: 5/5

The tool set provides comprehensive coverage for US address intelligence, including risk assessments, compliance checks, geocoding, address verification, and coverage statistics. There are no obvious gaps; agents can perform all core operations in this domain without dead ends.

Available Tools

8 tools
climate_risk_decision: A
Read-only

Decision-grade climate risk for a US address. Returns flood zone, wildfire severity, storm exposure, earthquake and drought risk, composite climate score, and a recommended_action. Replaces $2–4/call third-party climate data (First Street, ZestyAI). x402 price: $0.02.
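For orientation, an MCP client invokes a tool like this one with a JSON-RPC `tools/call` request. The sketch below builds such an envelope in plain Python; since the published schema lists no parameters, the `arguments` object is empty (how the server actually receives the target address is not documented on this page).

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 envelope for an MCP tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# The published schema for climate_risk_decision lists no parameters,
# so the arguments object is empty here.
payload = build_tool_call("climate_risk_decision", {})
print(payload)
```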

Parameters: none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, and the description aligns with this by describing a risk assessment (not a write operation). The description adds valuable behavioral context beyond annotations: pricing information ('x402 price: $0.02'), cost comparison ('replaces $2–4/call third-party climate data'), and the specific outputs returned. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly efficient and front-loaded: the first sentence states the core purpose and outputs, the second adds pricing and replacement context. Every sentence provides essential information with zero wasted words, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no parameters, read-only annotations, and no output schema, the description provides excellent context about what it does, what it returns, and cost implications. The main gap is that without an output schema, the description doesn't detail the format/structure of the returned data (though it lists the components).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline would be 4. The description appropriately explains that the tool takes 'a US address' as implicit input, which is helpful semantic context since the schema shows no parameters. It doesn't need to document non-existent parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: providing 'decision-grade climate risk for a US address' with specific outputs listed (flood zone, wildfire severity, storm exposure, earthquake and drought risk, composite climate score, and recommended_action). It distinguishes from siblings by specifying the exact risk assessment scope rather than general compliance or geocoding functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('for a US address') and mentions it replaces specific third-party services (First Street, ZestyAI), which implies an alternative context. However, it doesn't explicitly state when NOT to use it or directly compare to sibling tools like 'full_risk_assessment' or 'hmda_compliance_check'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

drone_deliverability: A
Read-only

FAA airspace classification and drone delivery decision for a US address. Returns airspace class, LAANC eligibility, authorized altitude, building presence, and a recommended_action. Replaces human dispatch review. x402 price: $0.05.
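To make the "decision" output concrete, here is a simplified sketch of the kind of logic such a tool automates, using the common Part 107 rule of thumb that controlled airspace (Class B/C/D and surface Class E) requires ATC authorization, obtainable via LAANC where available, while Class G generally does not. The mapping and the recommended_action strings are invented for illustration, not the server's actual rules.

```python
# Hypothetical decision logic illustrating what drone_deliverability returns;
# the real server's rules and output strings are not documented on this page.
CONTROLLED = {"B", "C", "D", "E-surface"}

def drone_decision(airspace_class: str, laanc_available: bool) -> dict:
    """Map an airspace class to a rough delivery recommendation."""
    if airspace_class == "G":
        # Class G: generally no ATC authorization needed under Part 107.
        return {"laanc_required": False, "recommended_action": "proceed"}
    if airspace_class in CONTROLLED:
        if laanc_available:
            return {"laanc_required": True, "recommended_action": "request_laanc"}
        return {"laanc_required": True, "recommended_action": "manual_authorization"}
    return {"laanc_required": True, "recommended_action": "review"}

print(drone_decision("G", False))
print(drone_decision("D", True))
```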

Parameters: none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, and the description does not contradict this (it describes a decision/classification tool, not a write operation). The description adds valuable behavioral context beyond annotations, such as the tool's role in replacing human review and the pricing information ('x402 price: $0.05'), which are not covered by annotations. However, it does not detail rate limits or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by additional details (returned data, replacement of human review, pricing). Every sentence adds value without redundancy, and it is appropriately sized for a tool with no input parameters, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (decision-making based on FAA regulations) and the absence of an output schema, the description does a good job explaining what is returned (airspace class, LAANC eligibility, etc.) and the tool's purpose. However, it could provide more detail on output structure or error cases to be fully complete, though the annotations help by indicating it's read-only.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so the schema fully documents the lack of inputs. The description does not need to compensate for missing param info, and it appropriately does not discuss parameters. It adds context about the input ('for a US address'), which is useful but not strictly necessary given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('FAA airspace classification and drone delivery decision') and resources ('for a US address'), and distinguishes it from siblings by focusing on drone deliverability rather than climate risk, compliance checks, or address operations. It explicitly lists the returned information (airspace class, LAANC eligibility, etc.), making the purpose highly specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('for a US address' and 'Replaces human dispatch review'), but it does not explicitly state when not to use it or name alternatives among the sibling tools. The context is sufficient to guide usage, though it lacks exclusion criteria or direct sibling comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

full_risk_assessment: A
Read-only

All risk dimensions for a US address in one call: flood zone, wildfire, storm, earthquake, drought, drone airspace, HMDA/flood determination, address confidence, and composite scores. Replaces multiple third-party API calls. x402 price: $0.10.
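The "composite scores" mentioned here are presumably weighted aggregates of the individual risk dimensions. A generic sketch of that idea, with weights, scale, and dimension names invented for illustration (the server's actual formula is not published here):

```python
def composite_risk(scores: dict, weights: dict) -> float:
    """Weighted average of per-dimension risk scores (0-100 scale assumed)."""
    total_weight = sum(weights[k] for k in scores)
    return round(sum(scores[k] * weights[k] for k in scores) / total_weight, 1)

# Illustrative weights and scores only; not GeoClear's actual methodology.
weights = {"flood": 0.3, "wildfire": 0.25, "storm": 0.2, "earthquake": 0.15, "drought": 0.1}
scores = {"flood": 80, "wildfire": 20, "storm": 50, "earthquake": 10, "drought": 30}
print(composite_risk(scores, weights))  # 43.5
```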

Parameters: none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds value by disclosing pricing ('x402 price: $0.10') and that it replaces third-party API calls, which are useful behavioral traits beyond annotations. However, it lacks details on rate limits, error handling, or response format, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by key details (risk types, consolidation benefit, pricing). Every sentence adds value without redundancy, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple risk dimensions) and lack of output schema, the description does well by listing risk types and pricing. However, it could improve by briefly mentioning the return format or data structure, as annotations only cover read-only status, leaving some uncertainty about output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameters need documentation. The description appropriately does not discuss parameters, maintaining focus on the tool's purpose and usage, which is sufficient given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it provides 'All risk dimensions for a US address in one call' and lists specific risk types (flood zone, wildfire, etc.). It distinguishes from siblings by emphasizing it 'Replaces multiple third-party API calls,' suggesting it consolidates functionality that might otherwise require separate tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool: for comprehensive risk assessment of a US address, especially to avoid multiple API calls. However, it does not explicitly state when not to use it or name specific alternatives among siblings (e.g., climate_risk_decision or drone_deliverability), though the consolidation hint provides some guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_coverage: B
Read-only

Get GeoClear address coverage statistics for US states. Returns address count, Census housing unit count, and coverage tier for each state.
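A "coverage tier" most plausibly reflects how a state's known address count compares to its Census housing-unit count. A hypothetical tiering sketch, with thresholds and tier names invented for illustration:

```python
def coverage_tier(address_count: int, housing_units: int) -> str:
    """Classify coverage by the ratio of known addresses to Census housing
    units. Thresholds and tier names are illustrative, not GeoClear's."""
    if housing_units == 0:
        return "unknown"
    ratio = address_count / housing_units
    if ratio >= 0.95:
        return "full"
    if ratio >= 0.75:
        return "high"
    if ratio >= 0.50:
        return "partial"
    return "low"

print(coverage_tier(4_800_000, 5_000_000))  # ratio 0.96 -> "full"
```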

Parameters: none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, and the description doesn't contradict this. The description adds useful context about what data is returned (address count, Census housing unit count, coverage tier) which isn't in annotations. However, it doesn't disclose behavioral traits like rate limits, authentication needs, or data freshness that would be helpful beyond the basic read-only annotation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise: two sentences that efficiently state what the tool does and what it returns. Every word earns its place with no redundancy or unnecessary elaboration. It's front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters, read-only annotations, but no output schema, the description provides adequate coverage of what the tool returns. However, for a data retrieval tool, it could benefit from mentioning data format, time period covered, or whether results are cached/real-time to be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the lack of inputs. The description appropriately doesn't add parameter information, which is correct for a parameterless tool. It focuses on output semantics instead.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get GeoClear address coverage statistics for US states' with specific outputs (address count, Census housing unit count, coverage tier). It distinguishes from siblings by focusing on coverage statistics rather than risk assessment, geocoding, or verification. However, it doesn't explicitly differentiate from all siblings by name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when this coverage data is needed versus other tools like 'full_risk_assessment' or 'verify_address', nor does it specify prerequisites or exclusions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hmda_compliance_check: A
Read-only

HMDA/CRA compliance bundle for a US address. Returns census tract, county FIPS, flood determination, MSA indicator, and HMDA-ready enrichment fields in one call. Replaces $3–15 manual flood determinations. x402 price: $0.05.
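The census tract and county FIPS fields follow the standard Census GEOID layout: an 11-digit tract GEOID is a 2-digit state FIPS code, a 3-digit county FIPS code, and a 6-digit tract code. A small parser showing that structure (the exact field names GeoClear returns are not documented here):

```python
def parse_tract_geoid(geoid: str) -> dict:
    """Split an 11-digit Census tract GEOID into its FIPS components."""
    if len(geoid) != 11 or not geoid.isdigit():
        raise ValueError("expected an 11-digit Census tract GEOID")
    return {
        "state_fips": geoid[:2],
        "county_fips": geoid[:5],   # 5-digit state+county code, the usual form
        "tract_code": geoid[5:],
    }

# 06 = California, 06037 = Los Angeles County (standard FIPS codes)
print(parse_tract_geoid("06037980003"))
```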

Parameters: none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds valuable context beyond this: it discloses cost ('x402 price: $0.05'), efficiency gains ('replaces $3–15 manual flood determinations'), and that it returns multiple enrichment fields in one call. No contradictions with annotations; it enhances understanding of operational and economic aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and key returned fields, followed by efficiency and cost details. It's concise (three sentences) with minimal waste, though the pricing note ('x402 price: $0.05') is somewhat cryptic and could be clarified for better accessibility.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (compliance bundle with multiple outputs), no output schema, and annotations only covering read-only status, the description provides good purpose and behavioral context but lacks details on output format (e.g., structure of returned data) and error handling. It's adequate but has gaps for full agent usability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, focusing instead on the tool's purpose and output. This aligns with the baseline for zero parameters, though it could briefly note the lack of inputs for clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a 'HMDA/CRA compliance bundle for a US address' and lists specific returned data fields (census tract, county FIPS, flood determination, etc.). It specifies the resource (US address) and outcome (compliance bundle with enrichment fields), though it doesn't explicitly differentiate from sibling tools like 'climate_risk_decision' or 'full_risk_assessment' beyond the HMDA/CRA focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for HMDA/CRA compliance needs and mentions it 'replaces $3–15 manual flood determinations,' suggesting efficiency benefits. However, it lacks explicit guidance on when to use this tool versus alternatives (e.g., 'verify_address' or 'full_risk_assessment') or any exclusions (e.g., non-US addresses). The context is clear but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reverse_geocode: A
Read-only

Look up the nearest US address for a lat/lon coordinate. Returns address + enrichment fields.
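"Nearest" in a reverse geocode is typically great-circle distance. A minimal haversine sketch that picks the closest candidate address (the candidate data is made up; the server's actual index and distance metric are not documented here):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def nearest(lat, lon, candidates):
    """Return the candidate address closest to (lat, lon)."""
    return min(candidates, key=lambda c: haversine_km(lat, lon, c["lat"], c["lon"]))

# Invented candidate set for illustration.
candidates = [
    {"address": "1 Main St", "lat": 38.90, "lon": -77.03},
    {"address": "2 Oak Ave", "lat": 38.91, "lon": -77.04},
]
print(nearest(38.905, -77.036, candidates)["address"])  # "2 Oak Ave"
```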

Parameters: none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the agent knows this is a safe read operation. The description adds useful context about the return value ('address + enrichment fields'), which goes beyond annotations, but does not disclose other behavioral traits like rate limits, error conditions, or specific enrichment details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence with zero waste—it states the action, input, output, and scope efficiently. Every word earns its place, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, read-only, no output schema), the description is largely complete. It covers purpose, input, and output. However, it could be more complete by specifying what 'enrichment fields' include or any limitations (e.g., US-only), though annotations help with safety context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the schema fully documents the input structure (none). The description adds value by specifying the required input ('lat/lon coordinate') and clarifying the tool's purpose, compensating for the lack of explicit parameters in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Look up'), resource ('nearest US address'), and input ('lat/lon coordinate'), distinguishing it from siblings like 'suggest_address' or 'verify_address' which handle different address-related tasks. It provides a complete picture of what the tool does in a single sentence.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly suggests usage for converting geographic coordinates to addresses, but does not explicitly state when to use this tool versus alternatives like 'suggest_address' or 'verify_address'. It provides clear context (US addresses, lat/lon input) but lacks explicit exclusions or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

suggest_address: A
Read-only

Auto-complete a partial US address. Returns up to 10 matching addresses for user selection.
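Prefix matching capped at a fixed result count is the usual autocomplete pattern. A toy sketch of that behavior (the address list is invented; the server's actual matching may be fuzzier than a plain prefix check):

```python
def suggest(prefix: str, addresses: list[str], limit: int = 10) -> list[str]:
    """Return up to `limit` addresses starting with the given prefix,
    compared case-insensitively."""
    p = prefix.lower()
    return [a for a in addresses if a.lower().startswith(p)][:limit]

# Invented sample data.
addresses = [
    "100 Main St, Austin TX",
    "100 Maple Ave, Austin TX",
    "200 Main St, Dallas TX",
]
print(suggest("100 ma", addresses))
```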

Parameters: none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, and the description adds valuable behavioral context: it specifies the scope ('US address'), the return limit ('up to 10 matching addresses'), and the purpose ('for user selection'), which goes beyond the annotations to clarify usage and output behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first clause, followed by return details, using two concise sentences with no wasted words, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, read-only), the description is mostly complete, covering purpose, scope, and output. However, without an output schema, it could benefit from more detail on the return format (e.g., structure of addresses), though the mention of 'matching addresses' provides basic context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is 4. The description compensates by explaining the input expectation ('partial US address'), which adds semantic meaning beyond the empty schema, ensuring clarity on what the tool expects.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Auto-complete a partial US address') and resource ('address'), distinguishing it from siblings like verify_address or reverse_geocode by focusing on auto-completion rather than validation or geocoding.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('partial US address') and the return format ('up to 10 matching addresses for user selection'), but does not explicitly state when to use this tool versus alternatives like verify_address or reverse_geocode, which could help differentiate further.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_address: B
Read-only

Verify and enrich a US address. Returns geo-coordinates, flood zone, risk scores, census tract, and 15+ enrichment fields.

Parameters: none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, indicating a safe read operation. The description adds value by specifying the return data (geo-coordinates, flood zone, risk scores, etc.), which isn't covered by annotations. However, it doesn't disclose other behavioral traits like rate limits, authentication needs, or error handling. With annotations providing basic safety info, a 3 is appropriate for adding some context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded: the first sentence states the core purpose, and the second sentence details the return values. Every sentence adds essential information with zero waste, making it easy for an AI agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (address verification with multiple enrichment fields), no output schema, and annotations only covering read-only status, the description is moderately complete. It specifies the return data but doesn't explain format, error cases, or limitations (e.g., US-only scope). For a tool with rich output but no output schema, more detail on return structure would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, meaning no parameters are documented in the schema. The description doesn't mention any parameters, which is appropriate here since none exist. It earns a 4 because it doesn't need to compensate for missing param info, but loses a point for not explicitly stating 'no parameters required' to avoid ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Verify and enrich a US address.' It specifies the verb ('verify and enrich'), resource ('US address'), and scope (US only). However, it doesn't explicitly differentiate from sibling tools like 'suggest_address' or 'reverse_geocode,' which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'suggest_address' (likely for address suggestions), 'reverse_geocode' (likely for coordinates to address), and 'full_risk_assessment' (likely broader risk analysis), there's no indication of when this specific verification/enrichment tool is appropriate or what prerequisites might exist.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
