Server Details
UK land and woodland management — hedgerow regs, felling licences, SSSI, rights of way
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Ansvar-Systems/uk-land-woodland-mcp
- GitHub Stars: 0
Documentation Quality
Average 3.9/5 across 11 of 11 tools scored.
Each tool targets a distinct UK regulatory domain (hedgerows, SSSI, common land, TPOs, etc.) with no overlapping functionality. The search tool is clearly differentiated as a cross-cutting query mechanism versus the specific rule retrieval tools.
Mixed verb patterns: 'check_' is used for hedgerows, SSSI, and data freshness while 'get_' is used for common land, felling, planting, rights of way, and TPOs—despite all retrieving regulatory guidance. 'about' and 'list_sources' follow different patterns entirely.
Eleven tools is a well-scoped count for UK land and woodland regulation coverage. Each tool represents a distinct legal framework (Forestry Act, Hedgerow Regulations, Commons Act, etc.) with no redundancy, plus operational utilities (about, sources, freshness, search).
Covers major UK land/woodland regulatory regimes (felling licences, TPOs, SSSI, hedgerows, common land, planting grants) comprehensively. Minor gaps exist for wildlife licensing (bats, protected species) and woodland carbon codes, though the search tool may partially cover these.
A coherent, well-scoped reference server for UK land regulations with clearly distinct tools per legal domain. The primary weakness is inconsistent verb choice between 'check' and 'get' for similar rule-retrieval operations.
Available Tools
11 tools

about
Get server metadata: name, version, coverage, data sources, and links.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
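Because `about` takes no parameters, an MCP `tools/call` request for it carries an empty `arguments` object. A minimal sketch of the JSON-RPC 2.0 payload a client would send over the Streamable HTTP transport (the `id` value is illustrative):

```python
import json

# Illustrative JSON-RPC 2.0 request for the zero-parameter "about" tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "about", "arguments": {}},
}

# Serialise for transport; Streamable HTTP carries JSON-RPC messages as JSON.
wire = json.dumps(request)
print(wire)
```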
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It successfully enumerates the returned fields (name, version, coverage, data sources, links) which is critical given the lack of an output schema. However, it omits other behavioral traits like caching, rate limits, or whether this requires authentication.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the action, and enumerates the specific return values without waste. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description appropriately compensates by listing the specific metadata fields returned. For a zero-parameter metadata tool, this is sufficiently complete, though it could optionally mention the response format (JSON vs text).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters. Per the scoring guidelines, 0 parameters establishes a baseline score of 4. The description appropriately does not invent parameter semantics where none exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and clearly identifies the resource ('server metadata'). It effectively distinguishes from siblings—all other tools retrieve specific land/conservation domain data (hedgerows, TPOs, etc.), while this retrieves infrastructure metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through the content (server info vs. domain rules), but provides no explicit when-to-use guidance or prerequisites. It doesn't indicate, for example, whether to call this first to discover available data sources before using the domain-specific tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_data_freshness
Check when data was last ingested, staleness status, and how to trigger a refresh.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Description Quality Score
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the three categories of information returned (last ingestion timestamp, staleness status, and refresh instructions) but omits details about performance characteristics, caching behavior, or what constitutes 'staleness' thresholds.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single, efficient sentence that front-loads the action and covers three distinct informational aspects without redundancy. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter metadata tool without an output schema, the description adequately explains what the agent will learn (freshness status and refresh options). It could be improved by specifying which dataset's freshness is being checked, but remains sufficient for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. According to the scoring rubric, zero parameters establishes a baseline score of 4, as there are no parameter semantics to describe beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Check') and identifies the resource ('data') along with three specific aspects examined (ingestion time, staleness, refresh method). While 'data' is generic, it effectively distinguishes from siblings which focus on specific land rules (hedgerow, TPO, etc.) rather than metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies a workflow by mentioning 'how to trigger a refresh,' suggesting use when verifying data currency before operations. However, it lacks explicit when-to-use guidance or differentiation from list_sources, which might also provide metadata.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_hedgerow_rules
Check hedgerow regulations by action type. Returns notice requirements, exemptions, important hedgerow criteria, and penalties under the Hedgerow Regulations 1997.
| Name | Required | Description | Default |
|---|---|---|---|
| action | Yes | Action type (e.g. remove, trim, lay, coppice, replace) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
| hedgerow_type | No | Hedgerow classification (e.g. important, standard) | |
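The three parameters above map directly onto the `arguments` object of an MCP `tools/call` request. A hedged sketch with illustrative argument values (only `action` is required):

```python
import json

# Example call to check_hedgerow_rules; argument values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "check_hedgerow_rules",
        "arguments": {
            "action": "remove",            # required: the proposed action
            "jurisdiction": "GB",          # optional; GB is the default anyway
            "hedgerow_type": "important",  # optional classification
        },
    },
}
print(json.dumps(request, indent=2))
```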
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden and effectively discloses return values (notice requirements, exemptions, criteria, penalties) despite lacking output schema. Cites specific legal framework establishing authority.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences: first defines operation and primary input dimension, second specifies return data. Zero redundancy, perfectly front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for domain-specific tool: compensates for missing output schema by detailing return contents, leverages full input schema coverage, and establishes legal context (1997 Regulations).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (action, jurisdiction, hedgerow_type all documented). Description adds 'by action type' highlighting the required parameter, but provides minimal semantic enhancement beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Check' + resource 'hedgerow regulations' + scope 'Hedgerow Regulations 1997' clearly distinguishes from siblings like get_felling_licence_rules (trees) and search_land_rules (general).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through specific domain citation ('Hedgerow Regulations 1997') and 'by action type', but lacks explicit guidance on when to prefer this over search_land_rules or other specific rule tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_sssi_consent
Check whether an activity on a Site of Special Scientific Interest requires Natural England consent. Returns process, typical conditions, and penalties.
| Name | Required | Description | Default |
|---|---|---|---|
| activity | Yes | Proposed activity (e.g. grazing, drainage, fertiliser, planting, burning, construction) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
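Because `jurisdiction` defaults to GB, a minimal call needs only `activity`. A sketch of the two equivalent argument shapes (activity value is illustrative):

```python
# Minimal vs explicit arguments for check_sssi_consent. "activity" is the
# only required parameter; "jurisdiction" defaults to GB server-side.
minimal = {"activity": "drainage"}
explicit = {"activity": "drainage", "jurisdiction": "GB"}

# Both shapes are valid; the minimal form relies on the documented default.
for args in (minimal, explicit):
    assert "activity" in args  # the one required key
print(minimal, explicit)
```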
Tool Description Quality Score
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It compensates partially by disclosing return content ('process, typical conditions, and penalties'), which is valuable given the missing output schema. However, it fails to clarify operational traits like whether this triggers workflows, requires authentication, or is strictly read-only despite mentioning 'penalties'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. First sentence front-loads the core purpose; second sentence compensates for missing output schema by describing return values. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a 2-parameter tool. Description effectively substitutes for missing output schema by detailing return content (process, conditions, penalties). Minor gap: could explicitly state this is England-specific (Natural England) despite the GB jurisdiction default, or clarify that this is informational lookup rather than consent submission.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with clear examples for 'activity' and format guidance for 'jurisdiction'. Description adds no additional parameter semantics, but baseline 3 is appropriate since the schema already fully documents both parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Check' with clear resource 'Site of Special Scientific Interest' and authority 'Natural England'. Clearly distinguishes from siblings like check_hedgerow_rules and get_felling_licence_rules by focusing specifically on SSSI consent requirements.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context by specifying the SSSI domain and Natural England jurisdiction, but lacks explicit guidance on when to use alternatives (e.g., check_hedgerow_rules for non-SSSI boundaries) or prerequisites. Does not clarify whether this is for pre-application checks versus actual consent applications.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_common_land_rules
Get rules for activities on common land. Returns consent requirements and responsible authority under the Commons Act 2006.
| Name | Required | Description | Default |
|---|---|---|---|
| activity | No | Proposed activity (e.g. fencing, building, vehicles) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
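Since both parameters are optional, an empty `arguments` object is itself a valid broad query; adding keys narrows the result. An illustrative sketch of both shapes:

```python
# Both parameters of get_common_land_rules are optional, so an empty
# arguments object is a valid broad query; keys narrow the result.
broad = {"name": "get_common_land_rules", "arguments": {}}
narrow = {
    "name": "get_common_land_rules",
    "arguments": {"activity": "fencing"},  # e.g. fencing, building, vehicles
}
print(broad["arguments"], narrow["arguments"])
```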
Tool Description Quality Score
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It compensates by disclosing return values ('consent requirements and responsible authority') and legal framework (Commons Act 2006), but omits operational details like caching, rate limits, or auth requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. Front-loaded with action verb, second sentence justifies value proposition (what gets returned). Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter tool with no output schema, the description appropriately compensates by describing return values. However, it could note that both parameters are optional (though schema shows this) or provide example use cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description adds no parameter-specific guidance beyond the schema (e.g., doesn't clarify valid activity values or jurisdiction defaults), but doesn't need to given complete schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' with clear resource 'rules for activities on common land' and explicitly cites the 'Commons Act 2006', distinguishing it from sibling tools like search_land_rules (general) and check_hedgerow_rules (different domain).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through 'Returns consent requirements' (use when consent info needed), but lacks explicit comparison to siblings like search_land_rules or check_sssi_consent, and provides no 'when-not-to-use' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_felling_licence_rules
Get tree felling licence requirements by volume, area, or reason. Returns whether a licence is needed, exemptions, application process, and penalties under the Forestry Act 1967.
| Name | Required | Description | Default |
|---|---|---|---|
| reason | No | Reason for felling (e.g. dangerous, planning, garden, fruit) | |
| area_ha | No | Area of woodland in hectares | |
| volume_m3 | No | Volume of timber to fell in cubic metres | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
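With four optional parameters, a client-side pattern is to build the `arguments` object from whatever the caller has and omit the rest, letting the server apply its documented defaults. A hypothetical helper sketching that pattern (the function name is illustrative, not part of the server):

```python
def felling_arguments(reason=None, area_ha=None, volume_m3=None, jurisdiction=None):
    """Build the arguments dict, omitting parameters left unset.

    All four parameters are optional; sending only the keys you have
    lets the server apply its own defaults (jurisdiction: GB).
    """
    raw = {
        "reason": reason,
        "area_ha": area_ha,
        "volume_m3": volume_m3,
        "jurisdiction": jurisdiction,
    }
    return {k: v for k, v in raw.items() if v is not None}

# Only the supplied keys survive into the payload.
args = felling_arguments(reason="dangerous", volume_m3=4.0)
print(args)
```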
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the return values (licence requirement, exemptions, application process, penalties) and cites the legal framework (Forestry Act 1967). However, it omits details about data freshness, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single, efficient sentence that is front-loaded with the action and resource. Every clause earns its place: the first half defines the input criteria, the second half defines the return payload. No redundancy or filler text is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately compensates by detailing the return structure (licence needed, exemptions, process, penalties). It covers the legal context (Forestry Act 1967) but could improve by mentioning the jurisdiction parameter's default value and whether all parameters are optional.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions three parameters conceptually ('by volume, area, or reason') but omits the fourth parameter (jurisdiction) and its default value (GB). It adds no syntax details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb-resource combination ('Get tree felling licence requirements') and clearly distinguishes from siblings like check_hedgerow_rules, get_tpo_rules, and check_sssi_consent by specifying 'felling' and 'Forestry Act 1967'. It also clarifies the scope (by volume, area, or reason).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the domain is clearly implied by the specific focus on forestry felling licences, the description lacks explicit guidance on when to use this tool versus similar tree-related siblings like get_tpo_rules (Tree Preservation Orders) or check_hedgerow_rules. No prerequisites or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_planting_guidance
Get woodland planting guidance including grants (EWCO), EIA screening thresholds, ancient woodland buffers, and species recommendations.
| Name | Required | Description | Default |
|---|---|---|---|
| area_ha | No | Planned planting area in hectares (triggers EIA assessment if >5ha) | |
| purpose | No | Planting purpose (e.g. woodland creation, agroforestry, riparian, community) | |
| tree_type | No | Species group (e.g. broadleaf, conifer, mixed) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
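The `area_ha` description encodes a concrete threshold: EIA assessment is triggered above 5 ha. A client-side hint can mirror that rule before calling the tool; this is a sketch based only on the schema note, and the tool's response remains the authoritative answer:

```python
EIA_THRESHOLD_HA = 5.0  # per the area_ha schema note: EIA assessment if >5 ha

def may_need_eia_screening(area_ha: float) -> bool:
    """Client-side hint only; get_planting_guidance returns the authoritative answer."""
    return area_ha > EIA_THRESHOLD_HA

print(may_need_eia_screening(4.5))  # False: at or below the 5 ha threshold
print(may_need_eia_screening(7.2))  # True: above the threshold
```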
Tool Description Quality Score
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses specific output domains (EWCO, EIA thresholds, buffers, species) beyond just 'guidance', but lacks safety disclosures (read-only vs mutation), authentication requirements, or rate limiting information that would help an agent understand operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently front-loaded with the core action. Every clause adds specific value (grants type, regulatory thresholds, ecological constraints). No redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 optional parameters and no output schema, the description adequately covers what information is returned (financial, regulatory, ecological). Minor gap: does not describe output structure/format (JSON schema, text, etc.) or behavior when called with no parameters, which would help agent understand default behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing baseline 3. The description adds domain context linking parameters to outputs (e.g., connecting area_ha to EIA screening thresholds mentioned in description), but does not elaborate on parameter formats, validation rules, or interdependencies beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Get') and resource ('woodland planting guidance'), and enumerates specific content domains (EWCO grants, EIA screening thresholds, ancient woodland buffers, species recommendations). Clearly distinguishes from sibling 'check_' and 'get_*_rules' tools by focusing on planting guidance rather than compliance verification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through content listing (grants, EIA thresholds) but provides no explicit when-to-use guidance or alternatives. Does not clarify relationship to sibling tools like check_sssi_consent or get_felling_licence_rules which might be needed alongside this guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_rights_of_way_rules
Get public rights of way obligations by path type and issue. Returns minimum widths, cropping rules, reinstatement deadlines, and obstruction liability.
| Name | Required | Description | Default |
|---|---|---|---|
| issue | No | Issue type (e.g. width, crops, ploughing, obstruction, gates, stiles) | |
| path_type | No | Path type (footpath, bridleway, restricted_byway, byway) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
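The `path_type` parameter enumerates its valid values, so a client can validate before calling and avoid a round trip on a typo. A sketch using only the values listed in the table (the helper function is hypothetical):

```python
# Valid path_type values taken from the parameter description above.
PATH_TYPES = {"footpath", "bridleway", "restricted_byway", "byway"}

def rights_of_way_arguments(issue=None, path_type=None):
    """Build arguments for get_rights_of_way_rules, rejecting unknown path types."""
    if path_type is not None and path_type not in PATH_TYPES:
        raise ValueError(f"unknown path_type: {path_type!r}")
    return {k: v for k, v in {"issue": issue, "path_type": path_type}.items()
            if v is not None}

print(rights_of_way_arguments(issue="ploughing", path_type="footpath"))
```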
Tool Description Quality Score
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Compensates by disclosing return content (minimum widths, cropping rules, reinstatement deadlines, obstruction liability) which is valuable without output schema. However, omits operational details like data freshness, caching, or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences. First establishes purpose and filters; second describes return values. No redundant words, immediately front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete given 3 optional parameters with full schema coverage. Effectively compensates for missing output schema by detailing specific rule types returned (widths, cropping, liability).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions and examples for each parameter. Description mentions 'path type and issue' which aligns with parameters but adds no additional semantic context or format constraints beyond what schema already provides. Baseline score appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get') + resource ('public rights of way obligations') + filtering dimensions ('by path type and issue'). Distinct from sibling tools like check_hedgerow_rules and get_common_land_rules through specific domain focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage pattern through 'by path type and issue' but lacks explicit guidance on when to use this versus the broader search_land_rules tool or prerequisites for queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tpo_rules
Get Tree Preservation Order rules. Returns consent requirements, exemptions, process, and penalties under TCPA 1990 Part VIII.
| Name | Required | Description | Default |
|---|---|---|---|
| scenario | No | Scenario (e.g. works, dead tree, conservation area, penalty) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
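A scenario-only call is enough here, since `jurisdiction` defaults to GB. An illustrative `tools/call` payload using one of the documented scenario examples:

```python
# Illustrative request for get_tpo_rules about works in a conservation area.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "get_tpo_rules",
        "arguments": {"scenario": "conservation area"},  # jurisdiction omitted: GB
    },
}
print(request["params"])
```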
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and successfully discloses what the tool returns (consent requirements, exemptions, process, penalties). However, it omits technical operational details like rate limits, caching behavior, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. First sentence establishes purpose, second details return value. Information is front-loaded and appropriately sized for a 2-parameter lookup tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates well for missing output schema by detailing return content (consent requirements, exemptions, process, penalties). Legal citation provides domain context. Minor gap in not mentioning that both parameters are optional (0 required params).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. The description implicitly supports the 'scenario' parameter by listing specific return types (penalties, exemptions) that align with scenario examples, but does not explicitly discuss parameter semantics or provide usage examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Get) and resource (Tree Preservation Order rules) with precise legal scope (TCPA 1990 Part VIII). Clearly distinguishes from siblings like check_hedgerow_rules and get_felling_licence_rules through its specific legal domain focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit guidance by citing TCPA 1990 Part VIII, indicating this is for TPO-specific queries, but lacks explicit when-to-use guidance comparing it to similar environmental planning tools like get_felling_licence_rules or check_hedgerow_rules.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sources
List all data sources with authority, URL, license, and freshness info.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Description Quality Score
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It partially compensates by disclosing the output structure (authority, URL, license, freshness fields), but fails to mention operational details like pagination, caching behavior, rate limits, or whether freshness values are calculated in real-time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence of ten words. It is front-loaded with the action verb and every word contributes specific information about either the operation or the returned data structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple zero-parameter tool without an output schema, the description is reasonably complete. It compensates for the missing output schema by enumerating the key fields returned (authority, URL, license, freshness), providing sufficient context for an agent to understand the utility of the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, the schema's description coverage is trivially 100%, so the baseline applies. The description does not need to explain parameter semantics, earning the default score for this case.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (List), resource (data sources), and specific fields returned (authority, URL, license, freshness). However, it does not explicitly differentiate from the sibling tool 'check_data_freshness', which also deals with freshness information, or clarify when to list sources versus querying specific rules.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not clarify the distinction between listing sources with freshness metadata versus actively 'checking' freshness with check_data_freshness, nor when to use this versus the about tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_land_rules
Full-text search across all land and woodland management rules. Use for broad queries about hedgerows, felling, SSSI, rights of way, common land, or planting.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default: 20, max: 50) | |
| query | Yes | Free-text search query | |
| topic | No | Filter by topic (hedgerow, felling, sssi, rights_of_way, common_land, planting) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
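A minimal sketch of how an agent might assemble a `tools/call` request for `search_land_rules` over the Streamable HTTP transport. The JSON-RPC framing follows the MCP convention; the argument names (`query`, `topic`, `limit`, `jurisdiction`) and the documented defaults and maximum come from the parameter table above, and the client-side clamping of `limit` is an assumption, not observed server behaviour.

```python
def build_search_request(query, topic=None, limit=20, jurisdiction="GB"):
    """Build a tools/call payload, clamping limit to the documented max of 50."""
    arguments = {
        "query": query,
        "jurisdiction": jurisdiction,
        "limit": min(limit, 50),  # schema states default 20, max 50
    }
    if topic is not None:
        # topic is an optional filter: hedgerow, felling, sssi,
        # rights_of_way, common_land, or planting
        arguments["topic"] = topic
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "search_land_rules", "arguments": arguments},
    }

req = build_search_request("removing a hedgerow over 20m", topic="hedgerow", limit=100)
print(req["params"]["arguments"]["limit"])  # clamped to 50
```

The sketch sends nothing over the network; it only shows the request shape an MCP client would POST to the server's URL.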
Tool Description Quality Score
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the 'full-text' search mechanism but lacks details about return format (what a 'result' contains), ranking behavior, or pagination beyond the limit parameter. Given no output schema exists, this is a moderate gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes capability, second defines usage context with specific examples. Every word earns its place and critical information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a search tool with 4 parameters and full schema coverage. The description covers the search scope and example topics adequately. A brief note about returned results would improve this given the lack of output schema, but it is sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100%, the description adds value by listing concrete topic examples (hedgerows, felling, SSSI, etc.) that map directly to the 'topic' parameter options, helping the agent understand valid query contexts beyond the abstract schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a 'Full-text search across all land and woodland management rules' with specific verb (search) and resource (rules). It effectively distinguishes from sibling tools like 'check_hedgerow_rules' or 'get_common_land_rules' by emphasizing the 'all' scope and 'full-text' capability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Use for broad queries' explicitly defines when to use this tool versus the specific 'get_' and 'check_' siblings. It implies that narrow/specific lookups should use those alternatives, though it doesn't explicitly state 'don't use for X, use Y instead'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Verify Ownership
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [
{
"email": "your-email@example.com"
}
]
}

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
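Before publishing, the file can be sanity-checked locally. The sketch below validates a candidate `glama.json` against the structure shown above; the `$schema` URL and `maintainers[].email` fields come from the example, while the loose "contains `@`" email check is an assumption, not Glama's actual verification rule.

```python
import json

def validate_glama_json(text):
    """Return a list of problems; an empty list means the file looks well-formed."""
    problems = []
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("unexpected or missing $schema")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty list")
    else:
        for m in maintainers:
            email = m.get("email", "") if isinstance(m, dict) else ""
            if "@" not in email:
                problems.append(f"maintainer entry missing a valid email: {m!r}")
    return problems

sample = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "you@example.com"}]}'
)
print(validate_glama_json(sample))  # []
```

Serving the validated file at `/.well-known/glama.json` on the server's domain is what allows the listing to be claimed.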
Sign in to verify ownership and:
- Control your server's listing on Glama, including description and metadata
- Receive usage reports showing how your server is being used
- Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.