Glama

uk-environmental-compliance

Server Details

UK environmental compliance for farming — NVZ rules, spreading windows, SSAFO

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Ansvar-Systems/uk-environmental-compliance-mcp
GitHub Stars: 0

Tool Descriptions: Grade B

Average 3.3/5 across 11 of 11 tools scored.

Server Coherence: Grade A
Disambiguation: 4/5

Tools target distinct environmental domains (water abstraction, NVZ, buffer strips, storage), though 'check_nvz_rules' and 'get_spreading_windows' both involve manure spreading periods which could cause momentary hesitation. The search tool clearly complements rather than duplicates the specific getters.

Naming Consistency: 3/5

Uses consistent snake_case, but mixes 'check_' and 'get_' prefixes for similar retrieval operations (e.g., 'check_buffer_strip_rules' vs 'get_abstraction_rules'). 'about' breaks the verb-noun pattern, though this is common for metadata tools.

Tool Count: 4/5

11 tools is appropriate for the agricultural compliance domain, covering specific rule sets (NVZ, SSAFO, EIA) plus operational metadata. Slightly weighted toward metadata (3/11 tools), but each specific environmental domain earns its place.

Completeness: 4/5

Covers the core lifecycle of agricultural environmental compliance: screening (EIA), nutrient management (NVZ, spreading, storage), water protection (abstraction, buffer strips), and pollution prevention. The catch-all search tool mitigates minor gaps in specific regulations (e.g., pesticides).

Available Tools

11 tools
about (Grade A)

Get server metadata: name, version, coverage, data sources, and links.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and discloses what data is retrieved (name, version, coverage, data sources, links). However, it omits safety characteristics, side effects, or return format details that would be useful for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is front-loaded with the action verb and efficiently enumerates the returned metadata fields without waste. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple metadata tool with no parameters and no output schema, the description is complete. It compensates for the missing output schema by explicitly listing the fields returned (name, version, coverage, data sources, and links).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, which establishes a baseline score of 4 per the evaluation rules. No parameter description is necessary or present.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('server metadata') and clearly distinguishes this from siblings by identifying it as the metadata/discovery tool rather than an environmental regulation query tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies this is for discovering server capabilities by listing returned fields (name, version, coverage, etc.), it lacks explicit guidance on when to use this versus operational tools or prerequisites for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_buffer_strip_rules (Grade C)

Check buffer strip requirements for watercourses.

Parameters (JSON Schema)
activity (optional): Activity near watercourse
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
watercourse_type (optional): Watercourse type (e.g. main river, ordinary watercourse)
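Since every parameter here is optional, even an empty arguments object is schema-valid. A minimal sketch (with invented values) of two legal call payloads:

```python
# Both payloads satisfy the check_buffer_strip_rules input schema:
# all three parameters are optional, so {} is a legal call.
minimal = {}
full = {
    "activity": "spreading fertiliser",         # invented example value
    "watercourse_type": "ordinary watercourse",
    "jurisdiction": "GB",                       # the documented default
}
for payload in (minimal, full):
    print(payload)
```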
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure but provides none. It does not indicate whether this is a read-only operation, what format results take, whether it requires specific permissions, or if there are rate limits. The term 'Check' implies safety but is not explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficiently structured with the verb front-loaded. While extremely brief (six words), it contains no redundant or wasteful language. However, the brevity approaches under-specification for a domain-specific regulatory tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the specialized domain (environmental/agricultural buffer strip regulations) and absence of an output schema, the description is incomplete. It fails to explain what 'buffer strips' are, what the returned requirements look like, or how to interpret the results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing baseline documentation for all three parameters. The description adds no additional parameter context (e.g., valid values for watercourse_type, examples of activities, or the GB default implication), but meets the baseline expectation given the comprehensive schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Check') and resource ('buffer strip requirements for watercourses'), specifying the domain. However, it fails to differentiate from sibling environmental rule tools like check_nvz_rules or get_abstraction_rules, leaving ambiguity about which specific regulatory domain this covers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like search_environmental_rules or check_nvz_rules. No prerequisites or conditions are mentioned, despite the specialized agricultural/environmental context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_data_freshness (Grade A)

Check when data was last ingested, staleness status, and how to trigger a refresh.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It adds valuable context about return values (ingestion timestamp, staleness status, refresh methodology) but fails to clarify safety properties (read-only vs. destructive), performance characteristics, or whether 'staleness' uses fixed or configurable thresholds.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficiently structured sentence (12 words) that front-loads the action verb and packs three distinct data points into zero redundancy. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no input parameters, no output schema), the description adequately explains what information is returned. It would benefit from defining staleness criteria or noting if the refresh trigger is a URL, command, or boolean flag, but it covers the essential contract.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters (empty schema). Per evaluation rules, zero-parameter tools receive a baseline score of 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Check') and clearly identifies three distinct information resources: last ingestion time, staleness status, and refresh triggers. It effectively distinguishes the tool from environmental-rule siblings (e.g., check_buffer_strip_rules, get_abstraction_rules) by focusing on data/metadata health rather than regulatory content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (checking freshness before relying on data, learning how to refresh), but provides no explicit guidance on when to use this versus alternatives, thresholds for staleness concerns, or prerequisites for invoking a refresh.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_nvz_rules (Grade B)

Check Nitrate Vulnerable Zone rules for a farming activity. Returns closed periods, nitrogen limits, and conditions.

Parameters (JSON Schema)
season (optional): Month or season to check (e.g. November, March)
activity (required): Farming activity (e.g. spreading slurry, applying fertiliser)
soil_type (optional): Soil type (e.g. sandy, shallow, clay)
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
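To make the schema concrete, here is a sketch of an MCP `tools/call` request for this tool, expressed as a Python dict. The argument values are invented for illustration; only `activity` is required.

```python
import json

# Hypothetical JSON-RPC 2.0 "tools/call" request for check_nvz_rules.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_nvz_rules",
        "arguments": {
            "activity": "spreading slurry",  # required
            "season": "November",            # optional
            "soil_type": "sandy",            # optional
            "jurisdiction": "GB",            # optional; GB is the default
        },
    },
}
print(json.dumps(request, indent=2))
```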
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It partially compensates by disclosing return values ('Returns closed periods, nitrogen limits, and conditions'), which is useful given the lack of output schema. However, it omits other behavioral traits like rate limits, data freshness, whether the query is synchronous, or if any logging occurs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first sentence front-loads the core purpose (checking NVZ rules), and the second immediately clarifies the return payload. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 4-parameter input with complete schema documentation and no output schema, the description adequately compensates by outlining what the tool returns. It successfully conveys the tool's function and output type, though it could be improved by mentioning error handling or data source limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description mentions 'farming activity' which aligns with the required 'activity' parameter, but adds no syntax guidance, format examples, or semantic relationships between parameters (e.g., how season and soil_type interact) beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Check[s] Nitrate Vulnerable Zone rules' (specific verb + domain-specific resource) and mentions it applies to 'a farming activity.' It implicitly distinguishes from siblings like check_buffer_strip_rules or search_environmental_rules by specifying the NVZ domain, though it doesn't explicitly state when to prefer this over the general search tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like search_environmental_rules (which could also return NVZ rules) or get_spreading_windows (which overlaps with 'closed periods'). It lacks prerequisites, exclusions, or selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_abstraction_rules (Grade C)

Get water abstraction licensing rules by source type and volume.

Parameters (JSON Schema)
source_type (optional): Water source (e.g. surface water, groundwater, tidal)
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
volume_m3_per_day (optional): Planned abstraction volume in cubic metres per day
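A sketch of an arguments payload, with invented values; the daily volume is presumably what the tool compares against licensing thresholds.

```python
# Hypothetical arguments for get_abstraction_rules. volume_m3_per_day is
# numeric (cubic metres per day); a positive value is the sensible input.
arguments = {
    "source_type": "groundwater",
    "volume_m3_per_day": 25.0,
    "jurisdiction": "GB",
}
assert arguments["volume_m3_per_day"] > 0
print(arguments)
```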
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to specify whether this performs a lookup or calculation, what the returned rules contain (thresholds, restrictions, permit requirements), or error handling behavior. The 'Get' verb implies read-only access but this is not confirmed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficiently structured with no wasted words, front-loading the action and resource. However, extreme brevity becomes a liability given the lack of annotations and output schema, leaving the description underspecified rather than elegantly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of annotations and output schema, the description should explain what 'rules' entails (regulatory thresholds, licensing conditions, restrictions) and hint at return structure. It also fails to clarify relationships with sibling tools like 'list_sources' or 'search_environmental_rules', leaving selection criteria ambiguous.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, establishing a baseline of 3. The description mentions 'source type and volume' acknowledging two parameters but adds no semantic context beyond the schema (e.g., valid ranges for volume, behavior when jurisdiction defaults to GB).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (Get), resource (water abstraction licensing rules), and key filtering dimensions (source type and volume). However, it does not explicitly distinguish this tool from the sibling 'search_environmental_rules' or explain when to prefer this specific endpoint over the generic search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'search_environmental_rules' or 'check_buffer_strip_rules'. It omits prerequisites, required vs optional parameter guidance, and success/failure conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_eia_screening (Grade C)

Check EIA screening thresholds for agricultural projects.

Parameters (JSON Schema)
area_ha (optional): Proposed project area in hectares
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
project_type (required): Project type (e.g. uncultivated land, livestock, irrigation)
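As an illustration (values invented), only `project_type` is required; `area_ha` supplies the hectare figure the screening thresholds are presumably compared against.

```python
# Hypothetical get_eia_screening arguments.
arguments = {
    "project_type": "uncultivated land",  # required
    "area_ha": 3.5,                       # optional, hectares
}
print(arguments)
```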
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description bears full responsibility for behavioral disclosure but offers almost none. It does not clarify what 'Check' entails (validation, calculation, threshold comparison), whether the operation is read-only, potential error conditions, or the nature of the returned data (compliance status vs. threshold values).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence of seven words with the action verb front-loaded. There is no redundancy or wasted text; every word contributes to the core purpose statement.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the regulatory complexity of Environmental Impact Assessment screening and the absence of both annotations and output schema, the description is insufficient. It fails to define 'EIA', explain what constitutes a screening threshold, or describe the expected return value format for an agent attempting to interpret results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, documenting all three parameters ('area_ha', 'jurisdiction', 'project_type') completely. The description adds no additional semantic context (e.g., valid ranges for hectares, jurisdiction limitations), warranting the baseline score for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the core action ('Check') and specific resource ('EIA screening thresholds') for a defined scope ('agricultural projects'). While it specifies the regulatory domain (EIA), it does not explicitly differentiate from siblings like 'search_environmental_rules' or other 'check_' tools that also query agricultural regulations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the numerous sibling tools (e.g., 'check_nvz_rules', 'search_environmental_rules'). There is no mention of prerequisites, required context, or exclusion criteria for invoking this specific endpoint.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pollution_prevention (Grade C)

Get pollution prevention guidance for a farming activity.

Parameters (JSON Schema)
activity (required): Farming activity (e.g. silage making, sheep dipping)
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
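A minimal sketch: only `activity` is required, and `jurisdiction` falls back to GB per the schema. The example value is invented.

```python
# Smallest valid get_pollution_prevention payload.
arguments = {"activity": "silage making"}
print(arguments)
```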
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure but fails to specify the response format (structured data vs. prose), whether this requires authentication, rate limits, or what happens if an invalid activity is specified. While 'Get' implies read-only, safety characteristics and error behaviors are undocumented.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single-sentence description is efficiently structured with no redundant words, front-loading the action ('Get') and scope ('pollution prevention guidance'). However, extreme brevity becomes a limitation given the lack of annotations and output schema, suggesting the description could carry more information without violating conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich ecosystem of sibling tools (10+ environmental rule-checkers), the description lacks necessary context on how this 'guidance' differs from 'rules' or 'requirements' returned by siblings. With no output schema provided, the description should explain what form the guidance takes and its relationship to specific regulatory tools, but it does neither.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing complete documentation for both 'activity' (with examples) and 'jurisdiction' (ISO code format). The description mentions 'farming activity' which aligns with the schema but does not add semantic context beyond what the schema already provides, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'pollution prevention guidance' for 'farming activity' using specific verb-object structure. However, it does not explicitly differentiate from siblings like 'check_nvz_rules' or 'search_environmental_rules' which also relate to pollution compliance, leaving some ambiguity about whether this returns general guidance versus specific regulatory requirements.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the numerous sibling rule-checking tools (e.g., 'check_buffer_strip_rules', 'get_storage_requirements'). It omits prerequisites, exclusions, or scenarios where this guidance tool is preferred over specific regulatory checks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_spreading_windows (Grade A)

Get open and closed spreading periods for a manure type on a land type.

Parameters (JSON Schema)
nvz (optional): Whether the land is in an NVZ (default: true)
land_type (required): Land type (e.g. arable, grassland, sandy, shallow)
manure_type (required): Manure type (e.g. slurry, poultry manure, manufactured fertiliser)
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
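A sketch of a client-side helper that mirrors the schema defaults, so a caller can omit `nvz` and `jurisdiction`; the function name and values are illustrative, not part of the server.

```python
def build_spreading_args(land_type, manure_type, nvz=True, jurisdiction="GB"):
    """Build get_spreading_windows arguments, mirroring the schema
    defaults (nvz: true, jurisdiction: GB) for omitted optional fields."""
    return {
        "land_type": land_type,
        "manure_type": manure_type,
        "nvz": nvz,
        "jurisdiction": jurisdiction,
    }

args = build_spreading_args("grassland", "slurry")
print(args)
```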
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'open and closed' periods (indicating the response contains both permitted and prohibited windows), but fails to disclose safety characteristics, idempotency, potential errors, or that this is a read-only lookup operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence of 11 words with no filler. It is front-loaded with the action verb and every word contributes to defining the tool's scope. No restructuring or shortening would improve clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of both annotations and output schema, the description adequately covers the core retrieval purpose but leaves gaps regarding return value structure and domain-specific behavioral constraints. It is sufficient for tool selection but would benefit from noting the agricultural domain context or data freshness implications given the check_data_freshness sibling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all four parameters (manure_type, land_type, nvz, jurisdiction) adequately documented in the schema. The description references the two required parameters in its text but adds no additional semantic context—such as explaining that NVZ stands for Nitrate Vulnerable Zone or that jurisdiction defaults to GB—beyond what the schema already provides. Baseline score appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and clearly identifies the resource ('open and closed spreading periods') along with the key input dimensions ('manure type', 'land type'). It effectively distinguishes from siblings like check_nvz_rules or get_storage_requirements by focusing specifically on temporal spreading windows rather than general compliance or infrastructure requirements.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use the tool (when needing to determine spreading periods for specific manure and land types), but provides no explicit guidance on when NOT to use it, prerequisites, or how it relates to the check_nvz_rules sibling despite sharing the NVZ parameter context. Usage must be inferred from the parameter schema.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_storage_requirements (Grade C)

Get SSAFO storage requirements for a material.

Parameters (JSON Schema)
volume (optional): Planned storage volume (for context)
material (required): Material to store (e.g. slurry, silage, fuel oil)
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
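An illustrative payload with a sanity check on the jurisdiction format noted in the schema (ISO 3166-1 alpha-2, two letters); all values are invented.

```python
# Hypothetical get_storage_requirements arguments; "material" is required.
arguments = {
    "material": "slurry",
    "volume": "1500 m3",   # free-form context per the schema
    "jurisdiction": "GB",
}
# ISO 3166-1 alpha-2 codes are two letters.
assert len(arguments["jurisdiction"]) == 2 and arguments["jurisdiction"].isalpha()
print(arguments)
```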
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to explain what 'SSAFO' stands for, what format requirements are returned (text, structured data, compliance codes), or whether the tool validates material types against the jurisdiction. It implies a read operation but lacks specifics on error handling or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single efficient sentence with no wasted words, properly front-loaded with the action verb. However, at only seven words, it is arguably undersized for a regulatory compliance tool with three parameters and no output schema explanation, leaving significant contextual gaps.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of agricultural regulatory compliance and the lack of output schema or annotations, the description is incomplete. It fails to explain the SSAFO regulatory domain, what 'requirements' entail (capacity, construction standards, siting rules), or how to interpret results, leaving agents without sufficient context to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all three parameters (material, volume, jurisdiction) fully documented in the schema. The tool description names 'material' but adds no semantic value regarding parameter formats, valid values, or relationships between parameters (e.g., that jurisdiction defaults to GB). Baseline 3 is appropriate given the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('SSAFO storage requirements'), clearly indicating it retrieves regulatory storage requirements for materials like slurry or silage. However, it does not explicitly differentiate from sibling tools like get_pollution_prevention or check_nvz_rules, and it assumes the user already knows that SSAFO refers to the UK Silage, Slurry and Agricultural Fuel Oil regulations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus siblings such as get_pollution_prevention or check_buffer_strip_rules. It does not mention prerequisites, regulatory scope, or exclusion criteria for when this tool is inappropriate.

list_sources (Grade: A)

List all data sources with authority, URL, license, and freshness info.

Parameters (JSON Schema)

No parameters

Behavior 3/5

With no annotations provided, the description carries the full disclosure burden. It compensates partially by revealing what fields are returned (authority, URL, license, freshness), substituting for the missing output schema. However, it fails to disclose read-only safety, pagination behavior, or performance characteristics that annotations would typically cover.

Conciseness 5/5

The description is a single, dense sentence of 11 words that immediately conveys the verb, scope, and return value structure. Every word earns its place; no refactoring could improve information density.

Completeness 4/5

Given the tool's simplicity (zero parameters, simple list operation) and lack of output schema, the description adequately compensates by detailing the returned metadata fields. It loses a point for failing to indicate the safe, non-destructive nature of the operation in the absence of readOnlyHint annotations.

Parameters 4/5

The input schema contains zero parameters, establishing a baseline of 4. The description correctly omits parameter discussion since none exist, neither adding confusion nor unnecessary verbosity.

Purpose 4/5

The description clearly states the action ('List') and resource ('data sources'), and distinguishes this from siblings like check_data_freshness by specifying the metadata fields returned (authority, URL, license, freshness). It would be a 5 if it explicitly clarified this is a catalog/metadata tool versus the query-specific siblings.

Usage Guidelines 2/5

The description provides no guidance on when to use this tool versus alternatives (e.g., when to use list_sources vs check_data_freshness or get_storage_requirements). There are no stated prerequisites, exclusion criteria, or workflow positioning.

search_environmental_rules (Grade: A)

Full-text search across all environmental compliance data: NVZ rules, storage requirements, buffer strips, abstraction, pollution prevention, EIA screening.

Parameters (JSON Schema)

Name | Required | Description | Default
limit | No | Max results (max: 50) | 20
query | Yes | Free-text search query |
topic | No | Filter by topic (e.g. nvz, storage, buffer_strips, abstraction, pollution, eia) |
jurisdiction | No | ISO 3166-1 alpha-2 code | GB
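Because the schema caps results at 50, a client may want to clamp the limit before sending. The helper below is a hypothetical sketch (the function name and argument values are not from the server; only the parameter names and bounds come from the schema above):

```python
import json

# Illustrative tools/call builder for search_environmental_rules.
# Clamps "limit" client-side to the schema's stated maximum of 50.
def build_search_request(query, topic=None, limit=20, jurisdiction="GB"):
    arguments = {
        "query": query,
        "limit": min(limit, 50),    # schema max is 50
        "jurisdiction": jurisdiction,
    }
    if topic is not None:
        arguments["topic"] = topic  # e.g. "nvz", "storage", "buffer_strips"
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/call",
        "params": {"name": "search_environmental_rules", "arguments": arguments},
    }

req = build_search_request("closed period for slurry", topic="nvz", limit=100)
print(json.dumps(req))
```

Here an over-large `limit=100` is silently reduced to 50 rather than risking a server-side validation error.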
Behavior 2/5

No annotations are provided, so the description carries the full burden. While it discloses the data scope (NVZ, storage, etc.), it fails to describe search behavior (fuzzy vs exact matching, relevance ranking), return structure, or result cardinality beyond the limit parameter.

Conciseness 5/5

Single sentence efficiently structured with the core action front-loaded ('Full-text search'), followed by scope clarification ('across all environmental compliance data'), and specific domain enumeration. No redundant or wasted words.

Completeness 3/5

Given 100% schema coverage and no output schema, the description adequately covers input semantics but lacks return value documentation. For a search tool with no annotations, it should ideally describe result format or search ranking behavior to be fully complete.

Parameters 4/5

With 100% schema coverage, the baseline is 3. The description adds significant value by enumerating specific topic filter values (nvz, storage, buffer_strips, abstraction, pollution, eia) that map to the 'topic' parameter, providing concrete examples beyond the schema's generic description.

Purpose 5/5

The description clearly states 'Full-text search across all environmental compliance data' with a specific verb and resource. It distinguishes from siblings (check_nvz_rules, get_storage_requirements, etc.) by emphasizing the cross-cutting search scope versus targeted retrieval.

Usage Guidelines 3/5

The description implies broad search functionality by stating 'across all' and listing domains covered by sibling tools, suggesting this is for comprehensive discovery. However, it lacks explicit guidance on when to use this versus the specific check/get tools for targeted rule validation.

Verify Ownership

Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [
    {
      "email": "your-email@example.com"
    }
  ]
}

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
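Before publishing, you can sanity-check the manifest locally. The sketch below assumes only the structure shown above; the helper name is hypothetical and this is not an official validator:

```python
import json

# Minimal local check for a /.well-known/glama.json manifest:
# parse it and confirm a maintainer entry carries the claiming email.
def check_connector_manifest(text, expected_email):
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    if not maintainers:
        return False
    # The claiming email must match the one on your Glama account.
    return any(m.get("email") == expected_email for m in maintainers)

manifest = """
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{"email": "your-email@example.com"}]
}
"""
print(check_connector_manifest(manifest, "your-email@example.com"))  # True
```

A mismatched email returns False, mirroring the verification rule stated above.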
