Glama

Server Details

UK farm planning — crop rotation, gross margins, tax rules, APR, tenancy law

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Ansvar-Systems/uk-farm-planning-mcp
GitHub Stars: 0

Tool Descriptions: A

Average 3.7/5 across 11 of 11 tools scored.

Server Coherence: A
Disambiguation: 4/5

Tools are well-differentiated by domain (tax, tenancy, rotation, financials), though `get_apr_guidance` and `get_tax_rules` both touch on inheritance tax, and `check_data_freshness` overlaps slightly with `list_sources` in data metadata. The search tool provides broad discovery while specific getters handle structured retrieval, which is a clear pattern.

Naming Consistency: 4/5

Strong adherence to `verb_noun` pattern throughout (e.g., `calculate_break_even`, `get_gross_margins`, `list_sources`). The only deviation is `about`, which uses a standalone noun instead of a verb-prefixed action like `get_metadata` or `describe_server`.

Tool Count: 5/5

Eleven tools is an appropriate scope for UK farm planning, covering financial benchmarks, break-even calculations, crop rotation, tax rules, tenancy law, diversification guidance, and data provenance. No tool feels redundant, and the surface doesn't feel overloaded or incomplete.

Completeness: 4/5

Covers the core UK-specific farm planning lifecycle well: gross margins and break-even for financial viability, rotation guidance for agronomic planning, and comprehensive regulatory coverage (APR, tenancy types, tax rules, diversification). Minor gaps might include environmental stewardship schemes or livestock-specific planning, but the surface is complete for its apparent scope.

Available Tools

11 tools
about (A)

Get server metadata: name, version, coverage, data sources, and links.

Parameters (JSON Schema): none
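Since `about` takes no parameters, a minimal `tools/call` request for it can be sketched as below. The JSON-RPC framing follows the MCP specification; the `id` value is arbitrary and the snippet only builds the payload, it does not send it.

```python
import json

# Sketch of a JSON-RPC 2.0 payload for invoking the zero-parameter `about`
# tool via MCP's tools/call method; the id value is arbitrary.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "about", "arguments": {}},
}

print(json.dumps(payload, indent=2))
```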

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses what metadata fields are returned, but omits behavioral traits like safety (read-only), idempotency, or authentication requirements. Given the simple 'Get' operation and listed fields, this provides minimal viable behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficiently structured sentence of nine words. It is front-loaded with the action ('Get server metadata') followed by a colon-delimited list of specific return values. Every word earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (no parameters, simple concept) and lack of output schema, the description compensates adequately by enumerating the specific metadata fields returned. It is complete enough for selection and basic invocation, though safety annotations would have improved it to a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. According to the rubric, 0 parameters establishes a baseline score of 4. The description appropriately does not invent parameter semantics where none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('server metadata'), and explicitly enumerates the returned fields (name, version, coverage, data sources, links). It clearly distinguishes this introspection tool from its agricultural calculation siblings (calculate_break_even, get_tax_rules, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies this is an introspection tool for server discovery, it lacks explicit guidance on when to invoke it (e.g., 'Call this first to check server capabilities') or prerequisites. However, the distinct nature of the tool relative to its siblings makes the usage context inferable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_break_even (B)

Calculate break-even for a farm enterprise using gross margin benchmarks. Shows costs vs output and whether the enterprise is profitable at benchmark levels.

Parameters (JSON Schema)
yield (optional): Override yield per unit (not yet used in calculation)
enterprise (required): Enterprise name (e.g. winter wheat, dairy cow)
fixed_costs (optional): Fixed costs per unit (GBP). Default: 0
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
variable_costs (optional): Override variable costs per unit (GBP). Default: from database
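The description leaves the formula implicit; a plausible sketch, assuming break-even compares per-unit benchmark output against variable plus fixed costs. All figures below are made up for illustration and are not the server's actual data or logic.

```python
# Hypothetical break-even check in the spirit of the tool description:
# profitable when benchmark output covers variable plus fixed costs.
def break_even(output_gbp: float, variable_costs_gbp: float,
               fixed_costs_gbp: float = 0.0) -> dict:
    margin = output_gbp - variable_costs_gbp - fixed_costs_gbp
    return {"margin_gbp": round(margin, 2), "profitable": margin > 0}

# Illustrative per-hectare figures (made up):
result = break_even(output_gbp=1500.0, variable_costs_gbp=900.0,
                    fixed_costs_gbp=450.0)
# margin = 1500 - 900 - 450 = 150, so the enterprise clears break-even
```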
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries full burden. It discloses output content ('Shows costs vs output and whether... profitable') but omits operational details: error handling if enterprise unknown, that yield parameter is unused (noted only in schema), data freshness, or side effects. Adequate but incomplete behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence front-loads the core action (calculate break-even), second clarifies output format. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 100% input schema coverage and no output schema, the description adequately explains the conceptual output (profitability assessment). However, lacks coverage of error scenarios, data source limitations (e.g., yield override non-functional), or currency/GBP specification rationale given jurisdiction parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. Description adds context that calculation uses 'gross margin benchmarks' and mentions 'costs vs output' aligning with fixed/variable cost parameters, but does not add syntax guidance, example values, or clarify jurisdiction defaults beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (Calculate break-even), resource (farm enterprise), and method (using gross margin benchmarks). Distinguishes from get_gross_margins by focusing on break-even calculation rather than just retrieving benchmarks. Would be 5 if it explicitly clarified the analytical output vs data retrieval siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit when-to-use guidance or alternative selection criteria. Mentions 'gross margin benchmarks' implying relationship to get_gross_margins, but does not state whether this tool requires that one to be called first, or when to prefer this over simple margin lookup.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_data_freshness (A)

Check when data was last ingested, staleness status, and how to trigger a refresh.

Parameters (JSON Schema): none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Clarifies that it returns refresh instructions rather than performing the refresh itself ('how to trigger'), but omits safety profile (read-only vs mutation), rate limits, or scope (global vs per-source).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with three parallel clauses covering all key outputs (timestamp, status, action). No filler words; every phrase maps to distinct functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a zero-parameter metadata tool. Mentions conceptual outputs (ingestion time, staleness, refresh method) compensating for missing output schema. Would benefit from noting whether it checks all sources globally or requires prior context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present per schema, establishing baseline 4. Description correctly implies no filtering/selection parameters are needed for this global status check.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Check' with clear resources (ingestion time, staleness status, refresh triggers). Distinct from farm-guidance siblings like get_apr_guidance or calculate_break_even by focusing on data infrastructure/metadata rather than business logic.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context (checking staleness to determine if refresh needed) but lacks explicit when-to-use guidance such as 'Use before querying farm data' or prerequisites like requiring list_sources first. No mention of when NOT to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_apr_guidance (A)

Get Agricultural Property Relief (IHT) guidance: relief rates, occupation tests, clawback rules, farmhouse proportionality.

Parameters (JSON Schema)
scenario (optional): APR scenario (e.g. owner-occupier, tenanted, farmhouse, trust, diversified)
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It specifies content scope (what guidance covers) but omits safety characteristics (read-only status), return format, or caching behavior. 'Get' implies safe retrieval, but explicit confirmation is absent for a financial/regulatory tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence front-loaded with the core action. The colon-separated list precisely scopes the guidance domains without verbosity. Zero redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter lookup tool without output schema, the description adequately enumerates return content categories (relief rates, tests, rules). Slight gap in not specifying response format or noting UK-specific scope, but sufficient for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage with clear examples (scenario types, ISO format for jurisdiction). The description lists output content areas rather than parameter semantics, but given the schema is self-documenting, the baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with exact resource 'Agricultural Property Relief (IHT) guidance' and enumerates specific content areas (relief rates, occupation tests, clawback rules, farmhouse proportionality). The APR specificity naturally distinguishes it from sibling get_tax_rules (general tax) and get_diversification_guidance (operational farming).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through specificity (APR vs general tax), but provides no explicit when-to-use guidance comparing it to get_tax_rules or get_diversification_guidance. No prerequisites or exclusions are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_diversification_guidance (A)

Get farm diversification guidance: permitted development classes (Q, R, S), floor area limits, business rates, planning requirements.

Parameters (JSON Schema)
activity (optional): Activity (e.g. housing, farm shop, camping, solar, school)
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It successfully enumerates content domains covered (Q/R/S classes, rates, requirements) but omits behavioral traits like return format (structured vs. prose), error handling for invalid activities, or data freshness guarantees.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded action verb, zero redundancy. Specific planning class codes (Q, R, S) efficiently signal domain scope without excess verbiage. Appropriate density for the tool's complexity level.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 2-parameter tool with complete input schema. However, lacking output schema, the description could better prepare the agent by indicating whether guidance returns as structured permissions data, plain text, or regulatory citations. Lists output topics but not format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both 'activity' and 'jurisdiction' fully documented with examples and format specifications). Description adds no parameter semantics beyond schema, which is acceptable given complete schema coverage, meeting baseline expectations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Get') + resource ('farm diversification guidance') with explicit scope differentiation from siblings via mention of 'permitted development classes (Q, R, S)', 'business rates', and 'floor area limits'—clearly signals UK planning law specialization versus general search_farm_planning or tax/tenancy siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance provided. Does not clarify relationship to sibling tool 'search_farm_planning' (which could handle broader queries) or prerequisites like needing specific farm business details beforehand.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_gross_margins (B)

Get gross margin benchmarks for a farm enterprise: output, variable costs, GM, top/bottom quartile.

Parameters (JSON Schema)
year (optional): Year (e.g. 2024/25). Default: latest available.
enterprise (required): Enterprise name (e.g. winter wheat, dairy cow, lowland ewe)
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
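The components the tool returns follow the standard farm-management definition, gross margin = enterprise output minus variable costs (fixed costs excluded). A short sketch of that definition and of the top/bottom quartile bands, using made-up figures:

```python
import statistics

def gross_margin(output_gbp: float, variable_costs_gbp: float) -> float:
    # GM = output minus variable costs; fixed costs are excluded by definition
    return output_gbp - variable_costs_gbp

# Illustrative spread of per-hectare margins across farms (made-up data);
# the quartile cut points correspond to the bottom/top benchmark bands.
margins = [420.0, 510.0, 600.0, 680.0, 770.0]
q1, median, q3 = statistics.quantiles(margins, n=4)
```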
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry the full burden of behavioral disclosure. It successfully describes the output data structure (top/bottom quartile benchmarks), but fails to indicate safety properties (read-only status), caching behavior, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose followed by a colon-delimited list of return values. Every word earns its place; there is no redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description effectively compensates by enumerating the specific benchmark components returned (output, variable costs, GM, quartiles). With simple parameter types and full schema coverage, this provides adequate completeness for a data retrieval tool, though data source or freshness context is absent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'farm enterprise' which aligns with the enterprise parameter, but adds no semantic details about the year format (e.g., fiscal vs calendar) or jurisdiction defaults beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (Get) and resource (gross margin benchmarks for a farm enterprise) and specifies the data components returned (output, variable costs, GM, quartiles). However, it does not explicitly differentiate from the sibling tool 'calculate_break_even', which also handles financial calculations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'calculate_break_even' or 'search_farm_planning'. It omits prerequisites, scope limitations, or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_rotation_guidance (A)

Get crop rotation guidance: suitability, disease breaks, blackgrass risk, yield impact. Query a single crop or a pair (e.g. "winter wheat,oilseed rape").

Parameters (JSON Schema)
crops (required): Crop name or comma-separated pair (e.g. "winter wheat,oilseed rape")
soil_type (optional): Soil type for context (optional, not yet filtered)
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
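The single-crop-or-pair format of the `crops` argument can be illustrated with a small parser. This is a hypothetical client-side helper written against the example in the description, not the server's own code:

```python
# Hypothetical parser for the crops argument: a single crop name or a
# comma-separated pair, per the tool description's example.
def parse_crops(crops: str) -> list[str]:
    parts = [c.strip() for c in crops.split(",") if c.strip()]
    if not 1 <= len(parts) <= 2:
        raise ValueError("crops must be a single crop or a pair")
    return parts

pair = parse_crops("winter wheat,oilseed rape")
single = parse_crops("winter wheat")
```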
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses functional output categories (the four guidance types) but omits operational traits like safety profile, idempotency, error conditions, or rate limits. The 'Get' prefix implies read-only behavior but this is not explicitly confirmed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences with zero waste. First sentence front-loads the tool's purpose and return categories; second sentence immediately addresses the critical query pattern (single vs. pair). Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Absence of output schema is partially compensated by enumerating the four guidance categories returned. All three parameters are well-documented in schema. Could improve by noting data limitations (e.g., the soil_type parameter is noted as 'not yet filtered' in schema but description ignores this constraint).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage establishing baseline 3. The description adds value by providing a concrete query example ('winter wheat,oilseed rape') that clarifies the comma-separated pair format for the crops parameter, aiding correct invocation beyond the schema's basic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' + resource 'crop rotation guidance' with explicit scope differentiation via listed metrics (suitability, disease breaks, blackgrass risk, yield impact). These specific agricultural factors clearly distinguish it from sibling tools like get_diversification_guidance or get_apr_guidance.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context through specific rotation-focused metrics (disease breaks, blackgrass risk), suggesting when to use it for rotation planning. However, lacks explicit when-not-to-use guidance or named alternatives among the sibling planning tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_tax_rules (B)

Get agricultural tax rules by topic: Making Tax Digital, farmers averaging, capital allowances, VAT, partnership, inheritance.

Parameters (JSON Schema)
topic (required): Tax topic keyword (e.g. MTD, averaging, capital allowances, VAT)
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It enumerates supported tax topics but fails to disclose behavioral traits like error handling (what if topic not found?), data volatility, authentication requirements, or whether results are cached.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with action verb. The colon-separated topic list efficiently conveys supported values without waste. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple 2-parameter lookup tool, covering the domain (agricultural tax) and specific topics. However, with no output schema and no annotations, it should ideally describe the return format or structure to be complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds value by expanding 'MTD' to 'Making Tax Digital' and adding examples ('partnership', 'inheritance') not present in the schema's 'e.g.' list, clarifying valid topic values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Get') and resource ('agricultural tax rules'). The enumerated list of specific topics (Making Tax Digital, farmers averaging, etc.) clarifies scope, though it doesn't explicitly differentiate from sibling tools like get_tenancy_rules.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit guidance on when to use this versus alternatives (e.g., when to use this vs. get_tenancy_rules or general search). The topic list implies coverage but doesn't constitute usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_tenancy_rules (A)

Get agricultural tenancy rules: AHA 1986 (lifetime security, succession) vs ATA 1995 FBT (fixed term). Topics: succession, rent, compensation, termination.

Parameters (JSON Schema)
topic (optional): Topic (e.g. succession, rent, compensation, termination)
jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
tenancy_type (optional): AHA_1986 or ATA_1995
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It successfully explains the semantic difference between tenancy types (lifetime security vs fixed term), but fails to disclose operational traits like data source, freshness guarantees, output format, or whether results are canonical legal text or summaries.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two dense, front-loaded sentences with zero waste. First sentence establishes scope and key distinction between tenancy types; second lists queryable topics. Perfect information density for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a lookup tool with fully-documented input schema. Captures the essential legal domain context (UK agricultural tenancies). Minor gap: could clarify return format (text vs structured) given the absence of an output schema, but the topic coverage is comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage, the description adds significant value by explaining what AHA_1986 (lifetime security, succession) and ATA_1995 (fixed term) actually mean in practice—critical domain context not captured in the raw enum codes. However, it omits mention of the jurisdiction parameter's GB default.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Get' with specific resource 'agricultural tenancy rules'. Explicitly distinguishes the two UK tenancy regimes (AHA 1986 vs ATA 1995/FBT) and their characteristics, clearly differentiating this from sibling tools like get_tax_rules or get_apr_guidance which handle different legal domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lists applicable topics (succession, rent, compensation, termination), implying when to use the tool, but lacks explicit guidance on when to choose between AHA_1986 and ATA_1995 tenancy types, or when to use this versus search_farm_planning for broader queries. No mention of prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_sources (Grade: A)

List all data sources with authority, URL, license, and freshness info.

Parameters (JSON Schema)

No parameters
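For context, a parameter-less MCP tool like list_sources is invoked with an empty arguments object. A minimal sketch of the JSON-RPC request an MCP client would send (the tool name comes from this page; the framing follows the MCP `tools/call` convention, and the helper name is hypothetical):

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request as used by MCP clients."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# list_sources takes no parameters, so arguments is an empty object.
payload = build_tool_call("list_sources", {})
print(json.dumps(payload, indent=2))
```

The same helper works for any of the server's tools; only the name and arguments change.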

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adequately describes the output structure (what fields are returned), but lacks information about safety traits (read-only status), caching behavior, rate limits, or error conditions that would help an agent understand operational constraints.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that immediately communicates the verb, resource, and return value structure. There is no redundancy or wasted language; every word earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has zero parameters and no output schema, the description appropriately compensates by listing the specific fields returned (authority, URL, license, freshness). For a simple listing operation, this is nearly complete, though mentioning caching behavior or data volume expectations would improve it further.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. Per evaluation guidelines, tools with no parameters receive a baseline score of 4, as there are no parameter semantics to clarify beyond what the empty schema already indicates.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List all data sources') and specifies exactly what metadata fields are returned ('authority, URL, license, and freshness info'). However, it does not explicitly differentiate from the sibling tool 'check_data_freshness', which also concerns data freshness.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. Given the sibling 'check_data_freshness' also deals with freshness metadata, the lack of explicit when-to-use guidance could lead to agent confusion about which tool to select for freshness-related queries.

search_farm_planning (Grade: A)

Full-text search across all farm planning data: rotation, margins, tax, APR, tenancy, diversification.

Parameters (JSON Schema)

Name | Required | Description | Default
limit | No | Max results (max: 50) | 20
query | Yes | Free-text search query | (none)
topic | No | Filter by topic (e.g. rotation, tax, tenancy, diversification) | (none)
jurisdiction | No | ISO 3166-1 alpha-2 code | GB
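The constraints above can be enforced client-side before invoking the tool. A minimal sketch, assuming the documented bounds (limit default 20, cap 50; jurisdiction defaulting to GB); the helper name is hypothetical, and clamping rather than rejecting an out-of-range limit is an assumption, since the server may instead return an error:

```python
from typing import Optional

def build_search_args(query: str, limit: int = 20,
                      topic: Optional[str] = None,
                      jurisdiction: str = "GB") -> dict:
    """Assemble arguments for search_farm_planning, enforcing documented bounds."""
    if not query.strip():
        raise ValueError("query is required and must be non-empty")
    # Schema caps limit at 50; clamping here is a client-side choice.
    limit = max(1, min(limit, 50))
    if len(jurisdiction) != 2 or not jurisdiction.isalpha():
        raise ValueError("jurisdiction must be an ISO 3166-1 alpha-2 code")
    args = {"query": query, "limit": limit,
            "jurisdiction": jurisdiction.upper()}
    if topic is not None:
        args["topic"] = topic  # e.g. "rotation", "tax", "tenancy"
    return args

print(build_search_args("APR relief on let land", limit=80, topic="tax"))
```

An agent that validated this way would never trip the server's range checks, though it still depends on the server's (undocumented) result ordering.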
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Mentions 'full-text' mechanism and data scope, but omits critical behavioral details: return format/structure, whether results are ranked by relevance, pagination behavior beyond the limit parameter, or any permission requirements.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence efficiently front-loaded with action verb. Domain list provides maximum information density without redundancy. Every element earns its place; no wasted words.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for basic invocation given 100% schema coverage, but insufficient for a search tool lacking output schema and annotations. Missing: result format description, what constitutes a 'match', or result ordering behavior. Minimum viable but clear gaps remain.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description lists searchable domains (rotation, tax, etc.) which reinforces the 'topic' parameter examples, but adds no semantic depth to 'query', 'limit', or 'jurisdiction' beyond what the schema already provides.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Full-text search') and resource ('farm planning data') while explicitly listing covered domains (rotation, margins, tax, APR, tenancy, diversification). Clearly distinguishes from sibling 'get_' tools by indicating cross-cutting search capability vs. specific topic retrieval.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage through domain listing (broad search across multiple topics), but lacks explicit when-to-use guidance comparing it to specific sibling tools like get_tax_rules or get_rotation_guidance. No mention of when to prefer specific getters over this search.

Verify Ownership

Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [
    {
      "email": "your-email@example.com"
    }
  ]
}

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
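Before claiming, it can be worth checking that the published file parses and has the expected shape. A minimal sketch, assuming only the structure shown above (the `$schema` URL and field names come from the snippet; the function name and validation rules are illustrative, not Glama's actual verification logic):

```python
import json

def validate_glama_manifest(raw: str) -> list:
    """Check a /.well-known/glama.json document against the structure above.

    Returns the maintainer email addresses if the shape is valid.
    """
    doc = json.loads(raw)
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        raise ValueError("maintainers must be a non-empty list")
    emails = []
    for entry in maintainers:
        email = entry.get("email", "")
        if "@" not in email:
            raise ValueError("invalid maintainer email: %r" % email)
        emails.append(email)
    return emails

manifest = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{"email": "your-email@example.com"}]
}"""
print(validate_glama_manifest(manifest))
```

Remember that the actual verification also matches the email against your Glama account, which no local check can replicate.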
