
Server Details

UK farm subsidy schemes — SFI, Countryside Stewardship, payment rates, eligibility

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Ansvar-Systems/uk-farm-subsidies-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade B)

Average 3.3/5 across 10 of 10 tools scored.

Server Coherence (Grade A)
Disambiguation: 4/5

Tools are largely distinct with clear purposes across scheme discovery, eligibility checking, compliance, and payments. Minor overlap exists between search_schemes (which also searches guidance) and search_application_guidance, and between list_scheme_options and get_payment_rates regarding payment data, but descriptions clarify distinct primary use cases.

Naming Consistency: 4/5

Nine of ten tools follow a consistent verb_noun pattern (check_, get_, list_, search_). The 'about' tool breaks this convention as a standalone noun. All use snake_case consistently.

Tool Count: 5/5

Ten tools provide comprehensive coverage of the UK farm subsidies domain (schemes, eligibility, compliance, rates, applications, metadata) without bloat. Each tool serves a specific information need appropriate for querying subsidy data.

Completeness: 4/5

Strong coverage for a read-only subsidy information server: scheme discovery, eligibility matching, payment rates, cross-compliance requirements, and application guidance are all addressed. Minor gap: no specific 'get' retrieval for individual application guidance items (only search), but search functionality likely suffices.

Available Tools

10 tools
about (Grade A)

Get server metadata: name, version, coverage, data sources, and links.

Parameters (JSON Schema): none
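MCP tool invocations travel as JSON-RPC 2.0 `tools/call` requests. A minimal sketch of the wire payload for this zero-parameter tool (the `id` value is arbitrary; the response shape is not shown because the server publishes no output schema):

```python
import json

# JSON-RPC 2.0 envelope for an MCP "tools/call" request.
# The `about` tool takes no parameters, so `arguments` is empty.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "about", "arguments": {}},
}

print(json.dumps(request))
```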

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully enumerates the returned fields (name, version, coverage, data sources, links), compensating for the missing output schema. However, it omits safety characteristics (read-only status) and performance traits (caching).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, information-dense sentence. Every element serves a purpose: the verb establishes the action, the resource identifies the domain, and the colon-delimited list specifies the return payload structure with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters, no nested objects) and lack of output schema, the description adequately compensates by detailing the return values. It successfully communicates what the caller receives, though it could optionally mention the response format (e.g., JSON object).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. Per evaluation guidelines, this establishes a baseline of 4. The description appropriately does not invent parameter semantics where none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb 'Get' and clearly identifies the resource as 'server metadata'. It distinguishes itself from the nine sibling tools (which handle business logic like compliance and payments) by focusing on introspection/server information rather than application data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While there are no explicit 'when to use' instructions, the tool's purpose as a metadata endpoint is self-evident compared to its domain-specific siblings. However, it lacks explicit guidance on when to prefer this over list_sources or check_data_freshness for understanding data provenance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_data_freshness (Grade A)

Check when data was last ingested, staleness status, and how to trigger a refresh.

Parameters (JSON Schema): none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates what information the tool returns (ingestion timestamp, staleness flag, refresh instructions), which is valuable given the lack of output schema. However, it omits safety traits (read-only confirmation), rate limits, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It is front-loaded with the action verb 'Check' and immediately lists the three key information categories provided.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no parameters and no output schema, the description adequately covers the tool's purpose by enumerating the freshness metadata it returns. For a simple diagnostic utility, this is sufficient, though explicit mention of the output format would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, which per the evaluation rules sets a baseline score of 4. The description does not need to compensate for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks 'when data was last ingested, staleness status, and how to trigger a refresh,' providing specific actions and scope. However, it doesn't explicitly clarify that this monitors the agricultural scheme data (vs. the business logic tools like get_payment_rates), though the distinction is implicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains what the tool does but provides no guidance on when to use it versus the other data retrieval tools (e.g., 'use this before querying schemes to verify freshness'). There are no when-not exclusions or workflow context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_eligibility (Grade C)

Find scheme options matching land type, current practice, or farm type.

Parameters (JSON Schema):
- farm_type (optional): Farm type (e.g. mixed, arable, livestock)
- land_type (optional): Land type (e.g. arable, grassland, moorland)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- current_practice (optional): Current farming practice (e.g. cover cropping, soil testing)
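Since every check_eligibility parameter is optional, a client would typically assemble the arguments object by dropping unset filters. A sketch under that assumption (the helper name is ours, not part of the server; only the parameter names come from the published schema):

```python
def build_eligibility_args(farm_type=None, land_type=None,
                           current_practice=None, jurisdiction="GB"):
    """Assemble arguments for check_eligibility, omitting
    filters the caller did not set. Hypothetical client-side
    helper; "GB" is the server's documented default."""
    args = {"jurisdiction": jurisdiction}
    if farm_type:
        args["farm_type"] = farm_type
    if land_type:
        args["land_type"] = land_type
    if current_practice:
        args["current_practice"] = current_practice
    return args

print(build_eligibility_args(farm_type="livestock", land_type="moorland"))
```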
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It fails to specify whether this is a read-only operation, what constitutes a 'match' (exact vs. fuzzy), what the return format looks like, or error handling behavior when no schemes match the criteria.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no filler. It is appropriately front-loaded with the verb and resource, making it easy to parse for an agent scanning tool listings.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given four parameters (all optional), no output schema, and zero annotations, the description is insufficient. It does not explain the return structure, how the matching logic works, or what 'eligibility' means in this context (e.g., whether it returns boolean eligibility status or qualifying scheme objects).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, establishing a baseline score of 3. The description mentions three of the four parameters (omitting 'jurisdiction') but adds no semantic value beyond the schema descriptions—no format guidance, validation rules, or inter-parameter dependencies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Find scheme options') and the filtering criteria (land type, current practice, farm type). However, it does not differentiate from siblings like 'search_schemes' or 'list_scheme_options', leaving ambiguity about when to use this specific tool versus alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the eight sibling tools available, particularly 'search_schemes' or 'list_scheme_options'. There is no mention of prerequisites, exclusion criteria, or workflow context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_cross_compliance (Grade B)

Get cross-compliance requirements (GAEC/SMR) by ID or topic.

Parameters (JSON Schema):
- topic (optional): Search topic (e.g. buffer strips, water pollution)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- requirement_id (optional): Requirement ID (e.g. gaec-1, smr-1)
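The review treats requirement_id and topic as alternative lookup modes, though the server does not document this. A client-side guard under that assumption might look like the following (the helper name and the mutual-exclusion rule are both ours):

```python
def cross_compliance_args(requirement_id=None, topic=None, jurisdiction="GB"):
    """Build arguments for get_cross_compliance. Treating the two
    lookup keys as mutually exclusive is an assumption drawn from
    the review, not documented server behavior."""
    if (requirement_id is None) == (topic is None):
        raise ValueError("supply exactly one of requirement_id or topic")
    args = {"jurisdiction": jurisdiction}
    if requirement_id:
        args["requirement_id"] = requirement_id
    else:
        args["topic"] = topic
    return args

# Direct lookup by ID, or discovery by topic:
by_id = cross_compliance_args(requirement_id="gaec-1")
by_topic = cross_compliance_args(topic="buffer strips")
```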
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get' implies a read-only operation, the description does not confirm idempotency, disclose error behavior (e.g., what happens if neither ID nor topic is provided), or describe the return format/structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It is front-loaded with the action verb and immediately qualifies the resource and access methods. Every phrase earns its place, including the parenthetical domain specification.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 3-parameter schema with full coverage, the description adequately covers the core retrieval purpose. However, with no output schema and no annotations, it should ideally mention the return structure (e.g., whether it returns a single requirement or list) or jurisdictional scope behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds domain context by mapping the abstract parameters to 'cross-compliance' and expanding the acronym (GAEC/SMR), but does not add syntactic details, validation rules, or semantic relationships between parameters (e.g., that requirement_id and topic are mutually exclusive search modes).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get'), resource ('cross-compliance requirements'), and access patterns ('by ID or topic'). It specifies the domain context (GAEC/SMR), distinguishing it from sibling tools like get_scheme_details or search_schemes. However, it does not explicitly contrast with similar lookup tools in the sibling list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions two access patterns ('by ID or topic') but provides no guidance on when to use ID versus topic, nor when to prefer this tool over search_schemes or get_scheme_details. No prerequisites or exclusion criteria are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_payment_rates (Grade B)

Get payment rates for a scheme, optionally filtered to a specific option.

Parameters (JSON Schema):
- option_id (optional): Specific option ID to filter to
- scheme_id (required): Scheme ID
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
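Here scheme_id is the only required field, with option_id narrowing the result. A client-side sketch of that contract (the helper name and the example option ID are illustrative, not taken from the server):

```python
def payment_rates_args(scheme_id, option_id=None, jurisdiction="GB"):
    """Build arguments for get_payment_rates. scheme_id is the
    only required field per the schema; option_id, when given,
    filters the rates to a single option."""
    if not scheme_id:
        raise ValueError("scheme_id is required")
    args = {"scheme_id": scheme_id, "jurisdiction": jurisdiction}
    if option_id:
        args["option_id"] = option_id
    return args

# All rates for a scheme, or just one option's rate:
print(payment_rates_args("sustainable-farming-incentive"))
print(payment_rates_args("sustainable-farming-incentive", option_id="sam1"))
```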
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It implies a read-only operation via 'Get,' but fails to describe the return format (list vs. object), error handling for invalid scheme IDs, data freshness, or whether rates include historical data. This is a significant gap for a tool with no output schema or safety annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficiently front-loaded with the core action ('Get payment rates') and wastes no words. Every clause serves a purpose: identifying the resource, the required scope (scheme), and the optional filter (option).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple 3-parameter structure with complete schema coverage, the description is minimally viable. However, lacking an output schema, the description should have indicated the expected return structure (e.g., whether it returns a single rate or a table of rates). The omission of jurisdictional context in the description also leaves a small gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description maps 'scheme' to 'scheme_id' and 'specific option' to 'option_id', reinforcing the schema semantics. However, it omits any mention of the 'jurisdiction' parameter (defaulting to 'GB'), which is a missed opportunity to add domain context beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get'), resource ('payment rates'), and scope ('for a scheme'). It implicitly distinguishes from 'get_scheme_details' (general metadata vs. financial rates) and 'list_scheme_options' (listing available options vs. retrieving rates for them), though it does not explicitly name these siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While it hints at parameter usage with 'optionally filtered,' it provides no explicit guidance on when to use this tool versus siblings like 'get_scheme_details' or 'check_eligibility'. It does not state prerequisites (e.g., obtaining option IDs from 'list_scheme_options' first) or when this tool is inappropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_scheme_details (Grade B)

Get full details for a subsidy scheme including all available options.

Parameters (JSON Schema):
- scheme_id (required): Scheme ID (e.g. sustainable-farming-incentive)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
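The jurisdiction parameter recurs across these tools as an ISO 3166-1 alpha-2 code defaulting to GB. A client might normalize it before sending; a sketch (the function is ours, and whether the server accepts lowercase codes is an assumption):

```python
import re

def normalize_jurisdiction(code=None):
    """Validate a jurisdiction filter as an ISO 3166-1 alpha-2
    code and fall back to the documented default of "GB".
    Client-side sketch only; the server's own validation and
    case-sensitivity are undocumented."""
    if code is None:
        return "GB"
    if not re.fullmatch(r"[A-Za-z]{2}", code):
        raise ValueError(f"not an ISO 3166-1 alpha-2 code: {code!r}")
    return code.upper()
```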
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get' implies read-only access, the description lacks specifics on return format, what constitutes 'full details,' caching behavior, or any prerequisites. It mentions 'all available options' but doesn't clarify if these are nested objects or flat lists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence with zero waste. The core action ('Get full details') appears first, followed by scope clarification ('including all available options'). Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 2-parameter tool with complete schema coverage, though the absence of an output schema creates a gap. The description partially compensates by mentioning 'full details' and 'all available options,' but could specify what data fields are included (e.g., eligibility rules, payment schedules) to help the agent validate if this meets its information needs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with both parameters (scheme_id, jurisdiction) fully documented including example formats and defaults. The description adds no additional parameter guidance beyond what the schema provides, which is appropriate given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'full details for a subsidy scheme' with the specific scope of 'all available options.' This distinguishes it from search_schemes (likely returns summaries) though it could better differentiate from sibling list_scheme_options.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like search_schemes (for discovery) or list_scheme_options (for option enumeration). The agent cannot determine if this is the right tool for initial exploration versus deep retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_scheme_options (Grade B)

List all options within a scheme with codes, names, and payment rates.

Parameters (JSON Schema):
- scheme_id (required): Scheme ID
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Mentions 'payment rates' implying financial data sensitivity, but lacks disclosure on caching, permissions required, rate limits, or error behaviors (e.g., invalid scheme_id handling).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with a front-loaded action verb. No filler or redundancy. Efficiently communicates the core operation and return payload structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description compensates by detailing the three key return fields (codes, names, payment rates). Given simple 2-parameter input structure, this is sufficient, though jurisdiction context (default GB) could be mentioned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (scheme_id and jurisdiction documented). Description implies scheme_id requirement via 'within a scheme' but adds no syntax guidance or semantic context beyond the schema itself. Baseline 3 appropriate for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'List' and specific resource 'options within a scheme'. Mentions returned fields (codes, names, payment rates) giving scope. However, it doesn't clearly differentiate from sibling 'get_payment_rates' which may cause confusion about which tool to use for rate lookups.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus siblings like 'get_scheme_details' or 'get_payment_rates'. No mention of prerequisites (e.g., obtaining scheme_id from search_schemes first) or when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_sources (Grade A)

List all data sources with authority, URL, license, and freshness info.

Parameters (JSON Schema): none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It partially compensates by disclosing what data is returned (authority, URL, license, freshness), but fails to mention safety characteristics, rate limits, or whether 'freshness info' represents cached metadata versus live checks.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that is front-loaded and efficient. Every clause serves a purpose: defining the action, scope, and return value structure without filler words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter listing tool without an output schema, the description is reasonably complete. It compensates for the missing output schema by enumerating the specific fields returned, though it could clarify the return format (array vs object).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. Per scoring guidelines, this warrants a baseline score of 4, as there are no parameter semantics to describe beyond the schema itself.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (List) and resource (data sources), and specifies the returned fields (authority, URL, license, freshness). However, it lacks explicit differentiation from sibling tools like 'check_data_freshness' which may overlap in functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like 'check_data_freshness' or 'about'. The description does not specify prerequisites or exclusions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_application_guidance (Grade A)

Search for application guidance: deadlines, forms, how to apply, and scheme rules.

Parameters (JSON Schema):
- limit (optional): Max results (default: 20, max: 50)
- query (required): Free-text search query
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
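The limit parameter carries its own semantics (default 20, hard cap 50). A client could apply the same bounds before sending, sketched below; the floor of 1 is our own choice, and whether the server rejects or silently clamps out-of-range values is undocumented:

```python
def clamp_limit(limit=None, default=20, maximum=50):
    """Apply the documented limit semantics client-side:
    None means the default of 20, and values are capped at 50.
    Flooring at 1 is an assumption, not documented behavior."""
    if limit is None:
        return default
    return max(1, min(limit, maximum))

print(clamp_limit())      # falls back to the default
print(clamp_limit(100))   # capped at the documented maximum
```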
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully defines the search scope (what content is indexed), but does not address safety characteristics (read-only status), rate limits, result ranking, or whether results include full documents or summaries.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with efficient colon-separated list of content types. Every word earns its place; no redundancy or filler. The structure front-loads the verb ('Search') and immediately qualifies the resource domain.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a three-parameter search tool without output schema or annotations, the description adequately covers the functional domain but leaves gaps regarding return format, authentication requirements, and result structure. Sufficient for basic invocation but lacks operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for all three parameters (query, limit, jurisdiction with ISO code). Since the schema is self-documenting, the baseline score applies; the description adds no additional parameter-specific context beyond the conceptual search domain.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for 'application guidance' and specifically enumerates the content types covered (deadlines, forms, how to apply, scheme rules). This effectively distinguishes it from sibling 'search_schemes' by focusing on procedural application details rather than general scheme discovery.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the examples imply this tool is for application process queries, there is no explicit guidance on when to use this versus 'search_schemes' or 'get_scheme_details'. The differentiation is left to the agent to infer from the content examples provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_schemes: B

Search farm subsidy schemes, SFI options, and application guidance. Use for broad queries about available schemes.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Max results (max: 50) | 20 |
| query | Yes | Free-text search query | |
| scheme_type | No | Filter by scheme type (e.g. agri-environment, countryside-stewardship) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
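Assuming the standard MCP JSON-RPC `tools/call` shape, an agent invocation of `search_schemes` with these parameters might look like the following sketch. The query values are made up for illustration; only the parameter names come from the schema above.

```python
import json

# Hypothetical tools/call request for search_schemes, following the
# MCP JSON-RPC convention of params = {"name": ..., "arguments": {...}}.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_schemes",
        "arguments": {
            "query": "hedgerow management payment",   # illustrative value
            "scheme_type": "countryside-stewardship",
            "limit": 10,
            "jurisdiction": "GB",                     # default per the schema
        },
    },
}

payload = json.dumps(request)
print(payload)
```

Because the tool publishes no output schema, an agent sending this request cannot predict the shape of the result it gets back, which is the transparency gap the Behavior score below flags.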
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'SFI options' adding domain context, but lacks critical details about result ranking, pagination behavior, data scope (current vs historical), or return format. For a search tool with no output schema, this is insufficient behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero waste. It is front-loaded with the core action ('Search farm subsidy schemes...') followed immediately by usage guidance ('Use for broad queries...'), making it easy to scan and comprehend.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has four parameters, no annotations, and no output schema, the description is minimally viable but incomplete. It adequately covers the tool's purpose but omits details expected of a search tool, such as result characteristics, filtering behavior, or data source information, that would help an agent predict outcomes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is appropriately met. The description mentions specific searchable content types ('farm subsidy schemes', 'SFI options') which helps contextualize the 'query' parameter, but does not add syntax details, format examples, or parameter relationships beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Search') and resources ('farm subsidy schemes, SFI options, and application guidance'). However, it creates slight ambiguity with the sibling tool 'search_application_guidance' by claiming this tool also searches 'application guidance' without clarifying the distinction between the two tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'Use for broad queries about available schemes' provides some contextual guidance distinguishing it from specific retrieval tools like 'get_scheme_details'. However, it does not address when to use this tool versus the closely related 'search_application_guidance', nor does it name explicit exclusions or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
