Server Details
UK farm grants — FETF items, Capital Grants, deadlines, stacking rules
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: Ansvar-Systems/uk-farm-grants-mcp
- GitHub Stars: 0
Tool Definition Quality
Average 3.6/5 across 10 of 10 tools scored. Lowest: 2.9/5.
Tools are largely distinct, but check_data_freshness and list_sources overlap on data-freshness information, which may leave agents unsure which to query for staleness. The check_* prefix tools (deadlines, stacking) are well-differentiated from get_* tools by their specific validation purposes.
Strong adherence to snake_case verb_noun convention (e.g., check_deadlines, estimate_grant_value, search_grants). The only deviation is 'about', which uses a simple noun rather than a verb-led pattern, though this is a common convention for server metadata endpoints.
Ten tools is an ideal count for this domain, covering the complete user journey from discovery (search_grants) through details (get_grant_details, get_eligible_items), financial planning (estimate_grant_value, check_stacking), deadlines (check_deadlines), and application guidance (get_application_process), plus necessary metadata tools.
Comprehensive coverage for a grant information server, including search, eligibility details, financial calculations, deadline tracking, and application processes. Minor gap: no explicit comparison or 'check my eligibility' tool for personalized assessment, though get_grant_details provides eligibility criteria. Data governance tools (check_data_freshness, list_sources) ensure transparency.
Available Tools
10 tools

about (Grade A)
Get server metadata: name, version, coverage, data sources, and links.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully discloses what data is returned, but omits safety characteristics (read-only, idempotent), rate limits, or cache behavior that would help an agent understand operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the action and resource, followed by a colon-delimited list of specific return fields. Zero redundancy; every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (zero parameters) and lack of output schema, the description adequately compensates by listing the specific metadata fields returned. It misses only operational details like rate limiting or authentication requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per the scoring rules, this establishes a baseline score of 4. The description correctly implies no arguments are needed by focusing entirely on return value semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('server metadata') and enumerates the exact fields returned (name, version, coverage, data sources, links). This clearly distinguishes it from sibling tools which are all grant-operation focused (check_data_freshness, search_grants, etc.).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage as a discovery/introspection tool by listing metadata fields, but provides no explicit guidance on when to call it (e.g., 'call at session start') or when not to use it versus operational siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_data_freshness (Grade A)
Check when data was last ingested, staleness status, and how to trigger a refresh.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clarifies that the tool returns information about 'how to trigger a refresh' rather than performing the refresh itself, which is valuable behavioral context. However, it fails to disclose safety characteristics (read-only vs. destructive), authentication requirements, or rate limits that would be essential for a diagnostic tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It is front-loaded with the action verb and covers all three return aspects (ingestion time, staleness, refresh method) without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a zero-parameter tool without an output schema, the description adequately explains what information the tool returns. It covers the three key data points provided by the tool. It could be improved by defining what constitutes 'staleness' in this domain, but it is sufficient for tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, which establishes a baseline of 4. The description doesn't need to add parameter semantics since there are none to document, and the empty schema makes this clear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks three specific things: last ingestion time, staleness status, and refresh trigger methods. It uses specific verbs ('Check') and resources ('data'). However, it doesn't explicitly differentiate from siblings like 'check_deadlines' or 'check_stacking', though the differentiation is somewhat implicit in the specific focus on data ingestion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, exclusions, or when not to use it. While the purpose implies usage (use when concerned about data freshness), there is no explicit comparison to sibling tools or workflow guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_deadlines (Grade A)
List open and upcoming grant deadlines, sorted by urgency. Shows days remaining and closing status.
| Name | Required | Description | Default |
|---|---|---|---|
| grant_type | No | Filter by grant type (e.g. capital, revenue) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
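The "sorted by urgency" behaviour the description promises can be sketched in a few lines. Everything below is an illustrative assumption: the record shape, field names, dates, and the fixed "today" are invented for the example, not taken from the server's actual response.

```python
from datetime import date

# Hypothetical deadline records; field names and dates are assumptions,
# not the server's real response shape.
deadlines = [
    {"grant_id": "fetf-2026-productivity", "closes": date(2026, 3, 31)},
    {"grant_id": "ewco", "closes": date(2026, 1, 15)},
]

TODAY = date(2026, 1, 1)  # fixed reference date so the example is deterministic

def days_remaining(entry):
    """Days until the deadline closes (negative once it has passed)."""
    return (entry["closes"] - TODAY).days

# "Sorted by urgency": the soonest closing date comes first, and each row
# carries days remaining plus an open/closed status, as the description states.
report = [
    {
        "grant_id": e["grant_id"],
        "days_remaining": days_remaining(e),
        "status": "open" if days_remaining(e) > 0 else "closed",
    }
    for e in sorted(deadlines, key=days_remaining)
]
```

Under these assumptions, `report` lists the EWCO deadline first (14 days out) ahead of the FETF deadline (89 days out).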
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses sorting behavior ('sorted by urgency') and output content ('days remaining and closing status'), though it omits rate limits, pagination, or auth requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences efficiently front-loaded with the action. 'Sorted by urgency' and 'Shows days remaining' both earn their place by conveying behavioral and output details with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description appropriately compensates by specifying what data is returned ('days remaining', 'closing status'). For a simple 2-parameter tool, this is sufficient, though mentioning optional parameter behavior would elevate it to 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both grant_type and jurisdiction are well-documented in the schema), establishing the baseline. The description adds no parameter-specific details but doesn't need to compensate for coverage gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('List') and resource ('grant deadlines') and distinguishes from siblings like 'search_grants' or 'get_grant_details' by emphasizing the temporal/deadline focus and urgency sorting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through 'List open and upcoming grant deadlines' but lacks explicit when-to-use guidance or contrast with sibling tools like 'search_grants' that could also retrieve grant information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_stacking (Grade A)
Check whether multiple grants can be combined (stacked). Checks all pair combinations and returns compatibility matrix.
| Name | Required | Description | Default |
|---|---|---|---|
| grant_ids | Yes | Array of grant IDs to check compatibility (minimum 2) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
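"Checks all pair combinations" implies n*(n-1)/2 checks for n grant IDs. A minimal sketch of that shape, assuming a hypothetical hard-coded incompatibility set (real stacking rules would come from the server's data, and the pairing of these particular grant IDs is invented):

```python
from itertools import combinations

# Hypothetical stacking rule: this one pair may NOT be combined.
INCOMPATIBLE = {frozenset({"fetf-2026-productivity", "cs-higher-tier"})}

def compatibility_matrix(grant_ids):
    """Return {(a, b): can_stack} for every unordered pair of grant IDs."""
    if len(grant_ids) < 2:
        raise ValueError("at least 2 grant IDs required")  # schema minimum
    return {
        (a, b): frozenset({a, b}) not in INCOMPATIBLE
        for a, b in combinations(grant_ids, 2)
    }

matrix = compatibility_matrix(
    ["fetf-2026-productivity", "ewco", "cs-higher-tier"]
)
```

With three IDs the matrix holds three pairs; only the assumed incompatible pair comes back `False`.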
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations or output schema, the description carries full behavioral disclosure burden. It successfully discloses the computational approach ('checks all pair combinations') and return format ('compatibility matrix'), but omits error handling, rate limits, or authorization requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero redundancy: the first establishes purpose, the second discloses implementation details and output format. Every clause delivers essential information that aids tool selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 2 simple parameters and no output schema, the description adequately covers the operation's scope and return type ('compatibility matrix'). Slight deduction for not mentioning jurisdiction context or the error behaviour when fewer than two grant IDs are provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with grant_ids and jurisdiction fully documented in the schema. The description implies the grant_ids parameter through 'multiple grants' but adds no semantic detail beyond what the structured schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (check/stacking) and resource (grants), using precise language ('combined (stacked)') that distinguishes it from sibling tools like search_grants or get_grant_details which handle discovery rather than combinability analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied by the purpose statement (use when you need to verify if multiple grants can be combined), but lacks explicit when-to-use boundaries, prerequisites, or guidance on when to use alternatives like get_grant_details for individual grant information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
estimate_grant_value (Grade A)
Calculate total grant value from selected items. Applies grant cap and calculates match-funding requirement.
| Name | Required | Description | Default |
|---|---|---|---|
| items | No | Array of item codes to include. If omitted, includes all items. | |
| area_ha | No | Area in hectares (for per-hectare payment items like EWCO) | |
| grant_id | Yes | Grant ID (e.g. fetf-2026-productivity) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
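The described calculation ("applies grant cap and calculates match-funding requirement") can be sketched as below. All figures are illustrative assumptions: the item codes, item values, cap, and 50% intervention rate are invented, not real FETF rates.

```python
# Hypothetical item catalogue and grant parameters (assumed, not real rates).
ITEM_VALUES = {"FETF-001": 2500.0, "FETF-002": 4000.0, "FETF-003": 1200.0}
GRANT_CAP = 5000.0   # assumed maximum award for the scheme
GRANT_RATE = 0.50    # assumed 50% intervention rate

def estimate(items=None):
    """Estimate grant value; if items is omitted, include all items,
    matching the documented default for the items parameter."""
    codes = items if items is not None else list(ITEM_VALUES)
    eligible_cost = sum(ITEM_VALUES[c] for c in codes)
    grant = min(eligible_cost * GRANT_RATE, GRANT_CAP)  # apply grant cap
    match_funding = eligible_cost - grant               # applicant's share
    return {
        "eligible_cost": eligible_cost,
        "grant": grant,
        "match_funding": match_funding,
    }
```

For example, selecting only the assumed FETF-001 item yields a 1,250 grant against 2,500 of eligible cost, while including all three items hits neither the cap nor a zero match-funding requirement.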
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable calculation logic beyond schema: 'Applies grant cap' and 'calculates match-funding requirement.' However, with no annotations provided, the description fails to disclose safety profile (read-only vs. stateful), error handling for invalid item codes, or whether calculations are cached.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence establishes core purpose; second sentence specifies calculation nuances (cap and match-funding). Front-loaded and appropriately sized for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for input documentation given complete schema coverage. However, no output schema exists, and description omits return value structure (e.g., whether it returns total amount, breakdown, or match-funding ratio), leaving a gap for an agent expecting calculation results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline applies. Description mentions 'selected items' which aligns with the items parameter, but adds no semantic detail beyond schema regarding the hectare-based calculations or jurisdiction defaults.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Calculate' with clear resource 'grant value' and scope 'from selected items.' Explicitly distinguishes from sibling tools like get_grant_details (retrieval) or search_grants (discovery) by focusing on financial computation and cap application.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through the phrase 'from selected items,' suggesting use after item selection. However, lacks explicit guidance on when to use versus siblings like check_stacking or get_grant_details, and omits prerequisites such as requiring valid grant_id or item codes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_application_process (Grade A)
Get step-by-step application guidance for a grant, including evidence requirements and portal links.
| Name | Required | Description | Default |
|---|---|---|---|
| grant_id | Yes | Grant ID (e.g. fetf-2026-productivity) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but only partially delivers. It discloses return content (step-by-step guidance, evidence requirements, portal links) but omits operational details like data freshness, caching behavior, or whether external portals are queried live.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single short sentence with zero waste. Front-loaded with the action verb, it immediately qualifies scope with 'including evidence requirements and portal links,' providing maximum information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter schema with no output schema, the description adequately covers return semantics by listing the three components returned (steps, evidence, links). Minor gap: doesn't mention jurisdiction affects which portal links/evidence standards are returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both grant_id and jurisdiction are fully documented in the schema). The description adds no parameter-specific context, meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' with clear resource 'step-by-step application guidance' and distinguishes from siblings like get_grant_details (general metadata) by specifying it returns process steps, evidence requirements, and portal links.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage by defining its unique scope (process guidance vs. deadlines/eligibility/value estimation), it lacks explicit when-to-use guidance or named alternatives for cases where the user just wants deadlines or eligibility criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_eligible_items (Grade C)
List eligible items for a grant with codes, values, and specifications. Filter by category.
| Name | Required | Description | Default |
|---|---|---|---|
| category | No | Filter by item category (e.g. precision, slurry, handling) | |
| grant_id | Yes | Grant ID (e.g. fetf-2026-productivity) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure. It mentions that items include codes, values, and specifications, but fails to disclose pagination behavior, caching policies, error conditions (e.g., invalid grant_id), or whether the data is real-time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief with two front-loaded sentences. The first sentence captures the core action and return payload; the second sentence highlights a key filtering capability. While efficient, the second sentence is fragmentary and could be smoother.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description partially compensates by mentioning return content (codes, values, specifications). However, it lacks detail on response structure, pagination, or error handling. For a 3-parameter tool with full schema coverage, the description is adequate but minimal.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description mentions 'Filter by category' which aligns with the schema but adds no additional semantic value beyond what the parameter descriptions already provide. No examples or validation rules are added for grant_id or jurisdiction.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists eligible items for a grant and mentions the data returned (codes, values, specifications). It implicitly distinguishes from sibling tools like search_grants (which finds grants) and get_grant_details (which likely returns metadata rather than item lists), though explicit differentiation is absent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives like get_grant_details or estimate_grant_value. The phrase 'Filter by category' implies filtering capability but does not explain use cases, prerequisites, or when this tool is inappropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_grant_details (Grade A)
Get full details for a specific grant scheme: budget, eligibility, deadlines, match funding.
| Name | Required | Description | Default |
|---|---|---|---|
| grant_id | Yes | Grant ID (e.g. fetf-2026-productivity, ewco, cs-higher-tier) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully discloses the semantic content of the response (budget, eligibility, deadlines, match funding) which compensates for the missing output schema. However, it omits other behavioral traits such as error handling (what happens if grant_id is invalid), data freshness, or authorization requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the action ('Get full details'), specifies the resource, and uses a colon-delimited list to enumerate return value categories. There is no redundant or wasted text; every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately compensates by enumerating the conceptual data domains returned (budget, eligibility, deadlines, match funding). With only two simple parameters and 100% schema coverage, the description provides sufficient context for an agent to understand the tool's scope, though error handling details would improve completeness further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for both parameters (grant_id and jurisdiction), establishing a baseline of 3. The description does not add additional semantic context about the parameters (e.g., explaining that grant_id must be obtained from search_grants, or clarifying jurisdiction defaults), but it aligns with the 'specific grant scheme' requirement implied by the grant_id parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'full details for a specific grant scheme' using the verb 'Get' + resource 'grant scheme'. It implicitly distinguishes from sibling 'search_grants' by emphasizing 'specific' (requiring an ID) versus searching, and differentiates from 'check_deadlines' by listing multiple data dimensions (budget, eligibility, deadlines, match funding). However, it does not explicitly name sibling tools to clarify distinctions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through the listed return values (budget, eligibility, etc.), suggesting when to use this versus specialized siblings like 'check_deadlines' or 'estimate_grant_value'. However, it lacks explicit guidance on when NOT to use this tool or direct references to alternatives (e.g., 'use search_grants first if you don't have a grant_id').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sources (Grade A)
List all data sources with authority, URL, license, and freshness info.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It partially compensates by disclosing return content (authority, URL, license, freshness fields) but omits other behavioral details like pagination, caching behavior, or rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. Front-loaded with action verb and efficiently enumerates return value attributes. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool, the description adequately compensates for missing output schema by specifying the four data fields returned. Appropriately complete given the tool's low complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, establishing baseline 4. No parameters require semantic clarification beyond what the empty schema indicates.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('List') and resource ('data sources') with specific output fields enumerated (authority, URL, license, freshness). However, it does not explicitly differentiate from sibling 'check_data_freshness', which also deals with data freshness concepts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'check_data_freshness' or when to consult source metadata versus grant details. Lacks prerequisites or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_grants (Grade A)
Search UK farm grants by keyword. Covers FETF, Capital Grants, EWCO, Countryside Stewardship, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default: 20, max: 50) | |
| query | Yes | Free-text search query (e.g. "slurry equipment", "woodland creation") | |
| min_value | No | Minimum grant value in GBP | |
| grant_type | No | Filter by grant type (e.g. capital, revenue) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) |
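To make the parameter table concrete, here is a hedged sketch of how an agent might assemble a `search_grants` argument object. The parameter names and the default/max values (`limit` default 20, max 50; `jurisdiction` default `GB`) come from the table above; the client-side clamping of `limit` is an assumption, not documented server behavior.

```python
def build_search_params(query, limit=20, min_value=None,
                        grant_type=None, jurisdiction="GB"):
    """Assemble arguments for a hypothetical search_grants tool call."""
    if not query:
        raise ValueError("query is required")
    params = {
        "query": query,
        # Assumed clamp to the documented range (default 20, max 50).
        "limit": min(max(limit, 1), 50),
        "jurisdiction": jurisdiction,
    }
    # Optional filters are only included when supplied.
    if min_value is not None:
        params["min_value"] = min_value
    if grant_type is not None:
        params["grant_type"] = grant_type
    return params

print(build_search_params("slurry equipment", limit=80, grant_type="capital"))
```

An over-large `limit` (80 here) is clamped to 50 rather than passed through, on the assumption that the server would otherwise reject or silently truncate it.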
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure. While it covers data scope (FETF, Capital Grants, etc.), it fails to mention safety profile (read-only vs destructive), return format, pagination behavior beyond the limit parameter, or rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, both earning their place: first establishes core function, second establishes coverage scope. Front-loaded with the action verb and appropriately sized with no redundant or wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the straightforward search pattern and complete schema coverage, the description provides adequate domain context (UK-specific, named schemes). Minor gap: without an output schema, the description could have clarified that it returns a list/summary of grants rather than full details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description mentions 'by keyword' which aligns with the query parameter, but does not add syntax details, examples, or constraints beyond what the schema already provides for any of the 5 parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('Search'), clear resource ('UK farm grants'), and mechanism ('by keyword'). Listing specific schemes (FETF, EWCO, Countryside Stewardship) effectively distinguishes this from sibling tools like get_grant_details or check_deadlines.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'by keyword' implies discovery usage, but there is no explicit guidance on when to use this versus get_grant_details (for specific records) or estimate_grant_value (for calculations). No 'when-not-to-use' guidance or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Verify Ownership
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [
{
"email": "your-email@example.com"
}
]
}
The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
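Before publishing, the file's shape can be sanity-checked locally. This is a minimal sketch: the field names (`maintainers`, `email`) are taken from the example above, but the validation rules themselves are assumptions rather than Glama's actual verification logic.

```python
import json

def validate_glama_json(text):
    """Check that a glama.json payload has the maintainer structure shown above."""
    doc = json.loads(text)
    maintainers = doc.get("maintainers")
    # Assumed requirement: at least one maintainer entry.
    if not isinstance(maintainers, list) or not maintainers:
        return False
    # Each entry needs an email; Glama matches it against your account email.
    return all(isinstance(m, dict) and "@" in m.get("email", "")
               for m in maintainers)

sample = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{"email": "your-email@example.com"}]
}"""
print(validate_glama_json(sample))
```

This only confirms the structure parses; the actual ownership check happens when Glama fetches the file from your server's /.well-known/ path.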
Sign in to verify ownership to:
- Control your server's listing on Glama, including description and metadata
- Receive usage reports showing how your server is being used
- Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.