Server Details
UK organic and regenerative farming — certification, cover crops, soil health, BNG
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Ansvar-Systems/uk-organic-regen-mcp
- GitHub Stars: 0
Tool Definition Quality
Average 3.4/5, with 10 of 10 tools scored.
Tools are mostly distinct by topic (biodiversity vs soil vs cover crops vs certification). The main ambiguity is between search_organic_guidance (broad queries) and specific getters like get_soil_health_guidance, though the 'Use for broad queries' hint helps distinguish them.
Nine tools follow a consistent verb_noun snake_case pattern (get_*, check_*, list_*, search_*). The outlier 'about' breaks convention by omitting a verb prefix, though its purpose is clear.
Ten tools is an appropriate scope for UK organic/regenerative farming guidance. The set balances meta utilities (about, list_sources, check_data_freshness), specific domain getters (6 topics), and a search fallback without bloat.
Covers the core organic farming lifecycle: conversion process, certification standards (multiple bodies), permitted inputs, and key regenerative practices (soil health, cover crops, biodiversity). Minor gaps might include specific livestock welfare standards or carbon/sequestration calculators, but the surface supports main advisory workflows.
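The naming-convention observation above can be verified mechanically. A minimal sketch in Python (tool names copied from the Available Tools section; the regex is an assumption about what counts as verb_noun snake_case):

```python
import re

# The ten tool names as listed in the Available Tools section.
TOOLS = [
    "about", "check_data_freshness", "get_biodiversity_guidance",
    "get_conversion_process", "get_cover_crop_guidance",
    "get_organic_standards", "get_permitted_inputs",
    "get_soil_health_guidance", "list_sources", "search_organic_guidance",
]

# verb_noun snake_case: a known verb prefix followed by snake_case words.
VERB_NOUN = re.compile(r"^(get|check|list|search)_[a-z]+(_[a-z]+)*$")

outliers = [name for name in TOOLS if not VERB_NOUN.match(name)]
print(outliers)  # only 'about' breaks the convention
```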
Available Tools
10 tools

about (Grade A)
Get server metadata: name, version, coverage, data sources, and links.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It lists the returned fields (name, version, coverage, data sources, links), which provides some behavioral context, but omits auth requirements, rate limits, return format (JSON?), and caching behavior. Adequate but minimal for a read-only metadata endpoint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with verb. Every element serves a purpose: action ('Get'), resource ('server metadata'), and specific content enumerated. No filler words or redundant phrases.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for tool complexity. With no output schema provided, the description compensates by listing the five specific metadata fields returned. Could improve by mentioning return format or authentication requirements, but sufficient for a zero-parameter discovery endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters per the schema. Per the rubric, zero-parameter tools receive a baseline score of 4. The description correctly implies that no inputs are needed by focusing entirely on outputs, matching the empty input schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' with clear resource 'server metadata' and enumerates exact fields returned (name, version, coverage, data sources, links). Clearly distinguishes from domain-specific siblings like get_biodiversity_guidance or search_organic_guidance by focusing on server metadata rather than agricultural data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through content (server metadata vs. data queries), but lacks explicit when-to-use guidance. Does not mention that this should be called first to discover available data sources before using specific guidance tools, or contrast with list_sources which likely returns content-specific sources.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_data_freshness (Grade A)
Check when data was last ingested, staleness status, and how to trigger a refresh.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses domain concepts (staleness, refresh capability) but omits operational details like read-only safety, idempotency, rate limits, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently packs three distinct information domains (ingestion time, staleness status, refresh method) with no filler words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple parameter-less metadata tool, the description adequately compensates for missing output schema by specifying the three data points returned (timestamp, status, refresh info).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters are present, establishing the baseline score of 4 per the rubric. No parameter description is needed or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies that the tool checks data ingestion timestamps, staleness status, and refresh triggers, distinguishing it from the farming-guidance siblings. A minor ambiguity remains over whether it actually triggers a refresh or merely returns instructions for doing so.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this versus querying the data directly or which sibling tools might return fresher data. No prerequisites or exclusion criteria mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_biodiversity_guidance (Grade B)
Get biodiversity net gain guidance: BNG units, creation costs, management obligations, and grant options.
| Name | Required | Description | Default |
|---|---|---|---|
| farm_feature | No | Farm feature filter (e.g. field_margin, in_field, boundary) | |
| habitat_type | No | Habitat type (e.g. wildflower_meadow, hedgerow, woodland, pond) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | GB |
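As an illustration of how an agent might invoke this tool, here is a hypothetical request assuming the standard MCP JSON-RPC tools/call framing; the argument values are examples drawn from the schema hints, not real data:

```python
import json

# Hypothetical MCP tools/call request for get_biodiversity_guidance.
# All three parameters are optional; jurisdiction defaults to GB when omitted.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_biodiversity_guidance",
        "arguments": {
            "farm_feature": "field_margin",
            "habitat_type": "wildflower_meadow",
            # jurisdiction omitted: the server default of GB applies
        },
    },
}
print(json.dumps(request, indent=2))
```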
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It lists content areas but fails to describe behavioral traits such as whether results are filtered by the optional parameters, what happens when no parameters are provided, return format, or any rate limiting. The 'Get' verb implies read-only access, but this is not explicitly confirmed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the action ('Get biodiversity net gain guidance') and uses a colon-delimited list to specify scope. There is no redundant or wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 optional parameters and no output schema or annotations, the description adequately covers the domain scope of the guidance returned. However, it lacks information about default behavior when filters are omitted or the structure of the returned guidance, leaving minor gaps in contextual completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (farm_feature, habitat_type, and jurisdiction are all documented). The description does not add parameter-specific semantics, but since the schema fully documents all three parameters, the baseline score of 3 is appropriate without additional penalty.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'biodiversity net gain guidance' and specifies the exact content domains covered (BNG units, creation costs, management obligations, grant options). This implicitly distinguishes it from sibling guidance tools like get_cover_crop_guidance or get_soil_health_guidance by using BNG-specific terminology, though it doesn't explicitly contrast with alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus sibling alternatives (e.g., when to use this vs. get_soil_health_guidance or get_cover_crop_guidance). It does not mention prerequisites, required context, or exclusion criteria for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_conversion_process (Grade B)
Get organic conversion timeline, marketing options, and support for a farm type.
| Name | Required | Description | Default |
|---|---|---|---|
| farm_type | Yes | Farm type (e.g. arable, permanent_grassland, permanent_crops, cattle, sheep_pigs, poultry) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
| current_system | No | Current farming system (e.g. conventional, low-input) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It implies a read-only operation via 'Get' and hints at return content (timeline, options, support), but lacks disclosure on authentication needs, rate limits, error handling, or response structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with verb and resources, no redundancy. However, given the lack of annotations and output schema, it may be overly terse—one additional sentence on usage context would improve agent decision-making without sacrificing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 3-parameter lookup tool, the description covers the core purpose but leaves gaps in usage guidance and behavioral details. Without annotations or an output schema, it minimally meets requirements but does not fully compensate for missing structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with all three parameters (farm_type, jurisdiction, current_system) fully documented. The description references 'farm type' aligning with the required parameter, but appropriately relies on the schema for detailed parameter semantics given the high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states specific resources retrieved (organic conversion timeline, marketing options, support) and the target resource (farm type). It distinguishes from siblings like get_biodiversity_guidance or get_organic_standards by focusing specifically on 'conversion' processes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like search_organic_guidance or get_organic_standards. No mention of prerequisites or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_cover_crop_guidance (Grade B)
Get cover crop species recommendations with N fixation rates, biomass, sowing windows, and best preceding crop.
| Name | Required | Description | Default |
|---|---|---|---|
| season | No | Sowing season filter (e.g. autumn, spring, summer) | |
| purpose | No | Purpose: nitrogen_fixation, biomass, biofumigation, pollinator, compaction_relief, weed_suppression | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
| following_crop | No | Following crop to find suitable cover crops for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It partially succeeds by listing the data attributes returned (N fixation rates, biomass, etc.), giving agents insight into output structure despite the lack of formal output schema. However, it omits operational details like rate limits, caching behavior, or what happens when no recommendations match the filters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the action ('Get cover crop species recommendations') and follows with specific attributes. The verb 'Get' is slightly generic, but every clause serves to specify the return value content. No redundant or obvious information is included.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given four optional parameters and no output schema, the description adequately compensates by enumerating the agronomic data points returned (N fixation, biomass, etc.). This gives agents sufficient context to understand the tool's value without an explicit output schema, though noting that all parameters are optional would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description adds value by clarifying that the 'purpose' parameter relates to agronomic outcomes like nitrogen fixation and biomass, and that recommendations include 'best preceding crop' data (clarifying the relationship to the 'following_crop' parameter). However, it does not add syntax details or example values beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves cover crop species recommendations and lists specific agronomic attributes included (N fixation rates, biomass, sowing windows, preceding crop suitability). While it identifies the specific resource domain (cover crops), it does not explicitly differentiate from sibling 'guidance' tools like get_biodiversity_guidance or get_soil_health_guidance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisite conditions, required context (e.g., knowing the following crop), or when to prefer search_organic_guidance instead. All parameters are optional, yet the description does not indicate this flexibility.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_organic_standards (Grade A)
Get organic certification requirements by product type and certification body. Covers Soil Association, OF&G, and EU baseline.
| Name | Required | Description | Default |
|---|---|---|---|
| standard | No | Certification body name filter (e.g. Soil Association, OF&G, EU Baseline) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
| product_type | Yes | Product type (e.g. arable, dairy, beef_sheep, poultry, horticulture, processing) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable data scope context by listing specific certification bodies covered, but omits operational details such as read-only safety confirmation, error handling for invalid product types, or response format expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two-sentence structure is optimally front-loaded: the first sentence establishes the core action and filtering capabilities, while the second specifies data coverage. No sentences are wasted on tautology or redundant schema information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's straightforward retrieval purpose, three well-documented parameters, and lack of output schema, the description provides adequate domain context. It could be improved by indicating the return format (structured vs. text), but sufficiently covers the certification scope necessary for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, providing comprehensive parameter documentation. The description reinforces these semantics by mentioning 'product type and certification body' in the text and providing concrete examples of valid certification body values (Soil Association, OF&G, EU baseline), meeting the baseline expectation for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'organic certification requirements' using specific dimensions (product type and certification body). It explicitly lists covered certification bodies (Soil Association, OF&G, EU baseline), effectively distinguishing it from sibling guidance tools like get_biodiversity_guidance or search_organic_guidance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage through its specific resource scope (certification requirements vs. general guidance), it lacks explicit when-to-use guidance or references to alternatives like search_organic_guidance for broader queries. The user must infer applicability from the resource type alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_permitted_inputs (Grade B)
Get permitted substances for organic farming by input type. Includes conditions, limits, and derogation status.
| Name | Required | Description | Default |
|---|---|---|---|
| input_type | Yes | Input type: fertiliser, plant_protection, or feed_additive | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
| crop_or_species | No | Optional crop or species context | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It compensates partially by listing return value characteristics (conditions, limits, derogation status), but omits operational details like authentication requirements, caching behavior, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences efficiently structured: first establishes the core function, second specifies return value details. No redundant or wasteful language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description adequately compensates by disclosing the nature of returned data (conditions, limits, derogation). All parameters are documented in the schema, making this sufficiently complete for a regulatory lookup tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all three parameters (input_type, jurisdiction, crop_or_species) fully documented. The description mentions 'by input type' but adds no semantic meaning beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'permitted substances for organic farming' using the specific key 'input type'. However, it does not explicitly differentiate from sibling 'get_organic_standards' which may overlap conceptually.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'search_organic_guidance' or 'get_organic_standards', nor are prerequisites or exclusion criteria mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_soil_health_guidance (Grade B)
Get soil health indicator targets, measurement methods, management practices, and improvement timelines.
| Name | Required | Description | Default |
|---|---|---|---|
| indicator | No | Soil health indicator (e.g. organic_matter, earthworm_count, pH, bulk_density, water_infiltration) | |
| soil_type | No | Soil type for context-specific targets (e.g. clay, sand, loam) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | GB |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full disclosure burden. While it lists what is returned, it fails to indicate data source authority, freshness (relevant given sibling check_data_freshness exists), whether results are cached, or if the tool has side effects. 'Get' implies read-only but this is not confirmed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, dense sentence that front-loads the action ('Get soil health...') and immediately enumerates the four output categories. Zero redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description adequately enumerates return value categories (targets, methods, practices, timelines). However, it omits that all parameters are optional and that jurisdiction defaults to 'GB', which would help agents understand minimal invocation requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured data already documents all three parameters fully. The description mentions 'soil health indicator targets' which loosely maps to the indicator parameter, but adds no syntax details, validation rules, or usage examples beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and enumerates four specific deliverables: targets, measurement methods, management practices, and improvement timelines. It distinguishes from siblings like get_biodiversity_guidance and get_cover_crop_guidance by focusing specifically on soil health indicators.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the numerous sibling guidance tools (get_biodiversity_guidance, get_cover_crop_guidance, get_organic_standards, etc.). It does not indicate prerequisites, required vs optional parameters, or when search_organic_guidance might be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sources (Grade A)
List all data sources with authority, URL, license, and freshness info.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully indicates what data is returned (authority, URL, license, freshness info), but omits details about pagination, caching, performance characteristics, or the specific format of the freshness data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the action ('List all data sources') and immediately qualifies it with the specific metadata fields returned. No words are wasted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description compensates adequately by listing the key fields returned (authority, URL, license, freshness). For a zero-parameter read operation, this provides sufficient context for an agent to understand what information will be retrieved.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters and the schema is empty. Per the evaluation guidelines, zero-parameter tools receive a baseline score of 4, as there are no parameter semantics to describe beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists data sources and specifies the returned metadata fields (authority, URL, license, freshness). However, it does not explicitly differentiate from the sibling tool 'check_data_freshness', which could conceptually overlap since this tool also returns freshness information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'check_data_freshness' or 'about'. There are no stated prerequisites, exclusions, or conditions for invocation despite the potential for confusion with sibling operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_organic_guidance (B)
Search organic farming standards, regenerative practices, cover crops, soil health, and biodiversity guidance. Use for broad queries.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default: 20, max: 50) | |
| query | Yes | Free-text search query | |
| topic | No | Filter by topic (e.g. organic_standards, cover_crops, soil_health, biodiversity, conversion, permitted_inputs) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) |
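Since the server speaks Streamable HTTP MCP, an agent would invoke this tool via the standard JSON-RPC 2.0 `tools/call` framing. The following Python sketch builds such a request from the parameter table above; the query text and argument values are illustrative only, not taken from the server's documentation:

```python
import json

# Hypothetical "tools/call" request for search_organic_guidance,
# assuming the standard MCP JSON-RPC 2.0 framing.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_organic_guidance",
        "arguments": {
            "query": "herbal leys for soil structure",  # required free-text query
            "topic": "soil_health",                      # optional topic filter
            "jurisdiction": "GB",                        # ISO 3166-1 alpha-2 (default)
            "limit": 10,                                 # default 20, max 50
        },
    },
}

payload = json.dumps(request)
print(payload)
```

Note that `topic` and `jurisdiction` are optional; omitting them falls back to the documented defaults (no topic filter, `GB`).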
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, yet the description fails to disclose return format (documents, snippets, summaries?), data source limitations, or result ordering. It only states the search domain without behavioral context the agent needs to interpret results.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with no redundancy. The first enumerates searchable domains; the second provides usage context. Every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for basic invocation given complete schema coverage, but incomplete regarding the tool's ecosystem role. With multiple specialized sibling tools available, the description should more explicitly delineate the boundary between this search tool and specific guidance retrievers.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, documenting query, topic filters, jurisdiction format, and limits. The description adds no parameter-specific semantics beyond what the schema already provides, meeting the baseline expectation for high-coverage schemas.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as a search function across multiple agricultural domains (organic standards, regenerative practices, etc.). It implies distinction from sibling 'get_' tools via the 'broad queries' qualifier, though it does not explicitly name those alternatives.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides minimal guidance with 'Use for broad queries' but lacks explicit when-not-to-use conditions or named alternatives. Given the presence of specific getters like get_biodiversity_guidance, the description should explicitly clarify when to use search versus targeted retrieval.
Verify Ownership
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [
{
"email": "your-email@example.com"
}
]
}
The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
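Before publishing the file, a quick local sanity check can catch shape mistakes. The Python sketch below is illustrative only: `check_glama_manifest` is a hypothetical helper, and Glama's own schema (linked via `$schema`) remains the authoritative definition of a valid manifest:

```python
import json

def check_glama_manifest(text: str) -> list[str]:
    """Return a list of problems found in a /.well-known/glama.json document.

    A minimal local sanity check; the schema referenced by "$schema"
    (https://glama.ai/mcp/schemas/connector.json) is authoritative.
    """
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("'maintainers' must be a non-empty array")
    else:
        for i, entry in enumerate(maintainers):
            email = entry.get("email") if isinstance(entry, dict) else None
            if not isinstance(email, str) or "@" not in email:
                problems.append(f"maintainers[{i}] needs an 'email' string")
    return problems

sample = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
print(check_glama_manifest(sample))  # prints [] when the manifest is valid
```

Remember that passing this local check is not sufficient on its own: the listed email must also match your Glama account for the claim to verify.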
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Receive usage reports showing how your server is being used
- Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.