Server Details
UK crop nutrient recommendations — RB209 NPK planning, soil types, commodity prices
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Ansvar-Systems/uk-crop-nutrients-mcp
- GitHub Stars: 0
Tool Definition Quality
Average 3.6/5 across 10 of 10 tools scored.
Tools are largely distinct with clear boundaries. Minor overlap exists between search_crop_requirements (broad search) and specific getters like get_crop_details/get_nutrient_plan, but the descriptions clarify their distinct use cases. The about and check_data_freshness tools also have slight thematic overlap regarding metadata, but serve different purposes.
Strong snake_case convention throughout with consistent verb_noun patterns (calculate_margin, get_crop_details, list_crops). The 'about' tool breaks this pattern as a standalone noun, though this is a common convention for server metadata. All other tools follow predictable get/list/search/calculate prefixes appropriate to their actions.
Ten tools is well-suited for the UK crop nutrition domain. The set includes discovery (list_crops, list_sources), reference data (get_crop_details, get_soil_classification), advisory (get_nutrient_plan), economics (get_commodity_price, calculate_margin), and maintenance (check_data_freshness, about) without bloat. Each tool earns its place in the fertiliser planning workflow.
Covers the core crop nutrient lifecycle well: crop lookup, soil classification, RB209-based NPK recommendations, and economic context via pricing and margins. Data provenance (list_sources, check_data_freshness) adds transparency. Minor gap: no specific fertiliser product matching (converting NPK requirements to actual product rates), though this may be considered out of scope for a pure advisory server.
Available Tools
10 tools

about (A)
Get server metadata: name, version, coverage, data sources, and links.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
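With no parameters, a call to `about` reduces to the bare MCP `tools/call` envelope. A minimal sketch of the JSON-RPC request an agent would send (the `id` value is illustrative; only the method and params shape come from the MCP specification):

```python
import json

# JSON-RPC 2.0 envelope for an MCP tools/call invocation of `about`.
# The id is an arbitrary request identifier chosen by the client.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "about",
        "arguments": {},  # zero-parameter tool: empty arguments object
    },
}

print(json.dumps(request, indent=2))
```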
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Compensates by disclosing exactly what data is returned (the five metadata fields), effectively documenting output shape in lieu of output schema. Does not mention rate limits or caching, but appropriately describes the data contract.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with colon-separated list of return fields. No filler words; every token earns its place by conveying specific metadata categories retrieved.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description achieves completeness by enumerating the exact metadata fields returned. Sufficient for a simple discovery endpoint with no side effects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters per schema. Baseline score of 4 applies as there are no parameters requiring semantic clarification.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' plus exact resource 'server metadata' and enumerates returned fields (name, version, coverage, data sources, links). Clearly distinguishes from agricultural siblings (get_crop_details, calculate_margin, etc.) as the only introspection/metadata tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternatives mentioned, but implied context is clear as the sole metadata discovery tool. Lacks explicit guidance on when to call this versus operational tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calculate_margin (B)
Estimate gross margin for a crop. Uses current commodity price if price_per_tonne not provided.
| Name | Required | Description | Default |
|---|---|---|---|
| crop | Yes | Crop ID or name | |
| yield_t_ha | Yes | Expected yield in tonnes per hectare | |
| input_costs | No | Total input costs per hectare (GBP) | 0 |
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
| price_per_tonne | No | Override price (GBP/t); if omitted, the latest market price is used | latest market price |
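The margin arithmetic these parameters imply can be sketched directly. The formula below is an assumption (gross margin = yield × price − input costs); the server's exact methodology is not documented:

```python
def gross_margin(yield_t_ha: float, price_per_tonne: float,
                 input_costs: float = 0.0) -> float:
    """Gross margin in GBP/ha: revenue minus total input costs."""
    return yield_t_ha * price_per_tonne - input_costs

# e.g. 8.5 t/ha of winter wheat at 180 GBP/t with 950 GBP/ha of inputs
print(gross_margin(8.5, 180.0, 950.0))  # 580.0
```

Note that `input_costs` defaults to 0, matching the schema, so a call without it returns raw revenue rather than a true margin.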
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, placing full disclosure burden on the description. It mentions using 'current commodity price' but fails to clarify data freshness, calculation methodology, currency confirmation (though implied by schema), or safety characteristics (read-only vs destructive).
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient two-sentence structure. First sentence establishes purpose; second provides critical parameter guidance. No redundancy or extraneous information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 100% schema coverage, input parameters are well-documented elsewhere. However, lacking an output schema, the description omits what the tool returns (margin format, units, accompanying metadata) and omits behavioral constraints given the absence of annotations.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description repeats the schema's explanation of price_per_tonne fallback behavior without adding new semantic context (e.g., valid date ranges, price formats) beyond what the structured schema already provides.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Estimate') and resource ('gross margin for a crop'). However, it does not explicitly distinguish when to use this calculation tool versus sibling retrieval tools like 'get_commodity_price', though the price fallback mechanism implies a relationship.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides conditional usage guidance for the price_per_tonne parameter ('if...not provided'), indicating when the tool falls back to market data. However, it lacks explicit 'when-not-to-use' guidance or named alternatives to this tool.
check_data_freshness (A)
Check when data was last ingested, staleness status, and how to trigger a refresh.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It adds useful behavioral context by clarifying that the tool returns refresh trigger instructions, not just a boolean status. However, it omits whether this operation is expensive, cached, or what format the staleness status takes (enum, timestamp, boolean).
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the action verb and packs three distinct informational elements (ingestion time, staleness, refresh method) without redundancy. Every clause earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter diagnostic tool without an output schema, the description adequately conveys the semantic content of the return value (timestamp, status, and procedural guidance). While an explicit output schema would clarify structure, the description provides sufficient conceptual completeness for an agent to understand what intelligence it will receive.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which establishes a baseline score of 4 per the evaluation rules. The description correctly implies no filtering or configuration is needed by omitting parameter references, consistent with the empty schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies three specific functions: checking ingestion timestamp, staleness status, and refresh triggers. It uses specific verbs ('Check') and identifies the resource ('data'). However, it could better scope which data (e.g., crop vs. commodity) given the diverse sibling tools, and distinguish itself more explicitly from the data retrieval siblings.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The mention of 'how to trigger a refresh' implies a diagnostic workflow (check status before refreshing), providing implicit usage context. However, it lacks explicit guidance on when to prefer this over simply calling data retrieval tools directly, or whether this should be used proactively vs. reactively.
get_commodity_price (A)
Get latest commodity price for a crop with source attribution. Warns if data is stale (>14 days).
| Name | Required | Description | Default |
|---|---|---|---|
| crop | Yes | Crop ID or name | |
| market | No | Market type (e.g. ex-farm, delivered) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
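The >14-day staleness rule can be mirrored client-side when an agent wants to judge a returned price date itself. A sketch, assuming the threshold is a strict comparison on whole days (the server's exact rule is not documented):

```python
from datetime import date

STALE_AFTER_DAYS = 14  # threshold quoted in the tool description

def is_stale(price_date: date, today: date) -> bool:
    """True when a quoted price is more than 14 days old."""
    return (today - price_date).days > STALE_AFTER_DAYS

print(is_stale(date(2024, 1, 1), date(2024, 1, 10)))  # False (9 days old)
print(is_stale(date(2024, 1, 1), date(2024, 1, 16)))  # True (15 days old)
```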
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries full burden. It successfully discloses the 14-day staleness threshold and source attribution behavior, but lacks details on error handling (e.g., invalid crop IDs), response format, or caching behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first establishes purpose and scope, second provides critical data quality context. Every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple retrieval tool, but gaps remain given the absence of both annotations and output schema. The description omits return value structure, error scenarios, and differentiation from the specialized 'check_data_freshness' sibling tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'crop' implicitly but adds no supplementary context about parameter formats, valid values for 'market', or the default jurisdiction behavior beyond what the schema already provides.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool retrieves latest commodity prices for crops and mentions source attribution. However, it does not explicitly differentiate from sibling tools like 'calculate_margin' or 'check_data_freshness' that may operate on similar data.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The staleness warning (>14 days) provides implicit guidance about data quality expectations, but there is no explicit guidance on when to use this versus 'check_data_freshness' or other price-related siblings, nor any prerequisites mentioned.
get_crop_details (A)
Get full profile for a crop: nutrient offtake, typical yields, growth stages.
| Name | Required | Description | Default |
|---|---|---|---|
| crop | Yes | Crop ID or name | |
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It partially compensates by disclosing the return data structure (nutrient offtake, yields, growth stages), which is crucial given the lack of output schema. However, it omits operational details like rate limits, caching behavior, error conditions, or data freshness guarantees.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the action ('Get full profile') and uses a colon to specify the payload contents. There is no redundant or wasted text; every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 primitive parameters, no nesting) and lack of output schema, the description adequately compensates by listing the three key data categories returned. It successfully conveys what the tool does and returns, though explicit guidance on the jurisdiction parameter's default behavior (implied in schema but not description) would strengthen it further.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage (both 'crop' and 'jurisdiction' are documented in the schema), establishing a baseline of 3. The description does not add semantic context beyond the schema (e.g., no guidance on jurisdiction defaults or crop ID formats), but the high schema coverage means no additional compensation is required.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') with clear resource ('full profile for a crop') and enumerates the specific data categories returned (nutrient offtake, yields, growth stages). The 'full profile' framing effectively distinguishes this from sibling tools like 'list_crops' (enumeration) and 'search_crop_requirements' (filtered search).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through the 'full profile' concept, suggesting this tool retrieves comprehensive data for a specific crop versus listing or searching. However, it lacks explicit when-to-use guidance or named alternatives (e.g., no mention of when to use 'search_crop_requirements' instead).
get_nutrient_plan (A)
Get NPK fertiliser recommendation for a specific crop and soil type. Based on AHDB RB209.
| Name | Required | Description | Default |
|---|---|---|---|
| crop | Yes | Crop ID or name (e.g. winter-wheat) | |
| sns_index | No | Soil Nitrogen Supply index (0-6) | |
| soil_type | Yes | Soil type ID or name (e.g. heavy-clay) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
| previous_crop | No | Previous crop group for rotation adjustment | |
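Putting the required and optional parameters together, the arguments object for a typical call might look like the following. Values are illustrative, reusing the example IDs from the schema (winter-wheat, heavy-clay); the previous_crop value is hypothetical:

```python
# Illustrative arguments for a get_nutrient_plan call. crop and
# soil_type are required; the rest are optional per the schema.
arguments = {
    "crop": "winter-wheat",
    "soil_type": "heavy-clay",
    "sns_index": 2,              # Soil Nitrogen Supply index, documented range 0-6
    "previous_crop": "cereals",  # hypothetical group value for rotation adjustment
    "jurisdiction": "GB",        # schema default, shown explicitly here
}

# Client-side sanity check mirroring the documented 0-6 range.
assert 0 <= arguments["sns_index"] <= 6
```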
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds data provenance (AHDB RB209) but omits critical operational details: read-only nature, error handling for invalid crop/soil combinations, or response structure (no output schema exists to cover this).
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. Front-loaded with action and resource, second sentence provides essential provenance context. Every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Minimally viable for a 5-parameter recommendation tool. It names the required inputs and output type but ignores optional parameters (jurisdiction, sns_index, previous_crop) that significantly affect recommendations. No output schema exists, though the description partially compensates by specifying 'NPK fertiliser recommendation'.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description mentions 'crop and soil type' aligning with required parameters, but adds no syntax details, format examples, or clarification on how optional params (sns_index, previous_crop) modify recommendations.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' with clear resource 'NPK fertiliser recommendation' and scope (specific crop/soil type). Distinguishes from siblings like get_crop_details (general info) and get_soil_classification (taxonomy) by specifying the nutrient planning output.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through specificity (NPK recommendations) and cites authority (AHDB RB209), but lacks explicit when-to-use guidance versus search_crop_requirements or get_crop_details which may overlap in crop data domain.
get_soil_classification (B)
Get soil group, characteristics, and drainage class for a soil type or texture.
| Name | Required | Description | Default |
|---|---|---|---|
| texture | No | Soil texture (e.g. clay, sand, loam) | |
| soil_type | No | Soil type ID or name | |
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
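All three parameters are optional in the schema, yet a call with neither texture nor soil_type has nothing to resolve. A client-side guard sketch; the server's actual validation behaviour is undocumented:

```python
def validate_soil_query(texture=None, soil_type=None):
    """Require at least one identifier before calling get_soil_classification.

    Whether the server accepts both identifiers at once, or prefers one
    over the other, is not documented; this only enforces the minimal
    assumption that an empty query is meaningless.
    """
    if texture is None and soil_type is None:
        raise ValueError("provide texture or soil_type (or both)")

validate_soil_query(texture="clay")          # ok
validate_soil_query(soil_type="heavy-clay")  # ok
```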
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses what data is returned (group, characteristics, drainage class) which helps compensate for the missing output schema, but fails to mention safety (read-only vs destructive), error handling for invalid soil types, or rate limiting.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, efficient sentence with no redundancy. Information is front-loaded with the verb 'Get' and immediately specifies both the resource and return payload.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple lookup tool but lacks clarification on parameter requirements (all 3 are optional). The mention of return fields partially compensates for no output schema, though it should explicitly state that at least one identifier (texture or soil_type) should be provided.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While schema coverage is 100%, the description adds value by framing texture and soil_type as alternative identifiers ('soil type or texture'), implying the query logic. However, it doesn't clarify that all parameters are optional or explain the GB default for jurisdiction.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific outputs (soil group, characteristics, drainage class) and inputs (soil type or texture) clearly. However, it doesn't explicitly differentiate from related sibling tools like get_crop_details that might return overlapping agricultural data.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus get_crop_details or search_crop_requirements. Doesn't clarify whether texture and soil_type are mutually exclusive or if both can be provided, nor when the jurisdiction parameter is necessary.
list_crops (B)
List all crops in the database, optionally filtered by crop group.
| Name | Required | Description | Default |
|---|---|---|---|
| crop_group | No | Filter by crop group (e.g. cereals) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. It fails to mention the default jurisdiction behavior (GB), potential pagination for 'all crops', or what constitutes a valid 'crop_group' value. The agent knows only that it returns a list.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence of 11 words. It is front-loaded with the action ('List') and includes only essential qualifying information ('optionally filtered by crop group'). Zero redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with 2 parameters and 100% schema coverage, the description is minimally viable. However, given the lack of annotations and output schema, it should ideally disclose return format (array vs object), pagination behavior, or the significance of the default jurisdiction.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description mentions the 'crop_group' filter but completely omits the 'jurisdiction' parameter despite its significant default value. It does not add syntax details or valid value ranges beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and resource 'crops' with scope 'in the database'. The phrase 'all crops' implicitly distinguishes from sibling 'get_crop_details' (singular retrieval), but could be more explicit about when to choose this over 'search_crop_requirements'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions optional filtering by crop group but provides no explicit guidance on when to use this tool versus siblings like 'get_crop_details' or 'search_crop_requirements'. No prerequisites or exclusion criteria are stated.
list_sources (A)
List all data sources with authority, URL, license, and freshness info.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It compensates partially by specifying the data fields returned (authority, URL, license, freshness), effectively substituting for a missing output schema. However, it omits safety characteristics (read-only status), rate limits, or pagination behavior that annotations would typically cover.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single, efficient sentence that front-loads the action verb. Every word contributes essential information about either the operation type or the returned data structure, with no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero inputs) and lack of output schema, the description adequately compensates by detailing the return values. For a metadata discovery tool with no parameters, this level of description is sufficient for correct invocation, though an output schema would be preferable for comprehensive completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters and 100% schema description coverage (vacuously true for empty schema). The baseline score of 4 applies as there are no parameters requiring semantic clarification beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('List') and resource ('data sources') and enumerates specific fields returned (authority, URL, license, freshness). It clearly distinguishes from sibling 'list_crops' by specifying 'data sources' rather than agricultural entities.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when to use this tool versus alternatives like 'check_data_freshness' or 'about', nor does it mention prerequisites or filtering capabilities. Usage must be inferred solely from the purpose statement.
search_crop_requirements (B)
Search crop nutrient requirements, soil data, and recommendations. Use for broad queries about crops and nutrients.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Free-text search query | |
| limit | No | Max results (max: 50) | 20 |
| crop_group | No | Filter by crop group (e.g. cereals, oilseeds) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
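As a rough sketch of how an agent might invoke this tool over MCP's JSON-RPC transport: the tool and parameter names below come from the table above, but the clamping of `limit` to the documented maximum of 50 and the `GB` jurisdiction default are assumptions inferred from the parameter descriptions, and `build_search_call` is a hypothetical helper, not part of the server.

```python
import json

def build_search_call(query, crop_group=None, jurisdiction="GB", limit=20):
    """Build a JSON-RPC 2.0 tools/call payload for search_crop_requirements.

    Parameter names mirror the table above; limit is clamped to the
    documented maximum of 50 (an assumption based on the description).
    """
    arguments = {"query": query, "limit": min(limit, 50), "jurisdiction": jurisdiction}
    if crop_group is not None:
        arguments["crop_group"] = crop_group
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": "search_crop_requirements", "arguments": arguments},
    }

payload = build_search_call("winter wheat nitrogen", crop_group="cereals", limit=80)
print(json.dumps(payload, indent=2))
```

Note that nothing in the description itself tells the agent what the results look like, which is exactly the gap the completeness critique below identifies.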
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but offers none. It omits critical details: data source freshness (relevant given the sibling 'check_data_freshness'), pagination behavior, result ranking logic, or whether results include raw data versus processed recommendations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences efficiently structured with function first, then usage context. No redundant words, though brevity comes at the cost of omitting behavioral details and sibling distinctions that would aid agent selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, plus a crowded sibling namespace with overlapping concerns (get_crop_details, get_nutrient_plan, get_soil_classification), the description is insufficient. It fails to explain return value structure or clarify how this search tool complements the specific retrieval tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (all 4 parameters documented), establishing a baseline of 3. The description adds no supplementary parameter guidance (e.g., query syntax tips, jurisdiction default implications, or valid crop_group values beyond the schema example).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool's function (searching crop nutrient requirements, soil data, and recommendations) and scope (broad queries). However, it lacks explicit differentiation from siblings like 'get_crop_details' or 'get_nutrient_plan' which also retrieve crop information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Use for broad queries about crops and nutrients' provides implied usage guidance, but fails to specify when NOT to use this tool (e.g., for specific crop IDs) or explicitly name alternative tools like 'get_crop_details' for targeted lookups.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Verify Ownership
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [
    {
      "email": "your-email@example.com"
    }
  ]
}
```

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
Once verified, owners can:
- Control the server's listing on Glama, including description and metadata
- Receive usage reports showing how the server is being used
- Get monitoring and health status updates for the server
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.