Server Details
UK pest, disease, and weed management — symptom diagnosis, IPM, approved products
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Ansvar-Systems/uk-pest-management-mcp
- GitHub Stars: 0
Tool Definition Quality
Average score: 3.3/5 across all 10 tools.
Tools are generally well-differentiated by function. Minor overlap exists between `get_treatments` and `get_ipm_guidance` (both address control methods), and the three metadata tools (`about`, `check_data_freshness`, `list_sources`) serve related but distinct purposes. The distinction between `search_pests` (name-based) and `identify_from_symptoms` (diagnostic) is clear.
Strong adherence to the verb_noun convention throughout (`get_pest_details`, `search_crop_threats`, `list_sources`). The outlier `about` breaks the pattern (it should be `get_server_info` or similar). `identify_from_symptoms` uses a preposition but remains readable and consistent in style.
Ten tools is ideal for this domain: 3 for data provenance/metadata, 3 for pest discovery/search, and 4 for detailed guidance and treatments. The scope is well-focused on UK agricultural pest reference data without bloat or trivial fragmentation.
Excellent coverage of the reference data lifecycle: pest identification (symptom-based and search), crop threat assessment, detailed profiles, treatment options (chemical/biological/cultural), approved products lookup, and IPM strategy guidance. Minor gap: no retrieval of specific product details by ID, only search.
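The recurring critiques in the per-tool evaluations below (no behavioral annotations, no "when to use X vs Y" guidance, the `about` naming outlier) can be illustrated concretely. A hedged sketch of how the `about` tool might be redeclared to address them, using plain dicts in the MCP tool-definition shape; the improved description text and the `has_usage_guidance` helper are illustrative assumptions, not part of this server:

```python
# Hypothetical sketch: a renamed `about` tool with MCP behavioral annotations
# (readOnlyHint, idempotentHint, openWorldHint are optional hints in the MCP
# spec) and explicit differentiation from its sibling metadata tools.
improved_about_tool = {
    "name": "get_server_info",  # verb_noun, replacing the outlier `about`
    "description": (
        "Get server metadata: name, version, coverage, data sources, and links. "
        "Read-only and side-effect free. Call once at session start to verify "
        "version and coverage; use list_sources for per-source detail and "
        "check_data_freshness for ingestion status."
    ),
    "inputSchema": {"type": "object", "properties": {}},
    "annotations": {
        "readOnlyHint": True,    # no side effects
        "idempotentHint": True,  # repeat calls return the same data
        "openWorldHint": False,  # operates only on local reference data
    },
}

def has_usage_guidance(tool: dict) -> bool:
    """Crude check for the 'when to use X vs Y' guidance the rubric asks for."""
    desc = tool["description"].lower()
    return any(kw in desc for kw in ("use ", "call ", "instead of"))
```

A definition like this would directly lift the scores on the disclosure and usage-guidance dimensions that drag down several tools below.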
Available Tools
10 tools

about (Grade A)
Get server metadata: name, version, coverage, data sources, and links.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses what data is returned (name, version, coverage, etc.) but omits operational details like caching behavior, authentication requirements, or idempotency that would be useful for a metadata endpoint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence that front-loads the action and precisely lists the metadata fields returned without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity and lack of output schema, the description adequately compensates by enumerating the specific metadata fields returned (name, version, coverage, data sources, links), providing sufficient context for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which per the rubric establishes a baseline of 4. No additional parameter semantics are needed or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('server metadata') and clearly distinguishes this tool from its agricultural-data siblings by focusing on server properties rather than crop/pest data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the distinction between server metadata and domain data is implied by the content, the description lacks explicit guidance on when to invoke this tool (e.g., 'call first to verify version') or prerequisites relative to other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_data_freshness (Grade B)
Check when data was last ingested, staleness status, and how to trigger a refresh.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It adequately previews the three information categories returned (ingestion time, staleness, refresh method) but omits operational details like whether checking freshness has side effects, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence efficiently packs three distinct information domains (timestamp, status, refresh mechanism) without redundancy. Every clause serves a descriptive purpose and the front-loaded structure immediately communicates the tool's investigative nature.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description partially compensates by outlining expected return content. However, it fails to specify which 'data' is being referenced (presumably the agricultural dataset from sibling tools) or the format of the staleness indicator.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per evaluation rules, zero-parameter tools receive a baseline score of 4. The description correctly implies no filtering or configuration is needed for this check.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves ingestion timestamp, staleness status, and refresh instructions using specific verbs. It implicitly distinguishes itself from agricultural data siblings (get_pest_details, etc.) by focusing on metadata rather than domain content, though explicit differentiation is absent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists what information is returned but provides no guidance on when to invoke this tool versus alternatives. It does not indicate whether this should be called before querying agricultural data or during error conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_approved_products (Grade C)
Search UK-approved pesticide products by active substance, target pest, or crop.
| Name | Required | Description | Default |
|---|---|---|---|
| crop | No | Filter by approved crop | |
| target_pest | No | Filter by target pest name | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
| active_substance | No | Filter by active substance (e.g. prothioconazole) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It only implies a read-only operation via the word 'Search' but fails to disclose return format, pagination behavior, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single-sentence description is appropriately front-loaded with the action verb and contains no redundant words. However, given the lack of annotations and output schema, extreme brevity becomes a liability rather than a virtue.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the 100% schema coverage compensates for parameter documentation, the description is incomplete regarding return values (no output schema exists) and behavioral context. For a 4-parameter search tool with no annotations, the description meets minimum viability but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description aligns with the schema by listing the three filterable dimensions (active substance, target pest, crop) but does not add syntax details, format constraints, or examples beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific resource (UK-approved pesticide products) and the searchable dimensions (active substance, target pest, crop). It uses a specific action verb ('Search'). However, it does not explicitly differentiate this tool from siblings like 'get_treatments' or 'search_pests'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides geographic context ('UK-approved') but offers no explicit guidance on when to use this tool versus alternatives like 'get_treatments' or 'get_ipm_guidance'. It does not mention prerequisites or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
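To make the filter semantics above concrete, here is a minimal sketch of a `tools/call` request for `get_approved_products`, following the standard MCP JSON-RPC shape. The filter values are illustrative only, and `jurisdiction` is deliberately omitted to rely on the documented GB default:

```python
import json

# Hypothetical request payload; all filters are optional per the schema,
# and jurisdiction falls back to GB when omitted.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_approved_products",
        "arguments": {
            "active_substance": "prothioconazole",
            "crop": "winter wheat",
        },
    },
}

payload = json.dumps(request)
```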
get_ipm_guidance (Grade C)
Get integrated pest management guidance for a crop: thresholds, monitoring, cultural controls.
| Name | Required | Description | Default |
|---|---|---|---|
| crop_id | Yes | Crop ID (e.g. winter-wheat) | |
| pest_id | No | Optional pest ID to narrow guidance | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure. While it lists the content types returned (thresholds, monitoring, cultural controls), it fails to indicate whether this is a read-only lookup, if data is real-time or cached, or any jurisdiction-specific limitations beyond the parameter default.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence with a colon-separated list of return values. It is appropriately front-loaded with the core action and resource. It could accommodate one additional sentence for usage context without becoming verbose, but there is no wasted language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a lookup tool with three simple parameters and no output schema, the description adequately hints at the return structure by listing content types (thresholds, monitoring, cultural controls). However, it misses the opportunity to clarify the jurisdiction scoping (GB default) or the value of pest_id filtering, which would help agents understand the guidance granularity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline score is 3. The description adds no parameter-specific context (e.g., explaining that pest_id narrows the guidance scope, or that jurisdiction defaults to GB), but the schema adequately documents all three parameters without requiring additional description support.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'integrated pest management guidance' and specifies the content types returned (thresholds, monitoring, cultural controls). It effectively distinguishes this from sibling tools like get_treatments (which implies chemical interventions) and get_pest_details (which implies biological information), though it doesn't explicitly name these alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like get_treatments, get_pest_details, or search_crop_threats. It doesn't mention prerequisites, such as needing a valid crop_id from a previous search, or when the optional pest_id parameter should be utilized.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pest_details (Grade B)
Get full pest profile: identification, lifecycle, symptoms, crops affected.
| Name | Required | Description | Default |
|---|---|---|---|
| pest_id | Yes | Pest ID (e.g. septoria-tritici, blackgrass) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With zero annotations, the description carries full disclosure burden. It compensates partially by previewing return data categories (identification, lifecycle, symptoms, crops affected), but omits safety characteristics, data freshness, or error conditions. 'Get' implies read-only, but this isn't explicitly stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient single sentence with colon-separated list. Front-loaded with the action and scope, zero redundant words. Every element earns its place by describing either the operation or the return payload structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter retrieval tool without output schema, the description adequately compensates by listing the four data categories returned. It appropriately relies on the schema for parameter documentation. Could be improved by noting this requires an exact pest_id versus fuzzy matching.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with pest_id providing helpful examples (septoria-tritici, blackgrass) and jurisdiction documenting the ISO standard and default value. The description adds no parameter-specific guidance beyond what the schema already provides, meeting the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('full pest profile') and enumerates specific content areas (identification, lifecycle, symptoms, crops affected). It implies this is a detailed lookup tool versus sibling search/identification tools, though it doesn't explicitly name the alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance is provided. The agent must infer from the 'full profile' language that this requires a specific pest_id, but the description doesn't contrast this with search_pests or identify_from_symptoms siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_treatments (Grade B)
Get treatment options for a pest: chemical, cultural, and biological approaches.
| Name | Required | Description | Default |
|---|---|---|---|
| pest_id | Yes | Pest ID | |
| approach | No | Filter by approach: chemical, cultural, or biological | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
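Since the schema enumerates exactly three valid `approach` values, a calling agent can validate arguments before invoking the tool rather than relying on a server-side error. A hedged client-side sketch; the helper name and minimum-length checks are assumptions, not part of this server:

```python
from typing import Optional

VALID_APPROACHES = {"chemical", "cultural", "biological"}

def build_treatment_args(pest_id: str, approach: Optional[str] = None,
                         jurisdiction: str = "GB") -> dict:
    """Assemble arguments for get_treatments, rejecting unknown approach
    values before the call is made."""
    args = {"pest_id": pest_id, "jurisdiction": jurisdiction}
    if approach is not None:
        approach = approach.strip().lower()
        if approach not in VALID_APPROACHES:
            raise ValueError(
                f"approach must be one of {sorted(VALID_APPROACHES)}")
        args["approach"] = approach
    return args
```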
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether this is a read-only operation, what happens if the pest_id is invalid, the return format, or that jurisdiction defaults to GB. The mention of the three approaches adds domain context but not behavioral transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the primary action. Every word earns its place—the colon structure efficiently enumerates the treatment types without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 3-parameter schema with full coverage and no output schema, the description is minimally adequate. However, it lacks important context about return values, the default jurisdiction behavior mentioned in the schema, and how this tool relates to the broader pest management workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds marginal value by reinforcing that 'chemical, cultural, and biological' are the valid approach values (implied domain semantics), but doesn't elaborate on the jurisdiction parameter's default behavior or pest_id format requirements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('treatment options') with specific scope ('chemical, cultural, and biological approaches'). However, it doesn't explicitly differentiate from sibling tools like get_approved_products or get_ipm_guidance, which could overlap conceptually.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like get_approved_products or get_ipm_guidance. It doesn't mention prerequisites (e.g., needing a valid pest_id from search_pests) or when to use the approach filter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
identify_from_symptoms (Grade A)
Symptom-based differential diagnosis. Describe what you see and get ranked pest/disease matches with confidence scores.
| Name | Required | Description | Default |
|---|---|---|---|
| crop | No | Crop being assessed (for context) | |
| symptoms | Yes | Description of observed symptoms (e.g. "yellow patches on lower leaves") | |
| plant_part | No | Affected plant part (e.g. leaves, stem, roots) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses output behavior ('ranked' matches, 'confidence scores'), but omits operational details like whether this is read-only, rate limits, or what happens when no matches are found. It adds some value but not comprehensive behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first establishes purpose, the second covers input method and output format. Information is front-loaded with 'Symptom-based differential diagnosis' and every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 simple parameters and no output schema, the description adequately compensates by explaining return values (ranked matches with confidence scores). It lacks only minor operational details (read-only status, error conditions) to be fully complete for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds 'Describe what you see' which semantically reinforces the observational nature of the 'symptoms' parameter, but does not elaborate on other parameters (crop, plant_part, jurisdiction) beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific terminology 'Symptom-based differential diagnosis' and clearly identifies the resource (pest/disease matches with confidence scores). It effectively distinguishes from sibling tools like 'search_pests' or 'get_pest_details' by emphasizing the diagnostic/matching nature rather than simple retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Describe what you see' implies visual symptom observation is required, but there is no explicit guidance on when to use this versus sibling alternatives like 'search_pests' or 'search_crop_threats', nor any exclusions or prerequisites mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sources (Grade A)
List all data sources with authority, URL, license, and freshness info.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. It reveals output content (authority, URL, license, freshness fields) but omits safety profile, rate limits, pagination behavior, or whether 'all' implies unfiltered bulk retrieval.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient single sentence (11 words) with zero waste. Information is front-loaded with action verb first, followed by scope and return value details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a zero-parameter list operation. Compensates for missing output schema by enumerating expected return fields (authority, URL, license, freshness). Would benefit from noting if results are paginated or cached.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, establishing baseline 4 per rubric. No parameter documentation needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('List') and resource ('data sources') with specific scope ('all'). The inclusion of metadata fields (authority, URL, license, freshness) helps distinguish this from sibling check_data_freshness, though it doesn't explicitly state when to prefer one over the other.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like check_data_freshness or about. No mention of prerequisites or filtering capabilities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_crop_threats (Grade C)
Find all pests, diseases, and weeds affecting a specific crop.
| Name | Required | Description | Default |
|---|---|---|---|
| crop | Yes | Crop name (e.g. wheat, barley, oilseed rape) | |
| growth_stage | No | Filter by growth stage (e.g. tillering, stem extension) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to specify return format (list vs. details), data freshness, completeness of the threat database, or error handling when no threats are found for a crop.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is front-loaded with the action verb and contains no redundant words. However, given the lack of annotations and presence of similar sibling tools, it may be overly terse rather than appropriately concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with three parameters, no annotations, no output schema, and similar sibling tools ('search_pests'), the description is incomplete. It lacks output specification, differentiation from related tools, and behavioral context that would help an agent invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, documenting all three parameters (crop, growth_stage, jurisdiction) with examples. The description mentions 'specific crop' which aligns with the required parameter, but adds no additional semantics about the optional jurisdiction default (GB) or growth stage filtering beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Find') and resources ('pests, diseases, and weeds') scoped to a 'specific crop'. However, it does not differentiate from the sibling tool 'search_pests', which likely searches by pest characteristics rather than crop association.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like 'search_pests' (which searches by pest name/attributes) or 'identify_from_symptoms' (which identifies threats by symptoms). No prerequisites or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_pests (Grade B)
Search pests, diseases, and weeds by name or description. Use for broad queries about crop threats.
| Name | Required | Description | Default |
|---|---|---|---|
| crop | No | Filter results mentioning this crop | |
| limit | No | Max results (default: 20, max: 50) | |
| query | Yes | Free-text search query | |
| pest_type | No | Filter by type: disease, weed, or pest | |
| jurisdiction | No | ISO 3166-1 alpha-2 code (default: GB) | |
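The `limit` parameter documents its own semantics (default 20, max 50), which an agent can enforce before the call. A minimal sketch of that clamping logic; the helper name and the floor of 1 are assumptions:

```python
def clamp_limit(limit=None, default=20, maximum=50):
    """Apply the documented search_pests limit semantics on the client
    side: None means the default, and values are clamped to [1, maximum]."""
    if limit is None:
        return default
    return max(1, min(limit, maximum))
```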
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It does not say what the search returns (full records, IDs, or summaries), whether results can be fed into get_pest_details, how pagination works, or whether matching is fuzzy or exact. 'Search' implies read-only, but this is never stated explicitly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero redundancy. The first sentence states functionality immediately; the second provides usage context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 well-documented parameters but no output schema and no annotations, the description is minimally adequate. It omits critical context about return values (e.g., whether results include IDs needed for get_pest_details) and doesn't clarify the relationship to the similarly named search_crop_threats sibling, which could confuse agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds minimal semantic value beyond the schema, though it clarifies that the 'query' parameter covers 'name or description' and implicitly maps to the pest_type filter by listing all three types (pests, diseases, weeds).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches 'pests, diseases, and weeds by name or description' (specific verb + resource). The phrase 'broad queries' implicitly distinguishes it from sibling get_pest_details (likely for specific records) and identify_from_symptoms (symptom-based), though it doesn't explicitly name these alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides basic usage context ('Use for broad queries about crop threats'), implying when the tool is appropriate. However, it lacks explicit guidance on when NOT to use it (e.g., when a specific pest ID is already known) or explicit references to sibling alternatives like search_crop_threats, which appears functionally similar.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Verify Ownership
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [
{
"email": "your-email@example.com"
}
]
}The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Receive usage reports showing how your server is being used
- Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.