Server Details
UK farm health and safety — HSE guidance, machinery, COSHH, children on farms
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: Ansvar-Systems/uk-farm-safety-mcp
- GitHub Stars: 0
Tool Definition Quality
Average 3.6/5 across 10 of 10 tools scored.
Each tool targets a distinct farm safety domain or utility function with no overlapping scope. The get_* tools cover discrete regulatory areas (COSHH, CHAW, RIDDOR) and hazard categories (machinery, livestock), while utility tools handle search, metadata, and data provenance without ambiguity.
Tools follow a consistent verb_noun snake_case pattern (get_coshh_requirements, search_safety_guidance) with clear action descriptors. The only deviation is 'about', which breaks the pattern but follows common convention for server metadata endpoints.
Ten tools strike an appropriate balance for a specialized UK farm safety domain, covering major hazard categories (machinery, livestock, chemicals), specific regulations (CHAW, COSHH, RIDDOR), and necessary utility functions (search, sources, freshness). The scope is focused without being fragmented or overwhelming.
The toolset covers high-risk farm activities and key regulatory frameworks but omits specific guidance for other major UK farm hazards like working at height, slurry gas safety, or electrical safety. However, the search_safety_guidance tool likely provides access to this content, and the risk assessment template offers a generic framework applicable to all activities.
Available Tools
10 tools
about (Grade: A)
Get server metadata: name, version, coverage, data sources, and links.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
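To make the call shape concrete, here is a hypothetical `tools/call` payload for `about`, assuming the standard JSON-RPC 2.0 envelope that MCP clients send over Streamable HTTP. The request id is illustrative; nothing here comes from this server's own documentation.

```python
import json

# Hypothetical JSON-RPC 2.0 envelope for invoking the zero-parameter
# `about` tool over the server's Streamable HTTP transport.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "about",
        "arguments": {},  # `about` takes no parameters
    },
}
print(json.dumps(request, indent=2))
```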
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively specifies the return payload structure by listing metadata fields, but omits explicit statements about safety, idempotency, side effects, or rate limiting that would be expected for complete transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, information-dense sentence that front-loads the action and precisely enumerates return values without redundancy. Every word contributes essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter metadata tool without an output schema, the description is sufficiently complete. It compensates for the missing output schema by detailing the expected fields in the response, though it could be strengthened with a brief note about the operation being safe/read-only.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, triggering the baseline score of 4 per evaluation rules. No parameter semantic clarification is required or provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('server metadata'), and enumerates exact fields returned (name, version, coverage, data sources, links). It clearly distinguishes from siblings, which are all domain-specific safety/compliance tools rather than introspection utilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage context (retrieving server information), it lacks explicit guidance on when to call this versus other tools, or prerequisites for invocation. However, given the tool's obvious introspective nature and lack of overlapping functionality with siblings, the usage pattern is reasonably inferred.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_data_freshness (Grade: A)
Check when data was last ingested, staleness status, and how to trigger a refresh.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adequately discloses what information is returned (last ingestion time, staleness status, refresh trigger instructions), but fails to declare safety properties (read-only nature), side effects, or performance characteristics that would normally be covered by annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficiently structured with three parallel information targets (last ingested, staleness, refresh trigger). Every clause earns its place by describing distinct return value categories without redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description appropriately compensates by enumerating the three specific information categories returned. For a zero-parameter diagnostic tool, this level of detail is sufficient, though explicit mention of the return structure format would improve completeness further.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Per the rubric, zero-parameter tools receive a baseline score of 4. The description does not need to compensate for missing parameter documentation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Check') and clearly identifies the resource (data ingestion timestamps, staleness status, refresh methods). It effectively distinguishes this diagnostic tool from content-retrieval siblings like 'get_coshh_requirements' or 'search_safety_guidance' by focusing on metadata rather than content.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (checking freshness before relying on data), but lacks explicit guidance on when to invoke this tool versus others, or prerequisites for interpretation. It does not mention whether this should be called before 'list_sources' or after discovering data discrepancies.
get_children_on_farms_rules (Grade: A)
Get rules about children working on or visiting farms. Age-group restrictions, permitted activities, supervision requirements under CHAW Regulations.
| Name | Required | Description | Default |
|---|---|---|---|
| activity | No | Activity (e.g. tractor, livestock, machinery) | |
| age_group | No | Age group (e.g. under-13, 13-15, 16-17) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
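Since all three parameters are optional filters, an agent can scope the query as narrowly or broadly as it needs. A minimal sketch of an arguments payload, using values from the schema's own examples (the surrounding call shape is our assumption, not documented by the server):

```python
# All parameters are optional; omitting one leaves that dimension
# unfiltered. Values below come from the schema's example lists.
arguments = {
    "age_group": "13-15",   # e.g. under-13, 13-15, 16-17
    "activity": "tractor",  # e.g. tractor, livestock, machinery
    # "jurisdiction" omitted -> the documented default "GB" applies
}
call = {"name": "get_children_on_farms_rules", "arguments": arguments}
```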
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. While it identifies the legal framework (CHAW Regulations), it omits operational details: whether results are cached, what happens when optional parameters are omitted (likely returns all rules), or if this requires specific permissions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence establishes domain; second lists specific rule categories. Information density is high with no filler words or redundant explanations.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and moderate complexity, the description adequately compensates for missing output schema by specifying return content types (restrictions, requirements). Would benefit from noting that all parameters are optional for filtering behavior.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds value by mapping parameters to domain concepts ('Age-group restrictions'→age_group, 'permitted activities'→activity) and providing critical legal context ('under CHAW Regulations') that explains the jurisdiction default of GB.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Get rules about children working on or visiting farms' provides clear verb and resource. The mention of 'CHAW Regulations' and child-specific restrictions clearly distinguishes this from siblings like get_machinery_safety or get_livestock_handling_safety.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through scope specification (children, CHAW Regulations) but lacks explicit when-to-use guidance versus related tools like get_machinery_safety or search_safety_guidance. No mention of prerequisite conditions or exclusion criteria.
get_coshh_requirements (Grade: B)
Get COSHH (Control of Substances Hazardous to Health) requirements for agricultural substances: pesticides, sheep dip, fuel, grain dust, ammonia.
| Name | Required | Description | Default |
|---|---|---|---|
| activity | No | Activity involving the substance (e.g. spraying, dipping, storage) | |
| substance | No | Substance type (e.g. pesticide, sheep dip, diesel, grain dust) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
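Because every parameter here is an optional filter, a thin builder that includes only the filters actually requested keeps payloads clean. This helper is our sketch (the function name is not part of the server's API):

```python
def build_coshh_call(substance=None, activity=None, jurisdiction=None):
    """Assemble arguments for get_coshh_requirements.

    Every parameter is optional, so only the filters the agent
    actually wants are included in the payload.
    """
    args = {}
    if substance is not None:
        args["substance"] = substance        # e.g. pesticide, sheep dip, diesel
    if activity is not None:
        args["activity"] = activity          # e.g. spraying, dipping, storage
    if jurisdiction is not None:
        args["jurisdiction"] = jurisdiction  # ISO 3166-1 alpha-2; defaults to GB
    return {"name": "get_coshh_requirements", "arguments": args}

call = build_coshh_call(substance="sheep dip", activity="dipping")
```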
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries full burden for behavioral disclosure. It fails to indicate whether this is read-only (implied by 'Get' but not stated), what format the requirements are returned in, error conditions, or data freshness considerations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently packs the action verb, regulatory framework acronym (with expansion), domain restriction (agricultural), and concrete examples. No wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter tool with complete schema coverage, the description is adequate but misses that all parameters are optional (required: 0) and provides no hint about the output structure since no output schema exists.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description provides examples of substances (pesticides, sheep dip, ammonia) that overlap with schema examples, adding minimal semantic value beyond the structured definitions of 'activity', 'substance', and 'jurisdiction'.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves COSHH requirements and specifies the agricultural domain (pesticides, sheep dip, fuel, etc.). However, it doesn't explicitly differentiate from the sibling 'search_safety_guidance' which might also return chemical safety information.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'search_safety_guidance' or 'get_risk_assessment_template'. The description implies usage through the COSHH specificity but lacks explicit when/when-not direction.
get_livestock_handling_safety (Grade: A)
Get safety guidance for handling livestock: cattle, sheep, pigs, horses. Includes hazards, control measures, and facility requirements.
| Name | Required | Description | Default |
|---|---|---|---|
| species | Yes | Animal species (e.g. cattle, sheep, pigs, horses) | |
| activity | No | Specific activity filter (e.g. calving, dipping, handling) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
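Here `species` is the only required parameter, so a client-side guard before dispatch is cheap insurance. A sketch under that assumption (the helper name is ours, not the server's):

```python
def build_livestock_call(species, activity=None):
    """Arguments for get_livestock_handling_safety; species is required."""
    if not species:
        raise ValueError("species is required (e.g. cattle, sheep, pigs, horses)")
    args = {"species": species}
    if activity:
        args["activity"] = activity  # optional filter: calving, dipping, handling
    return {"name": "get_livestock_handling_safety", "arguments": args}

call = build_livestock_call("cattle", activity="calving")
```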
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively compensates by describing the output structure ('Includes hazards, control measures, and facility requirements'), which is crucial given the absence of an output schema. It does not mention rate limits or auth requirements.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence establishes scope and target species; second sentence details output content. Information is front-loaded and every clause earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 3-parameter schema with no output schema and no annotations, the description provides adequate context by disclosing what content is returned. It appropriately omits parameter syntax details (covered by schema) but could strengthen completeness by explicitly contrasting with the general search_safety_guidance tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description reinforces the species parameter by listing examples in the text, but does not add semantic meaning beyond what the schema already provides (e.g., no clarification on activity syntax or jurisdiction default behavior).
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') with a clear resource ('safety guidance for handling livestock') and explicitly lists covered species (cattle, sheep, pigs, horses). It distinguishes from siblings like get_machinery_safety and get_coshh_requirements by domain (livestock handling) and scope (species-specific guidance).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through specificity (livestock focus) but lacks explicit guidance on when to use this versus search_safety_guidance or get_risk_assessment_template. No 'when-not' exclusions or prerequisite conditions are stated.
get_machinery_safety (Grade: A)
Get safety guidance for farm machinery: hazards, control measures, PPE, and legal requirements. Covers tractors, ATVs, chainsaws, combines, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| activity | No | Specific activity filter (e.g. pto, rollover, maintenance) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
| machine_type | Yes | Type of machinery (e.g. tractor, atv, chainsaw, combine) | |
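Since `machine_type` is required while the other parameters are optional, an agent can verify a candidate payload against the schema before calling. The validator below is our sketch, not part of the server:

```python
SCHEMA_REQUIRED = {"machine_type"}  # per the parameter table above

def validate_machinery_args(args):
    """Return the names of required fields missing from a candidate payload."""
    return SCHEMA_REQUIRED - args.keys()

# A well-formed call: machine_type present, optional activity filter added.
args = {"machine_type": "tractor", "activity": "pto"}
missing = validate_machinery_args(args)
```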
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses what information is returned (hazards, control measures, PPE, legal requirements) but fails to explicitly state behavioral traits like read-only status, idempotency, or jurisdiction default behavior (though the GB default is in the schema).
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence front-loads purpose and return value structure; second sentence provides concrete scope examples. Every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter tool without annotations or output schema, the description adequately compensates by detailing the four categories of safety guidance returned (hazards, controls, PPE, legal). It sufficiently covers the tool's scope, though explicitly noting the required machine_type parameter would strengthen completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description reinforces machine_type semantics by listing examples (tractors, ATVs, chainsaws, combines) matching the schema, but adds no additional context for 'activity' or 'jurisdiction' parameters beyond what the schema provides.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Get' with clear resource 'safety guidance for farm machinery' and enumerates return content (hazards, control measures, PPE, legal requirements). It effectively distinguishes from siblings like get_livestock_handling_safety and get_children_on_farms_rules by specifying machinery focus and listing examples (tractors, ATVs, chainsaws).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The machinery focus provides implied usage context, but there is no explicit guidance on when to use this versus the general search_safety_guidance tool or whether to use get_coshh_requirements for chemical-related machinery hazards. No prerequisites or exclusions are stated.
get_reporting_requirements (Grade: A)
Get RIDDOR incident reporting requirements: what to report, deadlines, notification methods, and record retention.
| Name | Required | Description | Default |
|---|---|---|---|
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
| incident_type | Yes | Incident type (e.g. fatal, specified injury, dangerous occurrence, occupational disease) | |
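A sketch of how an agent might assemble this call, with the schema's example incident types listed for reference (the server may accept values beyond these examples; the helper name is ours):

```python
# Incident types the schema lists as examples; the server may accept more.
EXAMPLE_INCIDENT_TYPES = [
    "fatal",
    "specified injury",
    "dangerous occurrence",
    "occupational disease",
]

def build_riddor_call(incident_type):
    """Arguments for get_reporting_requirements; incident_type is required."""
    return {
        "name": "get_reporting_requirements",
        "arguments": {"incident_type": incident_type},  # jurisdiction defaults to GB
    }

call = build_riddor_call("dangerous occurrence")
```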
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully discloses the content scope of returned data (the four information categories listed), but lacks operational details such as data source, update frequency, or whether this references live regulations versus cached guidance.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence, front-loaded with the action 'Get RIDDOR incident reporting requirements' followed by a colon-separated list of return content. Zero redundancy or wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 2-parameter schema with 100% coverage and no output schema, the description adequately covers return value content by listing the four information categories provided. It could be improved by explicitly noting UK jurisdiction (though implied by RIDDOR and the GB default) or referencing data freshness given the existence of check_data_freshness as a sibling.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with incident_type and jurisdiction fully documented with examples and format specifications. The description mentions 'RIDDOR' (implying UK jurisdiction) but adds no further parameter semantics beyond the schema. Baseline 3 is appropriate when schema coverage is complete.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'RIDDOR incident reporting requirements' with specific content categories (what to report, deadlines, notification methods, record retention). The RIDDOR specificity effectively distinguishes it from general safety guidance siblings like search_safety_guidance or get_coshh_requirements.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through the RIDDOR domain specificity, but provides no explicit when-to-use guidance or alternatives. It does not clarify when to use this versus the general search_safety_guidance sibling, though the domain-specific naming helps.
get_risk_assessment_template (Grade: B)
Get a risk assessment template for a farm activity: hazards, controls, residual risk, and review frequency.
| Name | Required | Description | Default |
|---|---|---|---|
| activity | Yes | Farm activity (e.g. tractor operation, cattle handling, chainsaw use, pesticide application) | |
| jurisdiction | No | ISO 3166-1 alpha-2 code | GB |
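The description promises four template sections, so an agent could sanity-check a response against that expectation before relying on it. A sketch only: the key names below are illustrative, since the server's actual response field names are not documented here.

```python
# Sections the description says every template includes.
# Key names are illustrative assumptions, not the server's schema.
EXPECTED_SECTIONS = {"hazards", "controls", "residual risk", "review frequency"}

def missing_sections(template: dict) -> set:
    """Which documented template sections are absent from a response?"""
    return EXPECTED_SECTIONS - set(template)

# e.g. a response missing 'review frequency' would be flagged:
gap = missing_sections({"hazards": [], "controls": [], "residual risk": "low"})
```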
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It partially compensates by listing the expected template fields (hazards, controls, etc.), hinting at the output structure. However, it lacks critical behavioral context: it doesn't state whether the operation is read-only/safe, what happens if the activity is not found, or how the jurisdiction parameter affects the template content.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficiently structured sentence with zero waste. It front-loads the core action ('Get a risk assessment template') and uses the colon-separated list to append valuable content metadata without verbosity. Every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 simple parameters, no nested objects) and absence of output schema or annotations, the description is minimally viable. It compensates for the missing output schema by listing the template components, but fails to address error scenarios, jurisdiction-specific behavior, or the relationship between the input activity and output template structure.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the activity parameter including helpful examples and jurisdiction specifying the ISO format and default value. The description mentions 'farm activity' which aligns with the required parameter, but adds no additional semantic guidance (e.g., that jurisdiction changes legal requirements) beyond what the schema already provides. Baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Get) and resource (risk assessment template) with scope (farm activity). The colon-separated list of template components (hazards, controls, residual risk, review frequency) adds specific detail about what the tool returns. It implicitly distinguishes from siblings like get_coshh_requirements or get_machinery_safety by focusing on general risk assessment templates rather than specific safety domains.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the other safety-related siblings (e.g., when to use this generic template versus get_livestock_handling_safety or search_safety_guidance). There are no prerequisites, conditions, or explicit alternatives mentioned.
list_sources (Grade: A)
List all data sources with authority, URL, license, and freshness info.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses what data fields are returned (authority, URL, license, freshness), hinting at the data structure. However, it omits whether this requires authentication, if results are paginated, cached, or the expected response format.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, efficiently structured sentence that is front-loaded with the action and specifies the payload details without redundancy. Every clause adds value: the verb ('List'), scope ('all data sources'), and specific attributes included in the response.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (zero parameters, no output schema), the description adequately explains what the tool retrieves by enumerating the metadata fields. However, it could improve by clarifying the domain context (agricultural safety regulations) implied by the sibling tools.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters with 100% schema description coverage. Per evaluation rules, zero-parameter tools have a baseline score of 4. The description appropriately focuses on return value semantics rather than non-existent parameters.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (List) and resource (data sources), and specifies the metadata fields returned (authority, URL, license, freshness). It implicitly distinguishes from content-retrieval siblings (e.g., get_coshh_requirements, search_safety_guidance) by focusing on source metadata rather than safety content, but lacks explicit contrast with check_data_freshness.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives like check_data_freshness, or prerequisites for invocation. The description assumes the agent knows when listing sources is appropriate without contextual cues.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
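The metadata fields named above suggest a per-source record shaped roughly as follows. The field names are taken from the description; the exact structure and the values shown are hypothetical, since the tool publishes no output schema:

```python
# Hypothetical shape of a single entry returned by the source-listing tool,
# inferred from the fields named in the description (authority, URL, license,
# freshness). The real payload structure is not documented and may differ.
source_entry = {
    "authority": "Health and Safety Executive (HSE)",
    "url": "https://www.hse.gov.uk/agriculture/",
    "license": "Open Government Licence v3.0",
    "freshness": "2024-11-01",  # e.g. a last-verified date
}
```

Publishing a schema like this alongside the description would let agents know what to parse without a trial call.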
search_safety_guidance
Full-text search across all farm safety guidance including machinery, livestock, COSHH, and risk assessments.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Free-text search query (e.g. "tractor rollover", "slurry gas") | |
| topic | No | Filter by topic (e.g. machinery, livestock, coshh) | |
| jurisdiction | No | ISO 3166-1 alpha-2 country code | GB |
| limit | No | Max results (max: 50) | 10 |
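An invocation of this tool can be sketched as a JSON-RPC 2.0 `tools/call` request, the envelope shape defined by the MCP specification. The argument names come from the parameter table above; the values are illustrative:

```python
import json

# Sketch of an MCP "tools/call" request for search_safety_guidance.
# Envelope per the MCP JSON-RPC transport; argument values are examples only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_safety_guidance",
        "arguments": {
            "query": "slurry gas",     # required free-text query
            "topic": "livestock",      # optional topic filter
            "jurisdiction": "GB",      # ISO 3166-1 alpha-2 (the default)
            "limit": 5,                # max results (capped at 50)
        },
    },
}
print(json.dumps(request, indent=2))
```

Only `query` is required, so the three optional arguments can be omitted for a broad first pass and added to narrow the results.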
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention critical operational details: whether results are relevance-ranked, what data fields are returned, pagination behavior, rate limits, or authorization requirements. It only discloses the content scope (which topics are included).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence with zero redundancy. It front-loads the action ('Full-text search'), specifies the resource, and parenthetically lists covered domains. Every word earns its place; no restructuring would improve clarity without adding length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with no output schema and no annotations, the description adequately covers the domain scope but leaves significant gaps. It should clarify the return format (titles? snippets? full documents?) and explicitly differentiate from the specific 'get_' siblings. It meets minimum viability but lacks the behavioral context needed for an annotation-less tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage with clear examples (e.g., 'tractor rollover', 'slurry gas', 'ISO 3166-1 alpha-2 code'). Since the schema fully documents all four parameters including the optional ones, the description baseline is 3; the description text itself adds no parameter-specific guidance, but none is needed given the schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a 'full-text search' across 'farm safety guidance' and enumerates the specific domains covered (machinery, livestock, COSHH, risk assessments). It effectively positions this as the comprehensive search tool versus specific retrieval siblings, though it doesn't explicitly state 'use this when you need to search across all topics'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the scope ('across all') implies this is for broad queries rather than the specific topic siblings (get_coshh_requirements, get_machinery_safety, etc.), there is no explicit when-to-use guidance, when-not-to-use, or mention of alternatives. The agent must infer the distinction from the word 'search' versus 'get' in sibling names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Verify Ownership
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [
{
"email": "your-email@example.com"
}
]
}The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
- Control your server's listing on Glama, including description and metadata
- Receive usage reports showing how your server is being used
- Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
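The failure reasons above can be triaged from the HTTP response (or lack of one) when probing the server URL yourself. The status-code mapping below is an assumption for illustration, not Glama's actual health-check logic:

```python
# Triage helper mirroring the failure reasons listed above.
# The mapping from HTTP status to cause is an illustrative assumption.
def diagnose(status):
    """Map an HTTP status code (or None for no response) to a likely cause."""
    if status is None:
        return "outage or wrong URL: no HTTP response received"
    if status in (401, 403):
        return "credentials missing or invalid"
    if status == 404:
        return "the URL of the server is wrong"
    if 200 <= status < 300:
        return "healthy"
    return f"unexpected HTTP status {status}"

print(diagnose(401))  # → credentials missing or invalid
```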