Glama

Server Details

UK livestock welfare, feed, health, and movement rules — 8 species, DEFRA codes

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Ansvar-Systems/uk-livestock-mcp
GitHub Stars: 0

Tool Descriptions (Grade B)

Average 3.5/5 across 11 of 11 tools scored. Lowest: 2.9/5.

Server Coherence (Grade A)
Disambiguation: 4/5

Most tools have distinct domains (breeding, feed, housing, movement), but search_animal_health and search_livestock_guidance overlap on health content, and both search tools overlap slightly with specific getters like get_welfare_standards. Descriptions clarify boundaries, but agents might hesitate between the two search tools.

Naming Consistency: 5/5

Excellent consistency throughout: all snake_case with clear verb_noun patterns (get_*, search_*, list_*, check_*). Metadata tools (about, check_data_freshness) follow the same convention as domain tools, creating a predictable interface.

Tool Count: 5/5

Eleven tools is ideal for this scope. The set covers reference data (breeding, feed, housing, welfare, movement, stocking), search capabilities, and metadata without bloat. Parameterizing by species rather than splitting into per-species tools keeps the count lean and appropriate.

Completeness: 4/5

Strong coverage of UK livestock management domains (DEFRA welfare codes, APHA movement rules, feeding standards). Minor gaps might include specific medicine withdrawal periods or environmental permitting, but the search tools provide fallback coverage. No dead ends for core livestock guidance workflows.

Available Tools

11 tools
about (Grade A)

Get server metadata: name, version, coverage, data sources, and links.

Parameters: none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It effectively discloses return content by listing specific metadata fields (name, version, coverage, data sources, links), compensating for the missing output schema. It implies read-only behavior via 'Get'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, efficiently structured sentence with colon-separated enumeration. Every element earns its place—verb, resource, and specific return fields are all front-loaded with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple metadata endpoint with no parameters, the description is complete. It compensates for the lack of output schema by explicitly listing all returned metadata fields, giving agents sufficient information to utilize the response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, which warrants a baseline score of 4. The description appropriately does not invent parameters where none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('server metadata'), and clearly distinguishes this tool from livestock-specific siblings by enumerating metadata fields (name, version, coverage, data sources, links) rather than animal data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through domain contrast (server metadata vs. livestock data), but lacks explicit guidance such as when to call this before using other tools or prerequisites. No alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_data_freshness (Grade A)

Check when data was last ingested, staleness status, and how to trigger a refresh.

Parameters: none

Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses that the tool returns temporal information (last ingested), status flags (staleness), and operational guidance (refresh triggers). It does not, however, clarify whether 'trigger a refresh' means it performs the refresh or merely returns instructions, nor does it describe side effects or return format.

Conciseness: 4/5

The single sentence efficiently packs three distinct information types (ingestion time, staleness, refresh method) without redundancy. It is appropriately front-loaded, though slightly ambiguous on whether it returns instructions or performs actions.

Completeness: 3/5

For a parameterless tool, the description adequately explains the tool's scope. However, given the lack of output schema and annotations, it should ideally clarify what format the freshness information takes (timestamp, boolean, structured object) to be fully complete.

Parameters: 4/5

The input schema contains zero parameters. According to the scoring rubric, 0 parameters establishes a baseline score of 4. The description appropriately does not fabricate parameter semantics where none exist.

Purpose: 4/5

The description uses specific verbs ('Check') and resources ('data', 'ingestion', 'staleness') to clearly indicate this is a metadata/freshness monitoring tool. It implicitly distinguishes itself from the livestock guidance siblings (get_breeding_guidance, search_animal_health, etc.) by focusing on data status rather than domain content, though explicitly stating 'livestock guidance data' would make it a 5.

Usage Guidelines: 3/5

While it lacks explicit 'when to use' language, the mention of 'staleness status' and 'how to trigger a refresh' implies usage scenarios (when data may be outdated). However, it doesn't explicitly contrast with alternatives or state prerequisites for invocation.

get_breeding_guidance (Grade A)

Get breeding guidance for a species: gestation periods, breeding calendars, and management advice.

Parameters (JSON Schema)
- topic (optional): Filter by topic (e.g. gestation, mating, lambing)
- species (required): Species ID or name (e.g. sheep, cattle, pigs)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
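Against this schema, only `species` is required; the optional fields narrow the result. A sketch of the JSON-RPC `tools/call` payload an MCP client would send (the request `id` and argument values are illustrative):

```python
import json

# Hypothetical tools/call payload for get_breeding_guidance (MCP, JSON-RPC 2.0).
# Only "species" is required; "topic" and "jurisdiction" are optional filters.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_breeding_guidance",
        "arguments": {
            "species": "sheep",    # required: species ID or name
            "topic": "gestation",  # optional: filter by topic
            # "jurisdiction" omitted -> server default (GB) applies
        },
    },
}

print(json.dumps(request, indent=2))
```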
Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses what information is returned (gestation periods, breeding calendars, management advice), adding context beyond the tool name. However, it omits details about data freshness, caching behavior, or error conditions that would help an agent handle the response appropriately.

Conciseness: 5/5

The description is a single, efficiently structured sentence. It front-loads the core action ('Get breeding guidance for a species') and uses a colon-separated list to specify content types without verbosity. Every element earns its place.

Completeness: 4/5

Given the tool's moderate complexity (3 parameters, 100% schema coverage, no output schema), the description adequately compensates by enumerating the types of guidance returned. It appropriately omits parameter details already covered by the schema, though it could briefly note the jurisdiction-sensitive nature of breeding regulations given the GB default.

Parameters: 3/5

Schema coverage is 100%, establishing a baseline of 3. The description mentions 'species' explicitly, aligning with the required parameter, and the listed topics (gestation) hint at valid 'topic' filter values. However, it does not explain the 'jurisdiction' parameter's role or provide additional semantic context beyond what the schema already documents.

Purpose: 5/5

The description uses a specific verb ('Get') with clear resource ('breeding guidance') and scope ('for a species'). It effectively distinguishes from siblings like get_feed_requirements, get_housing_requirements, and search_animal_health by specifying breeding-specific outputs (gestation periods, breeding calendars).

Usage Guidelines: 3/5

The description provides implied usage context through the breeding-specific content listed (gestation, calendars), allowing inference that this is for breeding management questions. However, it lacks explicit guidance on when to use this versus the broader search_livestock_guidance or prerequisites like requiring a valid species first.

get_feed_requirements (Grade A)

Get feed and nutrition requirements for a species by age class and production stage.

Parameters (JSON Schema)
- species (required): Species ID or name (e.g. sheep, cattle, pigs)
- age_class (optional): Age class (e.g. adult, lamb, calf, grower)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- production_stage (optional): Production stage (e.g. maintenance, lactation, finishing)
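Since three of the four parameters are optional, a client helper can omit unset fields entirely so the server's own defaults (e.g. the GB jurisdiction default) apply. A minimal sketch; the function name is hypothetical:

```python
def feed_requirements_args(species, age_class=None, production_stage=None, jurisdiction=None):
    """Build the arguments object for a get_feed_requirements call.

    Optional fields left as None are omitted entirely, so the server's own
    defaults (e.g. jurisdiction GB) apply rather than sending an explicit null.
    """
    args = {"species": species}
    if age_class is not None:
        args["age_class"] = age_class
    if production_stage is not None:
        args["production_stage"] = production_stage
    if jurisdiction is not None:
        args["jurisdiction"] = jurisdiction
    return args

print(feed_requirements_args("cattle", age_class="calf"))
```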
Behavior: 2/5

No annotations are provided, so the description must carry the full burden of behavioral disclosure. It fails to mention whether this is a read-only query, what happens if an invalid species is provided, the data source freshness, or the structure of the returned requirements.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads the verb and resource. Every word earns its place with no redundancy or filler content.

Completeness: 3/5

Given the simple flat schema with four well-documented parameters and no output schema, the description adequately covers the tool's purpose. However, it lacks any indication of what the 'requirements' return value contains (nutrient percentages, feed volumes, etc.), which would help agents understand the tool's utility.

Parameters: 3/5

With 100% schema description coverage, the baseline is 3. The description mentions 'age class and production stage' which aligns with schema parameters, but adds no semantic context beyond what the schema already provides (e.g., no explanation of how jurisdiction affects nutritional standards).

Purpose: 5/5

The description clearly states the specific action ('Get') and resource ('feed and nutrition requirements'), and distinguishes itself from siblings like get_breeding_guidance and get_housing_requirements by focusing specifically on nutritional needs rather than breeding, housing, or welfare.

Usage Guidelines: 3/5

While the scope is clear enough to imply when to use it (when feed data is needed), the description provides no explicit guidance on when to use this versus the broader search_livestock_guidance or get_breeding_guidance, nor any prerequisites for the parameters.

get_housing_requirements (Grade C)

Get housing requirements for a species: space per head, ventilation, flooring, temperature, lighting.

Parameters (JSON Schema)
- system (optional): Housing system (e.g. indoor, outdoor)
- species (required): Species ID or name (e.g. sheep, cattle, pigs)
- age_class (optional): Age class (e.g. adult, lamb, calf)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
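The jurisdiction default (GB) appears only in the schema. A client can make it explicit by merging schema defaults into the call arguments before sending, which keeps logs unambiguous about which jurisdiction applied; a small sketch with illustrative names:

```python
# Default taken from the published schema; making it explicit client-side
# keeps logged and cached calls unambiguous about which jurisdiction applied.
SCHEMA_DEFAULTS = {"jurisdiction": "GB"}

def with_defaults(arguments):
    """Merge schema defaults into get_housing_requirements arguments.
    Caller-supplied values win over defaults."""
    return {**SCHEMA_DEFAULTS, **arguments}

print(with_defaults({"species": "pigs", "system": "indoor"}))
```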
Behavior: 2/5

No annotations provided, so description carries full disclosure burden. 'Get' implies read-only but does not confirm idempotency, error handling (what if species unknown?), caching behavior, or that jurisdiction defaults to GB (only visible in schema).

Conciseness: 4/5

Single sentence efficiently communicates intent and return value structure. The colon-separated list of housing factors is scannable. Slightly front-loaded; could mention 'lookup' or 'retrieve' nature explicitly but no wasted words.

Completeness: 3/5

Acceptable for a 4-parameter lookup tool without output schema. Describes what data is returned, but given lack of annotations and output schema, should mention return format (structured vs text) or default jurisdiction behavior explicitly.

Parameters: 3/5

Schema description coverage is 100%, so baseline applies. Description focuses on output fields rather than inputs, but with full schema coverage for species, system, age_class, and jurisdiction, no additional parameter explanation is required in the description.

Purpose: 4/5

Clear verb-resource combo ('Get housing requirements for a species') with specific output fields enumerated (space per head, ventilation, flooring, temperature, lighting). Distinguishes from get_feed_requirements but does not clarify boundaries with siblings get_welfare_standards or get_stocking_density which conceptually overlap.

Usage Guidelines: 2/5

No explicit when-to-use guidance or alternatives mentioned. Given siblings like get_welfare_standards and get_stocking_density cover overlapping domains (space per head vs stocking density), the description should clarify when to prefer this tool over those.

get_movement_rules (Grade B)

Get livestock movement rules including standstill periods, exceptions, and APHA regulation references. Critical for disease control compliance.

Parameters (JSON Schema)
- species (required): Species ID or name (e.g. sheep, cattle, pigs)
- rule_type (optional): Filter by rule type (e.g. standstill, reporting, identification)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
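A hedged sketch of assembling the `tools/call` parameters for this tool, with a client-side check against the `rule_type` examples. The schema's examples may not be an exhaustive enum, so treat the check as illustrative rather than authoritative:

```python
# Example values copied from the schema; they may not be an exhaustive enum,
# so this guard is illustrative rather than authoritative.
KNOWN_RULE_TYPES = {"standstill", "reporting", "identification"}

def movement_rules_call(species, rule_type=None):
    """Assemble the tools/call params object for get_movement_rules."""
    if rule_type is not None and rule_type not in KNOWN_RULE_TYPES:
        raise ValueError(f"unrecognised rule_type: {rule_type!r}")
    args = {"species": species}
    if rule_type is not None:
        args["rule_type"] = rule_type
    return {"name": "get_movement_rules", "arguments": args}

print(movement_rules_call("sheep", rule_type="standstill"))
```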
Behavior: 2/5

With no annotations provided, the description carries full burden for behavioral disclosure but fails to specify return format (structured data vs text), authentication requirements, rate limits, or error handling behavior. It only describes the semantic content of the data retrieved, not operational characteristics.

Conciseness: 4/5

Two efficient sentences with core action front-loaded. The first sentence clearly defines scope; the second provides domain context. Minor deduction because 'Critical for disease control compliance' is slightly generic compared to the specific first sentence, but overall well-structured with no redundant text.

Completeness: 3/5

Adequate for a simple 3-parameter retrieval tool with complete schema documentation. However, given the lack of output schema and annotations, the description could have improved completeness by describing the return structure (e.g., whether it returns rule text, IDs, or structured exceptions) or noting data freshness considerations relevant to regulatory compliance.

Parameters: 3/5

Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'standstill periods' (correlating to rule_type parameter) and 'APHA regulation references' (hinting at UK/GB jurisdiction context), but does not add syntax details, parameter interactions, or format guidance beyond what the schema already provides.

Purpose: 5/5

The description uses specific verb 'Get' with clear resource 'livestock movement rules' and enumerates specific content types (standstill periods, exceptions, APHA references). It effectively distinguishes from siblings like get_feed_requirements or get_breeding_guidance by focusing on regulatory movement compliance rather than husbandry aspects.

Usage Guidelines: 3/5

The phrase 'Critical for disease control compliance' provides implied usage context (use when handling disease control/compliance scenarios), but lacks explicit when-not guidance or named alternatives. It does not clarify when to use this versus search_livestock_guidance or search_animal_health for broader queries.

get_stocking_density (Grade B)

Get stocking density requirements for a species by age class and housing type.

Parameters (JSON Schema)
- species (required): Species ID or name (e.g. sheep, cattle, pigs)
- age_class (optional): Age class (e.g. adult, lamb, calf, piglet)
- housing_type (optional): Housing type (e.g. indoor, outdoor)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
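The `jurisdiction` parameter expects an ISO 3166-1 alpha-2 code. A client can cheaply reject malformed codes before calling; this shallow syntactic check only verifies the shape, not that the code is actually assigned:

```python
import re

def valid_jurisdiction(code):
    """Shallow syntactic check for an ISO 3166-1 alpha-2 code: exactly two
    uppercase ASCII letters. It does not confirm the code is assigned."""
    return bool(re.fullmatch(r"[A-Z]{2}", code))

print(valid_jurisdiction("GB"), valid_jurisdiction("GBR"))
```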
Behavior: 2/5

No annotations are provided, so the description carries full burden of behavioral disclosure. While 'Get' implies a read-only operation, the description does not clarify error handling (e.g., invalid species), rate limits, or what format the stocking density requirements take (numeric, textual, regulatory).

Conciseness: 5/5

Single sentence, front-loaded with the verb and resource. No redundant or filler words; every word earns its place.

Completeness: 3/5

For a lookup tool with 4 parameters and 100% schema coverage, the description is minimally adequate. However, lacking an output schema or any description of return values, error states, or default behaviors (like the GB jurisdiction default mentioned only in schema), there are meaningful gaps.

Parameters: 3/5

With 100% schema description coverage, the structured data already explains all parameters thoroughly. The description mentions 'age class and housing type' which maps to parameters, but adds no semantic value beyond what the schema provides (baseline 3).

Purpose: 4/5

The description clearly states the tool retrieves 'stocking density requirements' (specific resource) for a species using age class and housing type as filters. However, it does not explicitly differentiate from siblings like get_housing_requirements or get_welfare_standards, which could overlap conceptually.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like get_housing_requirements or get_welfare_standards, nor does it mention prerequisites or exclusion criteria.

get_welfare_standards (Grade A)

Get welfare standards for a species. Returns both legal minimum requirements and best practice recommendations from DEFRA welfare codes.

Parameters (JSON Schema)
- species (required): Species ID or name (e.g. sheep, cattle, pigs)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- production_system (optional): Filter by production system (e.g. indoor, outdoor, free-range)
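A sketch of the request payload with the optional `production_system` filter applied; the argument values and request `id` are illustrative:

```python
import json

# Hypothetical tools/call payload for get_welfare_standards.
# Only "species" is required; the other two fields narrow the results.
welfare_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_welfare_standards",
        "arguments": {
            "species": "cattle",
            "production_system": "free-range",  # optional filter
        },
    },
}

print(json.dumps(welfare_request, indent=2))
```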
Behavior: 3/5

No annotations provided, so description carries full burden. Adds valuable context about data source (DEFRA welfare codes) and return structure (legal minimums + best practices). However, lacks operational details like error handling for invalid species, caching behavior, or geographic limitations despite DEFRA being UK-specific.

Conciseness: 5/5

Two sentences, zero waste. First sentence establishes action and resource; second sentence clarifies return content and authoritative source. Information density is optimal with no redundant words.

Completeness: 4/5

Given 100% schema coverage and no output schema, the description adequately compensates by describing return content (legal requirements + recommendations). Could improve by noting UK-specific scope (DEFRA reference + GB default) explicitly for agents unfamiliar with the acronym.

Parameters: 3/5

Schema description coverage is 100% (all 3 parameters documented), establishing baseline 3. Description adds no explicit parameter guidance (e.g., valid examples for production_system), but schema is self-sufficient.

Purpose: 5/5

Specific verb ('Get') + resource ('welfare standards') + scope ('for a species'). Explicitly distinguishes from siblings like get_housing_requirements or get_feed_requirements by specifying it covers 'legal minimum requirements and best practice recommendations' from DEFRA codes.

Usage Guidelines: 3/5

Implies usage through content description (legal minimums vs best practices) but provides no explicit when-to-use guidance versus siblings like get_breeding_guidance or get_stocking_density. No prerequisites or exclusions stated.

list_sources (Grade B)

List all data sources with authority, URL, license, and freshness info.

Parameters: none

Behavior: 3/5

No annotations provided, so description carries full disclosure burden. It compensates partially by listing return fields (authority, URL, license, freshness) not defined in an output schema. However, it omits safety indicators (read-only vs. cached vs. live call) and behavioral constraints (pagination, rate limits).

Conciseness: 5/5

Single, efficient sentence. Information is front-loaded with the action ('List all data sources') followed by specific attributes, containing zero redundancy or filler.

Completeness: 3/5

Adequate for a zero-parameter tool but missing safety context (read-only assurance) and usage relationships (e.g., connection to the check_data_freshness sibling). Given no output schema exists, mentioning return fields helps, but completeness would benefit from noting scope limits or cache behavior.

Parameters: 4/5

Zero parameters with 100% schema coverage (empty object). Baseline 4 applies as there are no parameters requiring semantic clarification in the description text.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('List') and resource ('data sources') with specific return fields (authority, URL, license, freshness). Implicitly distinguishes from content-retrieval siblings (get_* guidance tools) by focusing on metadata discovery, though it doesn't explicitly state this is a discovery/catalog tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use versus siblings or prerequisites. While the sibling tools suggest this is for metadata discovery, the description lacks 'when to use' or 'see also check_data_freshness' pointers that would help an agent navigate the tool set.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_animal_health (A)

Search animal health conditions, diseases, symptoms, and treatments. Notifiable diseases are flagged.

Parameters (JSON Schema)

- query (required): Search term (condition name, symptom, or cause)
- species (optional): Filter by species (e.g. sheep, cattle, pigs)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
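The parameter table above maps directly onto a standard MCP `tools/call` request. A minimal sketch of the JSON-RPC payload an agent would send is below; the query, species, and jurisdiction values are illustrative, not taken from the server:

```python
import json

# JSON-RPC 2.0 envelope for an MCP tools/call request.
# Tool and parameter names come from the table above; the
# argument values ("bluetongue", "sheep") are made-up examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_animal_health",
        "arguments": {
            "query": "bluetongue",    # required: condition, symptom, or cause
            "species": "sheep",       # optional species filter
            "jurisdiction": "GB",     # optional; GB is the documented default
        },
    },
}

payload = json.dumps(request)
```

Only `query` is required; omitting `species` and `jurisdiction` is valid and falls back to the documented defaults.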
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It adds value by noting 'Notifiable diseases are flagged' (result formatting behavior), but omits safety profile (read-only vs destructive), rate limits, authentication requirements, or return structure details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence establishes core functionality; second sentence adds critical domain-specific behavior (notifiable disease flagging). Front-loaded with action verb and appropriately scoped.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 3-parameter search tool with complete schema documentation, but gaps remain due to missing output schema and zero annotations. The description hints at result formatting (notifiable disease flags) but should ideally clarify return structure, volume limits, or data freshness given the lack of structured output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with detailed descriptions for the query, species, and jurisdiction parameters. The description adds minimal semantic detail beyond the schema, relying on the structured documentation to carry parameter meaning. The baseline score is appropriate for this level of schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Search' with clear resource scope (animal health conditions, diseases, symptoms, treatments). It implicitly distinguishes from sibling 'search_livestock_guidance' by focusing specifically on health/disease/treatment content rather than general husbandry guidance.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description establishes what the tool searches but provides no explicit guidance on when to use this versus 'search_livestock_guidance' or other siblings. No prerequisites, exclusions, or alternative workflows are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_livestock_guidance (A)

Search livestock welfare, feed, health, housing, and breeding guidance. Use for broad queries about livestock management.

Parameters (JSON Schema)

- limit (optional): Max results (default: 20, max: 50)
- query (required): Free-text search query
- species (optional): Filter by species (e.g. sheep, cattle, pigs)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
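The documented bounds on `limit` (default 20, maximum 50) are easy to mishandle client-side. A hypothetical helper, not part of this server, that builds an arguments dict while honouring those documented bounds:

```python
def build_guidance_arguments(query, species=None, jurisdiction=None, limit=None):
    """Build an arguments dict for a search_livestock_guidance call.

    Hypothetical client-side helper: applies the documented default (20)
    and cap (50) for limit, and omits unset optional filters so the
    server's own defaults (e.g. jurisdiction GB) apply.
    """
    args = {"query": query}
    if species is not None:
        args["species"] = species
    if jurisdiction is not None:
        args["jurisdiction"] = jurisdiction
    # Documented behavior: default 20, maximum 50.
    args["limit"] = 20 if limit is None else min(limit, 50)
    return args
```

The server presumably enforces its own cap regardless, so this clamping is defensive rather than required.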
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. Beyond stating it performs a search, it reveals nothing about result format, pagination, data freshness, rate limits, or authorization requirements. The phrase 'broad queries' adds minimal behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of exactly two efficient sentences with zero redundancy. The first establishes scope and the second provides usage context, front-loading the most critical information without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 4-parameter search tool with no output schema and no annotations, the description is minimally adequate. It covers the search scope and usage context, but lacks any indication of return values, result structure, or data source characteristics that would help the agent handle the response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (all 4 parameters have descriptions), the baseline is 3. The description lists searchable domains (welfare, feed, etc.) which aligns with the 'query' parameter but adds no syntactic details, format examples, or clarifications beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Search') and clearly identifies the resource (livestock guidance across welfare, feed, health, housing, and breeding). It implicitly distinguishes from sibling 'get_' tools by listing multiple domains that those tools handle individually, though it could explicitly contrast with the 'search_animal_health' sibling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The second sentence ('Use for broad queries about livestock management') provides clear contextual guidance, implying this tool should be used for comprehensive searches rather than the specific 'get_' siblings for targeted retrieval. However, it stops short of explicitly naming those alternatives or stating exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Verify Ownership

Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [
    {
      "email": "your-email@example.com"
    }
  ]
}

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
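If you want to sanity-check the manifest before publishing it, a rough validator sketch is below. It checks only the fields shown in the example above and a loose email shape; it is not an official Glama validator, and real verification happens on Glama's side.

```python
def validate_glama_manifest(manifest):
    """Check the minimal structure shown for /.well-known/glama.json.

    Unofficial sketch: verifies the $schema value and that at least one
    maintainer entry carries a plausible email string.
    """
    if manifest.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        return False
    maintainers = manifest.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        return False
    # Loose email sanity check only; Glama matches the address
    # against your account email during verification.
    return all(
        isinstance(m.get("email"), str) and "@" in m["email"]
        for m in maintainers
    )
```

Run this against the JSON you plan to serve before deploying it to your domain.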
