Glama

Server Details

UK veterinary medicines — VMD database, withdrawal periods, cascade rules

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Ansvar-Systems/uk-vet-medicines-mcp
GitHub Stars: 0

Tool Descriptions: B

Average 3.6/5 across 10 of 10 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 4/5

Tools are largely distinct, though search_authorised_medicines and search_by_active_substance serve similar discovery purposes with different query patterns. get_medicine_details and get_withdrawal_period overlap slightly (both provide withdrawal data), but the descriptions clarify that one is comprehensive while the other is a critical food-safety verification tool.

Naming Consistency: 4/5

Follows a consistent verb_noun pattern (check_, get_, search_, list_) with the exception of 'about', which breaks convention. There is minor structural variation between 'search_authorised_medicines' and 'search_by_active_substance', but the set is overall readable and predictable.

Tool Count: 5/5

Ten tools appropriately covers the UK veterinary medicines domain without bloat. Good balance between discovery (2 tools), specific retrieval (3 tools), compliance guidance (2 tools), and metadata/administration (3 tools).

Completeness: 4/5

Strong coverage of core regulatory workflows: medicine search/discovery, withdrawal periods, banned substances, prescribing cascade, and record-keeping obligations. Minor gaps might include drug interaction checking or species-specific authorization queries, but the surface supports the primary compliance use cases effectively.

Available Tools

10 tools
about: A

Get server metadata: name, version, coverage, data sources, and links.

Parameters: none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses what data is returned (name, version, etc.), but omits safety profile, caching behavior, or rate limits typical for metadata endpoints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single concise sentence with front-loaded action ('Get server metadata') followed by colon-separated specifics. Zero waste, highly scannable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description compensates by enumerating returned fields (name, version, coverage, data sources, links). Sufficient for a simple metadata tool, though structure/format remains unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters, establishing a baseline of 4 per evaluation rules. The description appropriately requires no parameter explanation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('server metadata'), and clearly distinguishes from medicine-specific siblings by listing server-centric fields (name, version, coverage, data sources, links).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Context is clear that this returns server information versus medicine data (unlike siblings), but lacks explicit 'when to use' guidance or mention of alternatives like list_sources for specific data source queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_cascade_rules: A

Get the veterinary prescribing cascade steps with default withdrawal periods and documentation requirements.

Parameters:
- species (required): Target species
- condition (required): Clinical condition being treated
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
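
The parameters above can be assembled into an MCP `tools/call` request. A minimal sketch follows; the tool and parameter names come from the schema, while the argument values and the request id are illustrative only:

```python
import json

# Hypothetical MCP "tools/call" request for check_cascade_rules.
# species and condition are required; jurisdiction defaults to GB.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_cascade_rules",
        "arguments": {
            "species": "sheep",       # required
            "condition": "footrot",   # required, illustrative value
            "jurisdiction": "GB",     # optional; GB is the documented default
        },
    },
}

print(json.dumps(request, indent=2))
```
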
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully indicates what data is returned (cascade steps, default withdrawal periods, documentation requirements) but omits operational details like data freshness, authorization requirements, or whether results are jurisdiction-specific beyond the parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single-sentence description is information-dense with zero waste. It front-loads the core resource (cascade steps) and efficiently bundles the two key data categories (withdrawal periods, documentation requirements) without filler words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 100% schema coverage and moderate complexity (3 parameters), the description adequately covers the tool's purpose. However, without an output schema or annotations, it should ideally disclose behavioral traits like data authority or update frequency to achieve higher completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all three parameters (species, condition, jurisdiction) well-documented in the input schema. The description implies the subject matter (species, condition) through context but does not need to repeat parameter details already covered by the schema, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific domain terminology ('veterinary prescribing cascade steps') combined with clear outputs ('withdrawal periods', 'documentation requirements'). It effectively distinguishes from siblings like get_withdrawal_period (which lacks cascade context) and search_authorised_medicines (which finds approved drugs, not cascade rules).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the term 'cascade' implies use when no authorized veterinary medicine exists, the description lacks explicit when-to-use guidance contrasting it with search_authorised_medicines. No prerequisites or exclusions are stated, though the domain-specific terminology provides implied context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_data_freshness: A

Check when data was last ingested, staleness status, and how to trigger a refresh.

Parameters: none

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It lists returned information (ingestion time, staleness, refresh info) but fails to clarify critical behavioral ambiguity: does 'how to trigger a refresh' mean the tool performs the refresh or merely returns instructions? Also omits side effects, idempotency, or performance characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. Front-loaded with the action verb, followed by three distinct information domains (ingestion timestamp, staleness state, refresh mechanism). Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool without output schema, the description adequately conveys what information is returned (timestamps, status, refresh guidance). Minor deduction for ambiguity regarding whether the tool triggers refreshes or only describes the process.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema contains zero parameters. With no parameters to describe, the baseline score applies. The description does not need to compensate for missing schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Check' paired with clear resources (data ingestion time, staleness status, refresh methods). Effectively distinguishes from medicine-focused siblings (get_medicine_details, search_authorised_medicines, etc.) by identifying this as a metadata/operational health tool rather than a domain data query.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context through 'staleness status' and refresh triggering, suggesting use when data currency is questioned. However, lacks explicit 'when to use vs alternatives' guidance or prerequisites (e.g., 'use before querying if data seems outdated').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_banned_substances: B

List substances prohibited for use in food-producing animals. Use of banned substances is a criminal offence.

Parameters:
- species (optional): Filter by species
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- production_type (optional): Filter by production type (e.g. food-producing, growth promotion)
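
Given the criminal-offence warning, an agent would typically use this tool as a guard before recommending a treatment. A minimal sketch, assuming the banned list can be normalised to a set of lowercase substance names (the server's output schema is unspecified, so that shape is an assumption; the example entries are substances prohibited for food-producing animals):

```python
# Hypothetical guard built on get_banned_substances output.
# The set-of-names shape and the example entries are assumptions.
def is_banned(substance, banned):
    """Case-insensitive membership check against a banned-substance set."""
    return substance.strip().lower() in banned

banned = {"chloramphenicol", "metronidazole"}  # illustrative entries
print(is_banned("Chloramphenicol", banned))    # True
```
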
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It adds critical behavioral context that 'Use of banned substances is a criminal offence,' alerting the agent to legal implications. However, it lacks other behavioral traits like read-only status, pagination, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise with two sentences. The first establishes purpose; the second provides essential legal warning. Every sentence earns its place with no redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple 3-parameter list operation with no nested objects. The criminal warning compensates somewhat for lack of annotations. However, with no output schema provided, the description could have described the return format (e.g., list of substance names/IDs) to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (all three parameters have descriptions including ISO format for jurisdiction). The description adds no parameter-specific guidance, which is acceptable given the high schema coverage, meeting the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states that the tool lists substances prohibited for use in food-producing animals, with a specific verb (List), resource (substances), and scope (prohibited/banned for food-producing animals). Implicitly distinguishes from sibling 'search_authorised_medicines' via 'prohibited' vs 'authorised' terminology, though it does not explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to select this tool versus siblings like 'search_authorised_medicines' or 'get_medicine_details'. The second sentence warns about criminal offences, which is legal context rather than usage guidance for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_medicine_details: A

Get full product details for a specific medicine by ID, including all withdrawal periods across species.

Parameters:
- medicine_id (required): Medicine ID (use search_authorised_medicines to find IDs)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
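
The schema's cross-reference implies a two-step workflow: search first, then fetch details by ID. A sketch of the chaining step, under the assumption that search hits carry an "id" field (the output schema is unspecified):

```python
# Hypothetical chaining from search_authorised_medicines results to a
# get_medicine_details call. The "id" field on search hits is an
# assumption; the example result is made up.
def build_detail_call(search_results):
    """Build a get_medicine_details call from the first search hit, or None."""
    if not search_results:
        return None
    return {
        "name": "get_medicine_details",
        "arguments": {
            "medicine_id": search_results[0]["id"],
            "jurisdiction": "GB",  # optional; GB is the documented default
        },
    }

call = build_detail_call([{"id": "vm-12345", "name": "Example 100 mg/ml"}])
```
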
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It adds valuable behavioral context by specifying that withdrawal periods across species are included in the response. However, it lacks information on error handling (invalid ID scenarios), authentication requirements, rate limits, or whether the operation is read-only (implied by 'Get' but not stated).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single compact sentence, front-loaded with action verb ('Get') and resource ('product details'). Every word earns its place: 'full' clarifies comprehensiveness, 'by ID' specifies lookup method, 'including all withdrawal periods across species' differentiates from partial data tools. Zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 100% schema coverage and only 2 parameters, the description adequately explains what the tool retrieves. It appropriately focuses on the data content (withdrawal periods) rather than parameter mechanics. Minor gap: no mention of error cases (invalid ID) or explicit distinction from get_withdrawal_period sibling, which would be helpful given the similar functionality.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with medicine_id and jurisdiction fully documented in the schema (including the helpful cross-reference to search_authorised_medicines). The description adds no specific parameter semantics beyond what the schema provides, which is appropriate given the high schema coverage. Baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with clear resource 'product details for a specific medicine by ID' and clarifies scope with 'including all withdrawal periods across species'. Distinguishes from sibling search tools (search_authorised_medicines) by specifying ID-based retrieval vs search, and hints at distinction from get_withdrawal_period by noting it returns withdrawal periods as part of full details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance in description text. The input schema contains a cross-reference to search_authorised_medicines for finding IDs, but the description itself does not explain the workflow or contrast with get_withdrawal_period sibling. Usage is implied by the name and 'by ID' phrasing but not explicitly guided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_medicine_record_requirements: C

Get medicine record-keeping obligations for food-producing animal holdings.

Parameters:
- species (optional): Filter by species
- holding_type (optional): Filter by holding type (e.g. farm, smallholding)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to specify the return format (structured data vs. regulatory text), whether the operation is idempotent, or that jurisdiction defaults to GB when omitted. It only implies a read operation through the verb 'Get' without confirming safety or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single-sentence description is efficiently structured with the verb front-loaded and zero redundant words. However, extreme brevity comes at the cost of omitting behavioral context that would assist agent decision-making, preventing a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description should explain what the tool returns (e.g., legal text, retention periods, specific data fields required) and clarify that all parameters are optional with jurisdiction defaulting to GB. The current one-line description is insufficient for a regulatory compliance tool with three filtering parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, establishing a baseline score. The description adds no parameter-specific context (e.g., explaining that holding_type refers to the scale of operation, or that jurisdiction uses ISO codes), but the schema adequately documents the three optional filters without requiring additional description support.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and clearly identifies the resource as 'medicine record-keeping obligations' for 'food-producing animal holdings.' It sufficiently distinguishes from siblings like get_medicine_details (which returns medicine specifications) by focusing on regulatory documentation requirements rather than substance properties.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as get_medicine_details or get_banned_substances. It omits context about whether this is for compliance checking, pre-treatment verification, or audit preparation, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_withdrawal_period: A

Get the withdrawal period for a specific medicine and species. CRITICAL for food safety — always verify against the actual SPC.

Parameters:
- species (required): Target species (e.g. cattle, sheep, pigs)
- medicine_id (required): Medicine ID
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- product_type (optional): Product type (e.g. meat, milk)
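
For a safety-critical tool like this, a client would sensibly validate arguments against the schema before calling. A minimal sketch based only on the required/optional split above (the parameter names are from the schema; the validation helper itself is hypothetical):

```python
# Client-side argument check for get_withdrawal_period, derived from
# the schema above: species and medicine_id required, the rest optional.
REQUIRED = {"species", "medicine_id"}
OPTIONAL = {"jurisdiction", "product_type"}

def validate_args(args):
    """Return a list of problems; an empty list means the call can proceed."""
    problems = [f"missing required parameter: {p}"
                for p in sorted(REQUIRED - args.keys())]
    problems += [f"unknown parameter: {p}"
                 for p in sorted(args.keys() - REQUIRED - OPTIONAL)]
    return problems

print(validate_args({"species": "cattle"}))
# ['missing required parameter: medicine_id']
```
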
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. The SPC verification warning implies data may be cached or advisory rather than authoritative, which is valuable behavioral context. However, lacks details on return format, error handling, or data freshness guarantees expected for safety-critical tools.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first states purpose, second delivers critical safety context. Front-loaded and appropriately sized for the tool's complexity. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 4-parameter retrieval tool: covers purpose, safety criticality, and verification requirements. Gaps remain regarding return value description (absent output schema), error cases (e.g., medicine not found), and the significance of optional parameters like product_type.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline of 3. Description reinforces the two required parameters (medicine and species) in the prose but does not add semantic nuance beyond the schema, such as explaining that jurisdiction defaults to GB or how product_type affects results.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb 'Get' with resource 'withdrawal period' and scope 'for a specific medicine and species'. Clearly distinguishes from sibling tools like get_medicine_details (general info) or search_authorised_medicines (broad search) by focusing specifically on withdrawal periods.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides critical context 'CRITICAL for food safety' indicating high-stakes usage scenarios, and mandates verification workflow 'always verify against the actual SPC'. Lacks explicit comparison to alternatives like get_medicine_details for general drug information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_sources: A

List all data sources with authority, URL, license, and freshness info.

Parameters: none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry the full burden of behavioral disclosure. It compensates adequately by specifying the return fields (authority, URL, license, freshness) since no output schema exists, but fails to mention safety characteristics (read-only status), rate limits, or performance characteristics that annotations would typically cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, information-dense sentence with no filler. It efficiently packs the action, scope, and return value details into minimal space, with every clause earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters) and lack of output schema, the description appropriately compensates by detailing the returned data fields. It successfully communicates what the agent will receive back, which is the critical missing piece. A perfect score would require explicit safety/disposition notes given the absence of annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, which per evaluation rules establishes a baseline score of 4. The description correctly omits parameter discussion since none exist, and does not need to compensate for schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (List), resource (data sources), and specific return fields (authority, URL, license, freshness). However, it does not explicitly differentiate from sibling tool 'check_data_freshness' which also deals with freshness data, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives like 'check_data_freshness' or 'about'. There are no 'when-to-use' or 'when-not-to-use' clauses, leaving the agent to infer appropriate usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_authorised_medicines: A

Search VMD-authorised veterinary medicines. Full-text search across product names, active substances, and species. Use for broad queries.

Parameters:
- query (required): Free-text search query (product name, substance, or condition)
- limit (optional): Max results (default: 20, max: 50)
- species (optional): Filter by species (e.g. cattle, sheep, pigs)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- active_substance (optional): Filter by active substance (e.g. oxytetracycline)
- pharmaceutical_form (optional): Filter by form (e.g. injection, intramammary)
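
A caller combining the free-text query with the optional filters might build arguments like this. A sketch only: the limit bounds (default 20, max 50) come from the schema above, while the helper function and example values are assumptions:

```python
# Hypothetical argument builder for search_authorised_medicines.
# Clamps limit to the documented 1..50 range and drops empty filters.
def build_search_args(query, limit=20, **filters):
    args = {"query": query, "limit": max(1, min(limit, 50))}
    args.update({k: v for k, v in filters.items() if v})
    return args

args = build_search_args("mastitis", limit=100, species="cattle",
                         pharmaceutical_form="intramammary")
# limit is clamped to the schema maximum of 50
```
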
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully indicates the full-text search nature and searchable fields, but fails to mention safety properties (read-only status), rate limits, or what happens when no results match.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: sentence 1 defines the resource, sentence 2 explains the search mechanism, and sentence 3 provides usage guidance. Information is front-loaded and every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description adequately covers the search mechanism for a 6-parameter tool with good schema documentation. However, without an output schema, the description should ideally describe what the search returns (e.g., medicine summaries, IDs) to enable effective chaining with siblings like get_medicine_details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Although schema coverage is 100%, the description adds valuable semantic context by clarifying that the query parameter performs full-text search across 'product names, active substances, and species'—explaining the relationship between the general query field and the specific filter parameters (species, active_substance).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool searches 'VMD-authorised veterinary medicines' using 'full-text search across product names, active substances, and species.' This provides a specific verb, resource, and scope. The phrase 'Use for broad queries' differentiates its intended use case from sibling tools like search_by_active_substance.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a positive usage hint ('Use for broad queries'), giving context for when to select this tool. However, it lacks explicit negative guidance (when not to use), prerequisites, or named alternatives despite the presence of sibling search tools.

search_by_active_substance (grade: B)

Find all authorised products containing a specific active substance. Also checks if the substance is banned.

Parameters (JSON Schema)

- species (optional): Filter by authorised species
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- active_substance (required): Active substance name (e.g. oxytetracycline, meloxicam)
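The when-to-use guidance that the review below finds missing could be supplied client-side. The Python sketch that follows is a hypothetical routing helper, not part of this server: it prefers search_by_active_substance when the caller already knows the substance, and falls back to the broad full-text search otherwise.

```python
# Hypothetical routing helper illustrating "use X instead of Y when Z"
# disambiguation between the two search tools. The function name and the
# routing rule are assumptions for illustration, not server behaviour.
def pick_search_tool(query, known_substance=None):
    if known_substance:
        # The narrower tool also reports whether the substance is banned.
        return {
            "name": "search_by_active_substance",
            "arguments": {"active_substance": known_substance, "jurisdiction": "GB"},
        }
    # Otherwise fall back to full-text search across names, substances, species.
    return {
        "name": "search_authorised_medicines",
        "arguments": {"query": query, "jurisdiction": "GB"},
    }

print(pick_search_tool("mastitis treatment for cattle")["name"])
```

Encoding the rule in one place keeps the choice consistent even when tool descriptions themselves give no negative guidance.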
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the secondary banned-substance check, which is valuable behavioral context. However, it lacks details on return format, pagination, case sensitivity, or what happens when no products are found.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The primary function is front-loaded, and the secondary banned-check feature follows naturally. Every word earns its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter search tool with 100% schema coverage but no output schema, the description is minimally adequate. It could benefit from describing the return structure or clarifying that 'authorised products' refers to medicines, but the core functionality is covered.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'specific active substance' which aligns with the required parameter, but adds no additional semantic context for the optional 'species' or 'jurisdiction' filters beyond what the schema already provides.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool finds authorised products by active substance and performs a banned substance check. However, it doesn't explicitly differentiate from sibling 'search_authorised_medicines' or clarify the domain (veterinary vs human medicines).

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus 'search_authorised_medicines' (which searches by medicine name) or 'get_banned_substances'. No prerequisites or exclusion criteria are mentioned.
