
Server Details

UK food safety and traceability — direct sales, labelling, raw milk, HACCP

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: Ansvar-Systems/uk-food-safety-mcp
GitHub Stars: 0

Tool Descriptions: A

Average 3.8/5 across 11 of 11 tools scored.

Server Coherence: A

Disambiguation: 4/5

Tools are largely distinct with clear scope boundaries. Minor potential confusion exists between the comprehensive 'get_product_requirements' and specific getters like 'get_hygiene_requirements' or 'get_labelling_requirements', though descriptions clarify that the former provides an overview while the latter offer deep-dives. The 'check_*' vs 'get_*' distinction is subtle but manageable.
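The overview-versus-deep-dive split can be made concrete with a small routing sketch. The tool names are taken from this server's listing; the keyword heuristic itself is purely illustrative, not anything the server provides:

```python
# Illustrative only: route a query to a deep-dive getter when it names a
# specific domain, otherwise fall back to the comprehensive overview tool.
SPECIFIC_GETTERS = {
    "labelling": "get_labelling_requirements",
    "hygiene": "get_hygiene_requirements",
    "traceability": "get_traceability_rules",
}

def pick_tool(query: str) -> str:
    for keyword, tool in SPECIFIC_GETTERS.items():
        if keyword in query.lower():
            return tool
    return "get_product_requirements"

print(pick_tool("What hygiene rules apply to a farm shop?"))  # get_hygiene_requirements
print(pick_tool("Selling honey at the farm gate"))            # get_product_requirements
```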

Naming Consistency: 3/5

Mixed verb usage undermines consistency: 'check' is used for both status verification ('check_data_freshness') and rule retrieval ('check_direct_sales_rules', 'check_raw_milk_rules'), while similar retrieval operations use 'get'. The 'about' tool breaks the verb_noun pattern entirely. All tools use snake_case consistently.
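The mixed verb usage is easy to see by grouping the published tool names by their leading token. The names below are those listed on this page; the search tool's exact name is not shown here, so it is omitted:

```python
# Group this server's tool names (as listed on this page) by leading verb
# to show the naming split the review describes.
tools = [
    "about", "list_sources", "check_data_freshness",
    "check_direct_sales_rules", "check_raw_milk_rules",
    "get_assurance_scheme_requirements", "get_hygiene_requirements",
    "get_labelling_requirements", "get_product_requirements",
    "get_traceability_rules",
]
by_verb: dict[str, list[str]] = {}
for name in tools:
    by_verb.setdefault(name.split("_")[0], []).append(name)

# 'check' mixes status ("check_data_freshness") with rule retrieval,
# while 'about' has no verb_noun structure at all.
print(sorted(by_verb))  # ['about', 'check', 'get', 'list']
```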

Tool Count: 5/5

A set of eleven tools is well suited to the domain, covering meta-operations (about, list_sources, check_data_freshness), search, and specific regulatory areas (hygiene, labelling, traceability, assurance schemes, direct sales, raw milk) without bloat. Each tool earns its place in the UK food safety compliance workflow.

Completeness: 4/5

Strong coverage of core food safety domains including HACCP, traceability, labelling, and sales channels. The inclusion of regional raw milk rules and farm assurance schemes addresses specific UK complexities. Minor gaps may exist for specific approved establishment categories (meat/egg plants) and import controls, though 'get_product_requirements' likely covers these generally.

Available Tools

11 tools
about (A)

Get server metadata: name, version, coverage, data sources, and links.

Parameters: none

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It compensates by disclosing the specific metadata fields returned (name, version, coverage, data sources, links), effectively documenting the output structure. Does not mention auth, rate limits, or caching, but the return value disclosure is the critical behavioral trait for this metadata tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, perfectly front-loaded with action verb, followed by colon-separated list of specific return values. No redundant words or filler; every token earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter metadata tool without output schema, the description is complete. It compensates for missing output schema by enumerating the exact metadata fields returned, providing sufficient information for an agent to decide when to invoke this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present (empty object schema). Per guidelines, 0 params = baseline 4. No parameter description needed or expected.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Get' + resource 'server metadata' with explicit enumeration of returned fields (name, version, coverage, data sources, links). Clearly distinguishes from operational siblings like check_raw_milk_rules or get_product_requirements by focusing on server introspection rather than regulatory content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use guidance or prerequisites mentioned. While the content ('server metadata') implies this is for discovery/capability checking rather than business logic operations, it does not explicitly state to use this before other tools or when to prefer alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_data_freshness (A)

Check when data was last ingested, staleness status, and how to trigger a refresh.

Parameters: none

Behavior: 3/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It clarifies that the tool returns information about refresh triggers rather than performing the refresh itself, but omits details on caching, performance costs, or idempotency that would help an agent understand operational impact.

Conciseness: 5/5

The description is a single, efficient sentence, front-loaded with the action verb 'Check'. Every word earns its place, with no redundancy or wasted phrases.

Completeness: 4/5

For a zero-parameter tool without annotations or output schema, the description adequately compensates by listing the three specific information items returned (ingestion time, staleness status, refresh method). Given the low complexity, this is sufficiently complete for agent selection.

Parameters: 4/5

The input schema contains zero parameters. Per the baseline rules for parameter-less tools, this scores a 4. The description does not need to compensate for missing schema documentation since there are no parameters to document.

Purpose: 4/5

The description clearly states the tool checks data ingestion time, staleness status, and refresh triggers using specific verbs. It implicitly distinguishes itself from content-focused siblings (e.g., get_hygiene_requirements) by focusing on data metadata rather than food safety rules, though it could explicitly scope what 'data' refers to.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. Given that siblings like list_sources might also provide metadata, explicit guidance on when freshness checking is appropriate (e.g., before querying stale regulations) is absent.

check_direct_sales_rules (B)

Check rules for selling food directly from farm gate, farmers market, or online. Covers registration, exemptions, and volume thresholds.

Parameters (JSON Schema):
- product (required): Product ID or name (e.g. eggs, honey, baked-goods)
- volume (optional): Production volume description for exemption checks
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- sales_method (optional): Sales method: farm_gate, farmers_market, online
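
A minimal sketch of a call payload for this tool, using the parameter names from the schema above. The values are examples only, and the surrounding MCP client machinery is assumed rather than shown:

```python
# Example arguments for check_direct_sales_rules, following the parameter
# table above. Values are illustrative; only "product" is required.
REQUIRED = {"product"}

arguments = {
    "product": "eggs",                 # required: product ID or name
    "sales_method": "farmers_market",  # optional: farm_gate, farmers_market, online
    "jurisdiction": "GB",              # optional: ISO 3166-1 alpha-2 (default GB)
    "volume": "200 dozen eggs per week",  # optional, for exemption checks
}

missing = REQUIRED - arguments.keys()
assert not missing, f"missing required arguments: {missing}"
print("payload ok")
```
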
Behavior: 2/5

No annotations are provided, so the description carries full disclosure burden. While it mentions covered aspects (registration, exemptions, volume thresholds), it fails to specify what the tool returns (text? structured rules? compliance boolean?), whether it makes external API calls, or if caching is involved. For a regulatory lookup tool with no safety annotations, this is insufficient behavioral disclosure.

Conciseness: 5/5

Two sentences with zero waste: the first establishes scope and venue, the second lists covered regulatory aspects. Front-loaded with the action verb 'Check' and appropriately sized for the complexity.

Completeness: 3/5

Given 4 parameters with full schema coverage but no output schema or annotations, the description adequately covers the domain scope (registration/exemptions). However, it lacks critical context about return value structure or whether this queries a live database versus static rules, which is necessary for a regulatory compliance tool.

Parameters: 4/5

With 100% schema coverage, the baseline is 3. The description adds valuable domain context by mapping 'farm gate, farmers market, or online' to the sales_method parameter and 'volume thresholds' to the volume parameter, helping the agent understand the semantic relationships between inputs and the regulatory domain.

Purpose: 4/5

The description clearly states that the tool checks rules for selling food directly, names specific venues (farm gate, farmers market, online), and its direct-sales focus distinguishes it from broader siblings like get_product_requirements. However, it does not explicitly contrast with check_raw_milk_rules to clarify when to use which.

Usage Guidelines: 3/5

The description implies usage through the specific scope (direct sales from farm gate/market/online), but provides no explicit 'when to use' guidance or prerequisites. The agent must infer from the scope that this is distinct from check_raw_milk_rules or general product requirement tools.

check_raw_milk_rules (A)

Check raw (unpasteurised) milk sale rules by UK devolved administration. CRITICAL: England=permitted (restricted), Scotland=BANNED, Wales=permitted (restricted), Northern Ireland=PROHIBITED.

Parameters (JSON Schema):
- country (optional): Devolved administration: England, Scotland, Wales, Northern Ireland
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- sales_method (optional): Intended sales method (e.g. farm gate, farmers market, retail)
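
The CRITICAL clause in the description encodes a fixed regional mapping, which an agent could keep as a local sanity check. The status strings below paraphrase the description; the server's actual response format is not shown on this page:

```python
# Regional legality of raw milk sales, transcribed from the tool
# description above. For sanity-checking only; not the server's output.
RAW_MILK_STATUS = {
    "England": "permitted (restricted)",
    "Wales": "permitted (restricted)",
    "Scotland": "banned",
    "Northern Ireland": "prohibited",
}

def raw_milk_sale_possible(country: str) -> bool:
    return RAW_MILK_STATUS[country].startswith("permitted")

print(raw_milk_sale_possible("England"))   # True
print(raw_milk_sale_possible("Scotland"))  # False
```
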
Behavior: 4/5

With no annotations provided, the description carries full behavioral burden and delivers substantial value via the CRITICAL section, disclosing the regional legal mappings (Scotland=BANNED, England=permitted, etc.). This prepares the agent for expected return values and constraints.

Conciseness: 5/5

Two sentences with zero waste: first establishes purpose and scope, second delivers critical domain constraints. Perfectly front-loaded and appropriately sized for the tool's complexity.

Completeness: 4/5

Given no output schema exists, the description compensates effectively by documenting the core domain logic (regional status mappings). It provides sufficient context for an agent to understand what information this tool accesses, though it does not explicitly state the return format.

Parameters: 3/5

Schema coverage is 100%, establishing a baseline of 3. The description mentions 'by UK devolved administration' which aligns with the country parameter, but does not add syntax details, validation rules, or semantics beyond what the schema already provides.

Purpose: 5/5

Excellent specificity with 'Check raw (unpasteurised) milk sale rules by UK devolved administration': clear verb (Check), resource (raw milk rules), and scope (UK administrations). The specificity distinguishes it from the more general check_direct_sales_rules sibling.

Usage Guidelines: 3/5

No explicit comparison to siblings or when-to-use guidance provided. However, the description implies usage through extreme specificity ('raw unpasteurised milk'), making it clear this is for raw milk regulation queries specifically, not general food sales.

get_assurance_scheme_requirements (A)

Get details on UK farm assurance schemes: Red Tractor, RSPCA Assured, Soil Association, SALSA, QMS. Standards, audit frequency, costs.

Parameters (JSON Schema):
- scheme (optional): Scheme name or ID (e.g. red-tractor, salsa)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- product_type (optional): Filter schemes by product type
Behavior: 3/5

No annotations provided, so description carries full disclosure burden. Compensates partially by listing return content (standards, audit frequency, costs) and default jurisdiction (GB), but omits error handling behavior, data freshness, permission requirements, and side effects.

Conciseness: 5/5

Two sentences, zero waste. Front-loaded with action and domain, followed by the return value specification. Every clause earns its place: the first sentence establishes scope and examples, the second details the information categories returned.

Completeness: 4/5

Adequate for a 3-parameter lookup tool with no output schema. Description compensates by specifying what details are returned (standards, audit frequency, costs). Would benefit from noting that all parameters are optional, but schema coverage is complete.

Parameters: 4/5

Schema has 100% coverage (baseline 3). Description adds value by enumerating specific scheme examples (Red Tractor, etc.) that map to the 'scheme' parameter values and reinforcing the UK scope which contextualizes the 'jurisdiction' parameter's default value.

Purpose: 5/5

Excellent specificity: verb 'Get' + resource 'UK farm assurance schemes' with concrete examples (Red Tractor, RSPCA Assured, Soil Association, SALSA, QMS). Clearly distinguishes from siblings like get_product_requirements or get_hygiene_requirements by focusing specifically on certification/audit schemes rather than general compliance rules.

Usage Guidelines: 3/5

Implies usage domain through specific scheme enumeration and UK jurisdiction focus, but lacks explicit when-to-use guidance comparing it to get_product_requirements or check_direct_sales_rules. No mention of prerequisites or exclusion criteria.

get_hygiene_requirements (A)

Get hygiene and HACCP requirements for a food business activity. Covers registration, temperature controls, cleaning, and staff training.

Parameters (JSON Schema):
- activity (required): Food business activity (e.g. dairy processing, baking, butchery, market stall)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- premises_type (optional): Premises type (e.g. farm shop, commercial kitchen, mobile)
Behavior: 3/5

With no annotations provided, the description carries the full burden. It adds valuable scope context by listing covered areas (registration, temperature controls, cleaning, staff training), but omits operational details like authentication requirements, rate limits, or return value format.

Conciseness: 5/5

The description consists of two efficient sentences: the first establishes the core function, the second enumerates coverage areas. There is no redundant or filler text; every word earns its place.

Completeness: 4/5

Given the simple 3-parameter flat schema with complete documentation, the description is appropriately scoped. It compensates somewhat for the missing output schema by listing content areas covered. A perfect score would require describing the return format or structure.

Parameters: 3/5

The input schema has 100% description coverage with clear examples (e.g., 'dairy processing', 'ISO 3166-1 alpha-2'). Since the schema comprehensively documents all three parameters, the baseline score is 3. The description mentions 'food business activity' aligning with the required parameter but adds no semantic information beyond the schema.

Purpose: 5/5

The description uses a specific verb ('Get') and clearly identifies the resource ('hygiene and HACCP requirements') and target ('food business activity'). It effectively distinguishes from siblings like get_labelling_requirements and get_product_requirements by specifying the hygiene/HACCP domain.

Usage Guidelines: 3/5

While the description implies usage through its specific domain focus (hygiene vs. labelling/product requirements), it lacks explicit guidance on when to use this tool versus siblings like get_assurance_scheme_requirements or check_raw_milk_rules. No prerequisites or exclusions are stated.

get_labelling_requirements (A)

Get mandatory labelling fields for a product. Returns both general pre-packed requirements and product-specific rules (e.g. egg stamps, honey origin).

Parameters (JSON Schema):
- product (required): Product name or type (e.g. eggs, honey, meat-beef)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
Behavior: 3/5

No annotations provided, so description carries full disclosure burden. It successfully discloses return structure ('general pre-packed requirements and product-specific rules') with concrete examples, but omits operational details like caching, rate limits, error conditions, or data freshness.

Conciseness: 5/5

Two sentences with zero waste: first states purpose, second discloses return value structure. Front-loaded with the critical action and appropriately scoped for the tool's complexity.

Completeness: 4/5

Adequate for a 2-parameter tool with no output schema. Description compensates for missing output schema by describing return categories ('general pre-packed' vs 'product-specific'). Minor gap: could mention jurisdiction defaults behavior (though present in schema) or error scenarios.

Parameters: 3/5

Schema coverage is 100%, establishing baseline 3. Description provides contextual examples ('egg stamps, honey origin') that map to the product parameter values, adding semantic meaning for how inputs relate to outputs. Does not add syntax details or jurisdiction semantics beyond the schema.

Purpose: 4/5

Clear verb ('Get') + resource ('mandatory labelling fields') + scope ('for a product'). Examples ('egg stamps, honey origin') clarify product-specific outputs. Distinguishes implicitly from siblings like get_hygiene_requirements by focusing specifically on labelling, though could explicitly differentiate from get_product_requirements.

Usage Guidelines: 3/5

Provides implied usage through scope specification ('mandatory labelling fields'), indicating this is for compliance/labelling lookups. However, lacks explicit guidance on when to use versus siblings like get_product_requirements or check_direct_sales_rules, and states no prerequisites or exclusions.

get_product_requirements (A)

Get food safety requirements for a specific product by sales channel. Returns registration, approval, temperature, traceability, and labelling requirements.

Parameters (JSON Schema):
- product (required): Product ID or name (e.g. raw-milk, eggs, honey)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- sales_channel (optional): Sales channel: farm_gate, farmers_market, retail, online
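
Since jurisdiction defaults to GB, a client could apply that default before building the call payload. The "GB" value comes from the parameter table above; the merge helper itself is hypothetical:

```python
# Apply schema defaults client-side before building the call payload.
# "GB" comes from the parameter table above; the helper is illustrative.
DEFAULTS = {"jurisdiction": "GB"}

def with_defaults(args: dict) -> dict:
    # Caller-supplied values win over defaults.
    return {**DEFAULTS, **args}

call = with_defaults({"product": "raw-milk", "sales_channel": "farm_gate"})
print(call["jurisdiction"])  # GB
print(sorted(call))          # ['jurisdiction', 'product', 'sales_channel']
```
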
Behavior: 3/5

With no annotations provided, the description carries the full disclosure burden. It successfully documents the return payload structure (listing the five requirement categories), which is valuable given the lack of output schema. However, it omits operational traits like safety (read-only), idempotency, or data freshness that would help the agent understand runtime behavior.

Conciseness: 5/5

The description consists of exactly two high-value sentences with zero waste. The first establishes the operation and primary filter; the second compensates for the missing output schema by enumerating return fields. Every word earns its place.

Completeness: 4/5

Given the absence of an output schema, the description adequately compensates by listing the five requirement categories returned. With 100% schema coverage for inputs and clear sibling differentiation via return-type enumeration, the description provides sufficient context for invocation, though it could explicitly note its role as the comprehensive endpoint.

Parameters: 3/5

Schema description coverage is 100%, establishing a baseline of 3. The description references 'specific product' and 'sales channel', reinforcing the schema's intent, but adds no additional semantic context such as case sensitivity for product names or examples for jurisdiction codes beyond what the schema already provides.

Purpose: 4/5

The description clearly states the verb (Get), resource (food safety requirements), and scoping mechanism (by sales channel). It distinguishes itself from single-purpose siblings like get_labelling_requirements by listing the multiple requirement types returned (registration, approval, temperature, traceability, labelling). However, it does not explicitly position itself as the comprehensive/aggregate option versus the specific requirement tools.

Usage Guidelines: 3/5

The description implies usage context through 'by sales channel', suggesting it should be used when channel-specific requirements are needed. However, it lacks explicit guidance on when to use this comprehensive tool versus the specific siblings (get_labelling_requirements, get_traceability_rules, etc.) or prerequisites for the product parameter.

get_traceability_rules (A)

Get traceability requirements for a product type. Returns record-keeping obligations, retention periods, and one-step-back-one-step-forward rules.

Parameters (JSON Schema):
- product_type (required): Product type (e.g. dairy, eggs, meat, honey)
- species (optional): Animal species if applicable (e.g. cattle, sheep, poultry)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
Behavior: 4/5

With no annotations provided, the description carries full disclosure burden. It compensates by detailing what the tool returns: 'record-keeping obligations, retention periods, and one-step-back-one-step-forward rules', giving clear behavioral expectations about response content.

Conciseness: 5/5

Two sentences with zero waste: first establishes purpose, second details return values. Front-loaded with action verb and appropriately sized for the tool's complexity.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but the description compensates by listing the three categories of returned data. With 3 well-documented parameters and clear scope, the description provides sufficient context for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description references 'a product type', aligning with the required parameter, but adds no syntax guidance, examples, or parameter relationships beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with resource 'traceability requirements' and scope 'for a product type'. It clearly distinguishes from siblings like get_hygiene_requirements or get_labelling_requirements by specifying the traceability domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the specific naming ('traceability') implies usage distinct from sibling tools, there are no explicit when-to-use guidelines, prerequisites (e.g., needing to know product_type first), or mentions of alternative tools for broader compliance checks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_sources (Grade: A)

List all data sources with authority, URL, license, and freshness info.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It hints at behavioral aspects by mentioning 'freshness info' and 'all' (suggesting unfiltered, complete results), but lacks details on rate limits, caching, authentication requirements, or response format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense sentence of 11 words with zero waste. It front-loads the action ('List') and efficiently enumerates the four key data attributes returned, earning its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (no parameters) and lack of output schema, the description adequately compensates by detailing what information is retrieved. It is complete enough for a discovery/metadata tool, though mentioning the return structure (array vs object) would improve it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, which sets a baseline of 4. The description's 'List all' phrasing correctly implies that no filtering is available, consistent with the empty parameter schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (List), scope (all), resource (data sources), and specific metadata fields returned (authority, URL, license, freshness). However, it does not explicitly differentiate from sibling tools like check_data_freshness or search_food_safety.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives. It does not mention prerequisites, nor does it clarify the relationship to check_data_freshness despite overlapping 'freshness' terminology.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_food_safety (Grade: A)

Full-text search across all food safety data: regulations, labelling rules, hygiene requirements, and product guidance.

Parameters (JSON Schema)

- query (required): Free-text search query
- limit (optional): Max results (default: 20, max: 50)
- jurisdiction (optional): ISO 3166-1 alpha-2 code (default: GB)
- product_type (optional): Filter by product type (e.g. dairy, eggs, meat)
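An MCP client would invoke this tool via a JSON-RPC `tools/call` request. The envelope below follows the MCP convention; the argument values are illustrative only, and the server's actual response shape is not documented here.

```python
import json

# Hypothetical MCP tools/call request for search_food_safety.
# The jsonrpc/tools-call envelope follows the MCP specification;
# the query, filters, and limit are illustrative values.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_food_safety",
        "arguments": {
            "query": "raw milk labelling",
            "product_type": "dairy",
            "jurisdiction": "GB",
            "limit": 10,  # must not exceed the documented max of 50
        },
    },
}

# Serialize for transport over Streamable HTTP.
print(json.dumps(request, indent=2))
```

Per the parameter table, only `query` is required; omitting `limit` would fall back to the documented default of 20.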
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the 'full-text' search behavior (indicating keyword matching rather than exact lookup) and delineates the data scope. However, it lacks details on return format, pagination behavior beyond the limit parameter, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the action ('Full-text search') and immediately qualifies the scope. There is no redundant or wasted text; every word serves to define the tool's function or data coverage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 simple parameters with complete schema documentation and no output schema, the description adequately covers the functional scope by listing the types of data searched. It appropriately leaves parameter details to the schema, though it could briefly mention that results include matching regulations and guidance documents.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all 4 parameters (query, limit, jurisdiction, product_type). The description does not add additional semantic context beyond what the schema already provides (e.g., query syntax examples, jurisdiction defaults), warranting the baseline score for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Full-text search' and clearly identifies the resource scope as 'all food safety data' including specific categories (regulations, labelling rules, etc.). It implicitly distinguishes from sibling tools like 'get_labelling_requirements' by emphasizing comprehensiveness ('across all'), though it doesn't explicitly name the alternative tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the comprehensive scope ('across all food safety data'), suggesting it should be used for broad searches covering multiple categories. However, it lacks explicit guidance on when to use this versus the specific 'get_' sibling tools (e.g., get_hygiene_requirements) for targeted lookups.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Verify Ownership

Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [
    {
      "email": "your-email@example.com"
    }
  ]
}

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
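The ownership check described above can be sketched as a small validation step: fetch `/.well-known/glama.json` from the server's domain and confirm a maintainer email is listed. The validation logic below is an assumption based on the prose; Glama's actual verification process may differ.

```python
import json

def has_maintainer_email(raw: str, email: str) -> bool:
    """Return True if the glama.json payload lists the given maintainer email.

    This mirrors the verification rule described above; it is a sketch,
    not Glama's actual implementation.
    """
    doc = json.loads(raw)
    maintainers = doc.get("maintainers", [])
    return any(m.get("email") == email for m in maintainers)

# Payload matching the example structure shown above.
payload = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
print(has_maintainer_email(payload, "your-email@example.com"))  # → True
```

In practice the payload would be fetched over HTTPS from the domain hosting the MCP server, and the email compared against the one on the Glama account.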
