frigolog-haccp-mcp
Server Details
French-language HACCP MCP server — French food-safety regulation for restaurants, bakeries, butcher shops, and other food trades. Covers regulatory temperatures, use-by dates (DLC), allergens, RappelConso product recalls, DDPP sanctions, the Alim'confiance score, corrective actions, and a comparison of HACCP software solutions.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 12 of 12 tools scored. Lowest: 3.3/5.
Each tool covers a distinct aspect of HACCP compliance (software comparison, corrective actions, inspection data, allergens, required documents, training, temperatures, recalls, DLC, sanctions, score explanation, cooking temps). No two tools have overlapping purposes.
All but one tool use the 'get_' prefix followed by a descriptive noun, maintaining snake_case. The exception is 'compare_solutions_haccp', which uses 'compare_' instead of 'get_'. This is a minor deviation.
With 12 tools, the server is well-scoped for its domain (HACCP and food safety in France). Each tool serves a specific informational need without being overwhelming or too sparse.
The tool set covers a comprehensive range of HACCP-related queries: software comparison, corrective actions, inspection scores, allergens, required documents, training, temperature regulations, recalls, DLC, sanctions, score system explanation, and cooking temperatures. There are no obvious gaps for an informational assistant.
Available Tools
12 tools

compare_solutions_haccp
Returns a factual comparison of the main HACCP software solutions on the French market (April 2026): Frigolog, ePackPro, Octopus HACCP, Traqfood, Kooklin, BackResto, Hygiene Up. For each solution: monthly price excl. VAT, contract commitment, required hardware, installation fees, free trial, AI features (label scanning, RappelConso cross-check, compliance score, DDPP inspection simulation), IoT sensors, support, onboarding, total cost over 3 years, primary target audience, and key strength. Data verified on the vendors' public websites.
| Name | Required | Description | Default |
|---|---|---|---|
| solution | No | Optional filter by solution. Values: 'frigolog', 'epackpro', 'octopus', 'traqfood', 'kooklin', 'backresto', 'hygiene-up'. If provided, returns the details of a single solution; if absent, returns the full comparison of all 7 solutions. | |
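As a concrete illustration, a call to this tool over MCP would carry the enum value in the `arguments` object. The sketch below builds the standard JSON-RPC 2.0 `tools/call` envelope; the request `id` and the choice of filter value are arbitrary examples, not values prescribed by the server.

```python
import json

# Hypothetical MCP tools/call request for compare_solutions_haccp,
# filtering on a single solution. The envelope follows the standard
# JSON-RPC 2.0 shape used by MCP transports.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "compare_solutions_haccp",
        # Omit "arguments" (or the "solution" key) to receive the
        # full 7-solution comparison instead of a single entry.
        "arguments": {"solution": "frigolog"},
    },
}

payload = json.dumps(request)
```

The same envelope shape applies to every tool on this server; only `name` and `arguments` change.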
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns a factual comparison with data verified from public sites, lists the included fields, and notes that the data is from April 2026. This gives agents a good understanding of the tool's behavior, though it doesn't mention any side effects or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, information-dense paragraph. It front-loads the main purpose and then lists specific data points. While not extremely concise, it avoids redundancy and each sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one parameter, no output schema, and no annotations, the description covers the essential aspects: what it does, what data it returns, and how the parameter affects output. It is complete enough for an agent to use without additional context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has one optional parameter with 100% description coverage. The description adds meaning by explaining that if 'solution' is provided, it returns details for that solution, and if absent, returns the full comparison. This clarifies the two modes of operation beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states that the tool returns a factual comparison of HACCP software solutions, listing seven specific names and their details. The leading verb clearly identifies the action, and the resource is well-defined, distinguishing it from sibling tools about documents, temperatures, and regulations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when a comparison of HACCP solutions is needed, but lacks explicit guidance on when not to use it or how it differs from alternatives. No context about prerequisites or limitations is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_actions_correctives
Returns the regulatory corrective actions to apply for the 6 most frequent non-conformities in food service: what to do when a fridge is too warm, an expired product is in stock, the cold chain is broken at delivery, a delivery is non-compliant, pests are present, or the cleaning plan has not been carried out. For each non-conformity: the immediate action (first 30 minutes), the documentary action to record in the PMS, the resolution deadline, the conditions for alerting the DDPP, and a concrete example of a correction sheet.
| Name | Required | Description | Default |
|---|---|---|---|
| type_non_conformite | No | Optional filter by non-conformity type. Values: 'temperature', 'dlc_depassee', 'rupture_chaine_froid', 'livraison_non_conforme', 'nuisibles', 'nettoyage', 'tous'. If absent or 'tous', returns all non-conformities. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description details what is returned (immediate action, documentary action, resolution time, DDPP alert conditions, example). It implies a read-only retrieval without side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single verbose paragraph that front-loads purpose but includes extensive detail. Could benefit from bullet points or clearer structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (1 optional parameter, no output schema), the description is comprehensive, covering what is returned for each non-conformity and examples.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds context about default behavior ('tous'), but does not significantly extend the schema's parameter description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns regulatory corrective actions for 6 specific non-conformities in restaurants, listing examples and detailing content for each. This is specific and distinct from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for the listed non-conformities, but does not explicitly state when not to use or point to alternatives. However, sibling tools are clearly differentiated in purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_alimconfiance_etablissement
Searches for the Alim'confiance score of a specific establishment in the official DGAL dataset (export_alimconfiance, dgal.opendatasoft.com — 72,887 records). For each inspection found, returns: the sanitary score (Très satisfaisant / Satisfaisant / À améliorer / À corriger de manière urgente), the date of the official inspection, SIRET, trade name, legal name, full address, postal code, municipality, activity type, and inspection number. Search by SIRET (recommended: a unique, unambiguous identifier) or by trade name, with an optional postal-code and/or municipality filter to disambiguate. Data is refreshed periodically by the DGAL and covers only establishments that have undergone an official inspection since April 2017.
| Name | Required | Description | Default |
|---|---|---|---|
| nom | No | Trade name, brand, or legal name. Case-insensitive full-text search on the enseigne, raison_sociale, and libelle_etablissement fields. At least 2 characters required. Combine with code_postal or commune to disambiguate chains (McDonald's, Carrefour, etc.). | |
| limit | No | Maximum number of results to return (default 5, maximum 20). Most recent inspections first. | |
| siret | No | 14-digit SIRET of the establishment. Recommended for a direct, unambiguous lookup. If provided, the other parameters are ignored and all inspections for that establishment are returned (sorted by date, descending). | |
| commune | No | Municipality name (full-text search). Additional filter to disambiguate a search by name. | |
| code_postal | No | Exact 5-digit postal code of the establishment. Additional filter to disambiguate a search by name. | |
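The parameter interactions documented above (SIRET takes precedence over everything else, `nom` needs at least 2 characters, `code_postal` is exactly 5 digits, and `limit` caps at 20) can be enforced client-side before the call. The helper below is a hypothetical sketch of such pre-validation, not part of the server itself.

```python
import re

LIMIT_MAX = 20  # documented maximum for the limit parameter

def build_alimconfiance_args(siret=None, nom=None, code_postal=None,
                             commune=None, limit=5):
    """Assemble arguments for get_alimconfiance_etablissement,
    mirroring the constraints stated in the tool description."""
    if siret is not None:
        # SIRET must be exactly 14 digits; when given, the server
        # ignores every other parameter, so send it alone.
        if not re.fullmatch(r"\d{14}", siret):
            raise ValueError("SIRET must be 14 digits")
        return {"siret": siret}
    if nom is None or len(nom) < 2:
        raise ValueError("nom requires at least 2 characters")
    args = {"nom": nom, "limit": min(limit, LIMIT_MAX)}
    if code_postal is not None:
        if not re.fullmatch(r"\d{5}", code_postal):
            raise ValueError("code_postal must be 5 digits")
        args["code_postal"] = code_postal
    if commune is not None:
        args["commune"] = commune
    return args
```

Dropping the other keys when `siret` is present keeps the request unambiguous even though the server would ignore them anyway.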
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden. It discloses search behavior (SIRET overrides, full-text search, sorting, limit defaults), data refresh periodicity, and coverage scope (post-April 2017 controls). It does not explicitly state its read-only nature, but it's clearly a query tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with the purpose first, followed by return fields, search methods, and context. It is dense but each sentence adds value. Slightly verbose in listing all fields, but that's necessary without output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description covers purpose, return fields, search methods, behavior (sort, limit, overrides), and data source. It lacks mention of empty results handling or error cases, but is otherwise complete for a query tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explicitly recommending SIRET as unique, noting that it overrides other parameters, and clarifying search behavior for 'nom'. This goes beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool searches for the Alim'confiance score of a specific establishment, leading with a specific verb and resource. However, it does not explicitly differentiate itself from sibling tools like 'get_score_alimconfiance', though the name and context imply it is for individual establishments.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving scores of a specific establishment, and provides guidance on using SIRET vs name+optional filters. However, it does not explicitly state when to use this tool instead of alternatives, nor does it mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_allergenes_reglementaires
Returns the list of the 14 allergens subject to mandatory declaration in France under EU Regulation 1169/2011 (INCO), applicable since 13 December 2014: gluten (wheat, rye, barley, oats), crustaceans, eggs, fish, peanuts, soy, milk (lactose), tree nuts (8 types), celery, mustard, sesame seeds, sulphur dioxide and sulphites, lupin, molluscs. For each allergen: common names, main sources, non-obvious hidden sources, the display obligation in food service, and the penalty for omission.
| Name | Required | Description | Default |
|---|---|---|---|
| allergene | No | Optional filter by allergen. Values: 'gluten', 'crustaces', 'oeufs', 'poissons', 'arachides', 'soja', 'lait', 'fruits_a_coque', 'celeri', 'moutarde', 'sesame', 'sulfites', 'lupin', 'mollusques', 'tous'. If absent or 'tous', returns all 14 allergens. | |
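Since the server returns the full list when the filter is absent or set to 'tous', a client can validate the filter against the documented enum before calling. A minimal sketch, with the value set copied from the parameter table above:

```python
# Accepted filter values for get_allergenes_reglementaires, as listed
# in the parameter table. 'tous' (all) matches the no-filter default.
ALLERGENE_VALUES = {
    "gluten", "crustaces", "oeufs", "poissons", "arachides", "soja",
    "lait", "fruits_a_coque", "celeri", "moutarde", "sesame",
    "sulfites", "lupin", "mollusques", "tous",
}

def allergene_filter(value=None):
    """Return the arguments dict for the tool call, rejecting values
    outside the documented enum (client-side sketch only)."""
    if value is None:
        return {}  # the server treats a missing filter like 'tous'
    if value not in ALLERGENE_VALUES:
        raise ValueError(f"unknown allergene filter: {value!r}")
    return {"allergene": value}
```

Rejecting unknown values locally avoids a round trip for typos such as 'sesames' or 'oeuf'.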
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description fully covers the tool's behavior: it returns a list, supports optional filtering, and details the data included for each allergen. There is no deceptive or missing behavioral information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph but is dense with relevant information and front-loaded with the purpose. While not overly concise, every sentence adds value, and no redundancy is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description thoroughly explains the return content and filter options. For a simple tool with one optional parameter, it provides complete context for correct invocation and interpretation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes the 'allergene' parameter with 100% coverage, and the description adds extra semantics by stating the default behavior when the parameter is absent or set to 'tous', providing guidance beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns the list of 14 mandatory allergens in France, referencing the specific EU regulation. It specifies the content per allergen (common names, sources, hidden sources, display obligations, sanctions), distinguishing it from sibling tools like HACCP or DLC checks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the optional filter parameter and its allowed values, and implies usage for French regulatory allergen information. However, it does not explicitly state when not to use the tool or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_documents_controle_ddpp
Returns the list of documents that a DDPP inspector (Direction Départementale de la Protection des Populations) may request during a sanitary inspection in France, by establishment type. Includes the common core (12 documents mandatory for every food establishment: PMS, HACCP training, temperature logs, cleaning plan, traceability, etc.) and trade-specific documents (butcher, cheesemonger, fishmonger, caterer, ice-cream maker, wine merchant, restaurant, bakery, institutional catering).
| Name | Required | Description | Default |
|---|---|---|---|
| type_etablissement | No | Optional filter by establishment type. Values: 'restaurant', 'boulangerie', 'boucherie', 'fromagerie', 'traiteur', 'poissonnerie', 'glacier', 'caviste', 'collectivite'. If provided, returns the common core plus the trade-specific documents. If absent, returns all documents. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description only says 'returns the list'. It does not disclose behavioral traits such as authentication needs, rate limits, or side effects. The description carries the full burden but adds minimal behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the purpose, and efficiently elaborates on content. Every sentence is necessary and there is no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has one optional parameter and no output schema. The description explains the parameter behavior and content, but does not specify the output structure (format of the list) or handle edge cases, leaving some gaps for a fully complete specification.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the parameter already described. The description adds marginal value by explaining the logic of common vs specific documents, but does not provide additional semantic depth beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a list of documents for a DDPP health control, distinguishing it from sibling tools like compare_solutions_haccp or get_haccp_temperatures by focusing on inspection documents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates usage for retrieving required documents, but lacks explicit guidance on when to use it versus siblings or exclusion criteria. The context makes it clear by topic.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_formation_haccp_obligatoire
Returns the legal food-hygiene (HACCP) training obligations for commercial food service in France: who must be trained (traditional restaurants, fast food, caterers, food trucks), who is exempt (3 years of experience or a relevant diploma), the mandatory training content, the practical duration (14 hours), the average cost, Qualiopi-certified training bodies, validity, and the penalty (up to €1,500) if training is missing during a DDPP inspection. Legal basis: article L.233-4 of the Code rural, order of 5 October 2011 as amended on 12 February 2024.
| Name | Required | Description | Default |
|---|---|---|---|
| type_etablissement | No | Optional filter by establishment type (informational only — the training obligation is identical for all types of commercial food service). Values: 'restaurant', 'restauration_rapide', 'traiteur', 'food_truck', 'cafeteria', 'tous'. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses what the tool returns (obligations, exemptions, content, etc.) and that it is informational. For a read-only tool, this is sufficient, though no side effects or auth requirements are mentioned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence but covers all necessary aspects. It is front-loaded with the main purpose. While concise, a structured list could improve readability, but the current form is still clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description fully explains the return content (obligations, exemptions, duration, cost, etc.). The tool is low complexity with one optional parameter, and the description provides complete information for an agent to understand what to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a single parameter. The description adds value beyond the schema by clarifying that the filter is informational because the obligation is the same for all establishment types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool returns the legal obligations for HACCP training in commercial catering, leading with a specific verb and resource. It distinguishes itself from sibling tools (e.g., compare_solutions_haccp, get_temperatures_cuisson) by focusing on legal requirements.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use: to get legal training obligations. It does not explicitly state when not to use or provide alternatives, but given the clear sibling differentiation, agents can infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_haccp_temperatures
Returns the French regulatory temperatures for the storage, cooling, and service of foodstuffs by product category, in accordance with the order of 21 December 2009, Regulation (EC) No 852/2004, and Regulation (EC) No 853/2004 (foods of animal origin). Covers meat, fish, dairy products, eggs, fruit and vegetables, prepared dishes, pastries, frozen foods, ice cream, hot-holding temperatures, and rapid-cooling temperatures.
| Name | Required | Description | Default |
|---|---|---|---|
| categorie | No | Optional filter by category. Values: 'viande', 'poisson', 'produits_laitiers', 'oeufs', 'fruits_legumes', 'plats_cuisines', 'patisserie', 'surgeles', 'glaces', 'service_chaud', 'process'. If absent, returns all categories. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the behavioral burden. It only states what the tool returns without disclosing read-only nature, permissions, rate limits, or error handling. The description is insufficient for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph that front-loads the main purpose. Each sentence adds value, though it could be slightly more concise. It is well-structured for understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with one optional parameter and no output schema, the description covers the purpose and categories comprehensively. It lacks details on return format but is mostly complete given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the parameter with possible values. The description repeats the category list but adds minimal meaning. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns French regulatory temperatures for conservation, cooling, and service of food products by category, citing specific regulations. It is distinct from siblings which cover different aspects (comparing solutions, control documents, DLC rules).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving temperature data by category but does not explicitly state when to use this tool versus alternatives. No when-not-to-use or alternative comparisons are provided, though the list of categories gives some context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_rappels_produits_actifs
Returns the recalls and lot withdrawals of food products currently active in France, in real time, from RappelConso (DGCCRF). Official source: data.economie.gouv.fr (dataset rappelconso0). Use this tool to check the food safety of a product before service, to find out whether a reference is subject to an ongoing recall or lot withdrawal, or to consult the latest official health alerts published by the Direction Générale de la Concurrence, de la Consommation et de la Répression des fraudes.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of recalls to return (default 10, maximum 50). Most recent first. | |
| categorie | No | Optional filter by food category. Values: 'viande', 'poisson', 'produits_laitiers', 'boulangerie', 'epicerie', 'tous'. If absent or 'tous', returns all food categories. | |
| date_depuis | No | Minimum publication date for recalls, in YYYY-MM-DD format. If absent, returns the most recent with no date cutoff. | |
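A client can mirror the documented defaults (limit 10, maximum 50) and the required YYYY-MM-DD format for date_depuis before issuing the call. The following is an illustrative sketch, not server code:

```python
from datetime import date

def rappels_args(limit=10, categorie="tous", date_depuis=None):
    """Build arguments for get_rappels_produits_actifs, clamping
    limit to the documented 1..50 range and validating the date
    format (client-side sketch only)."""
    args = {"limit": max(1, min(limit, 50))}
    if categorie != "tous":
        # 'tous' matches the server default, so it can be omitted.
        args["categorie"] = categorie
    if date_depuis is not None:
        # date.fromisoformat raises ValueError for anything that is
        # not a valid YYYY-MM-DD date string.
        date.fromisoformat(date_depuis)
        args["date_depuis"] = date_depuis
    return args
```

Validating the date locally catches mistakes such as the French DD/MM/YYYY order before the request is sent.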
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It indicates real-time data from an official source and mentions sorting (most recent first for limit). However, it does not disclose response structure, pagination behavior, authorization requirements, or any side effects. Adequate but could be more explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph with two sentences, fairly concise and direct. It includes the essential information without excessive verbosity. It could be broken into bullet points for better scannability, but it's efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 optional parameters, no output schema, and no annotations, the description explains the tool's purpose and data source but omits response format details and any limitations (e.g., France-only scope). It covers basic usage but lacks completeness for an agent to fully understand the return data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with descriptions. The description adds value by clarifying 'les plus récents en premier' for limit, listing possible category values, and specifying date format. This enhances understanding beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns active food product recalls from RappelConso, specifying the official source (data.economie.gouv.fr) and context (check food safety before service). It effectively distinguishes from siblings like get_alimconfiance_etablissement which are about establishment hygiene scores.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says when to use: to verify food safety before service, check if a reference is recalled, or consult latest health alerts. It implies context but doesn't explicitly state when not to use or list alternative sibling tools. Still, it provides clear usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
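Putting the parameters above together, a client would invoke the recalls tool through a standard MCP `tools/call` request. A minimal Python sketch, assuming the tool is named `get_rappels_produits` (the actual name is not shown in this listing) and validating the documented constraints before the call:

```python
import re

# Allowed values for `categorie`, per the parameter table above.
VALID_CATEGORIES = {"viande", "poisson", "produits_laitiers",
                    "boulangerie", "epicerie", "tous"}

def build_rappels_request(categorie="tous", date_depuis=None, request_id=1):
    """Build a JSON-RPC 2.0 `tools/call` payload for the recalls tool.
    The tool name below is an assumption; adjust to the server's real name."""
    if categorie not in VALID_CATEGORIES:
        raise ValueError(f"categorie invalide: {categorie!r}")
    arguments = {"categorie": categorie}
    if date_depuis is not None:
        # The listing specifies YYYY-MM-DD for date_depuis.
        if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", date_depuis):
            raise ValueError("date_depuis doit suivre le format YYYY-MM-DD")
        arguments["date_depuis"] = date_depuis
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "get_rappels_produits",  # assumed tool name
                   "arguments": arguments},
    }

payload = build_rappels_request(categorie="viande", date_depuis="2024-06-01")
```

Actually sending the payload (over Streamable HTTP, per the server details above) is left to whichever MCP client library is in use.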
get_regles_dlc (A)
Renvoie les règles de DLC (Date Limite de Consommation) pour les préparations maison en restauration et métiers de bouche en France, conformément au Guide des Bonnes Pratiques d'Hygiène en Restauration de la DGAL. Couvre viandes cuites/crues, salades composées, sandwiches, pâtisseries à la crème, sauces (émulsionnées et cuites), soupes, plats cuisinés, sous-vide cuisson basse température, produits décongelés, produits entamés. Pour chaque préparation : DLC en jours, température de conservation requise, source réglementaire.
| Name | Required | Description | Default |
|---|---|---|---|
| type_preparation | No | Filtre optionnel par type de préparation. Recherche par mot-clé dans le nom (ex: 'viande', 'salade', 'sauce', 'pâtisserie', 'soupe', 'sous-vide'). Si absent, renvoie toutes les catégories. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description discloses that the output includes the DLC in days, the required storage temperature, and the regulatory source, and it lists the many categories covered. It does not mention limitations but is sufficiently transparent for a lookup tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single paragraph front-loaded with purpose, but slightly long due to listing many categories. Could be more concise, but each sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description compensates by explaining output format (days, temperature, source) and lists all covered categories. Tool is simple with one optional param, so completeness is high.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one parameter with 100% schema coverage. Description adds no extra meaning beyond the schema's own description. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool returns DLC rules for homemade preparations in French restaurants, listing specific categories. Verb 'Renvoie' and resource are specific. Distinguishes from siblings which cover HACCP solutions and documents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates when to use (need DLC rules for French restaurant preparations), but lacks explicit when-not or alternative tools. However, siblings are different enough to avoid confusion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
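The keyword search described for type_preparation amounts to a plain substring match over preparation names. A hedged sketch of that behavior — the sample rows are invented for illustration and are not regulatory DLC values:

```python
# Invented sample rows; real DLC values come from the tool itself.
SAMPLE_DLC = [
    {"preparation": "viande cuite", "dlc_jours": 3, "conservation": "0 à +4 °C"},
    {"preparation": "salade composée", "dlc_jours": 2, "conservation": "0 à +4 °C"},
    {"preparation": "sauce émulsionnée", "dlc_jours": 1, "conservation": "0 à +4 °C"},
]

def filter_dlc(rows, type_preparation=None):
    # Absent filter: return every category, as the schema states.
    if not type_preparation:
        return list(rows)
    mot = type_preparation.lower()
    return [r for r in rows if mot in r["preparation"].lower()]
```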
get_sanctions_ddpp (B)
Retourne les sanctions et risques d'inspection applicables lors d'un contrôle DDPP en restauration et métiers de bouche en France : 4 niveaux (observation, mise en demeure, procès-verbal, fermeture administrative), seuils déclencheurs, délais de mise en conformité, montants d'amende (de 1 500 € à 75 000 €), risques d'emprisonnement et voies de recours. Base légale : Code rural et de la pêche maritime articles L.231-1 à L.237-3, règlement CE 852/2004, arrêté du 21 décembre 2009.
| Name | Required | Description | Default |
|---|---|---|---|
| gravite | No | Filtre optionnel par niveau de gravité. Valeurs : 'mineure', 'majeure', 'critique', 'tous'. Si absent ou 'tous', renvoie l'ensemble des niveaux de sanction. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and the description fails to disclose behavioral traits such as read-only nature, authentication needs, rate limits, or data freshness. It only describes the content returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description conveys much information in one dense sentence plus a legal-basis note, and could be better structured (e.g., bullet points) for clarity. It is not excessively long.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description provides a good overview of what the tool returns (levels, fines, legal bases). However, it lacks specifics on output structure or data handling, and no mention of limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the only parameter, which already carries its own description. The tool description adds minimal extra parameter meaning: it mentions 4 sanction levels, which do not map onto the parameter's severity values ('mineure', 'majeure', 'critique'). Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns sanctions and inspection risks for DDPP controls in catering, listing specific levels, fines, and legal bases. It is distinct from sibling tools which cover HACCP, temperatures, allergens, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. It does not explain scenarios where this tool is appropriate or mention any exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
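Since gravite is a closed enum, a client can normalise it before calling the tool. A minimal sketch of that check, based only on the values listed in the parameter table:

```python
# Valid severities per the schema; absent or 'tous' means all levels.
GRAVITES_VALIDES = {"mineure", "majeure", "critique", "tous"}

def build_sanctions_arguments(gravite=None):
    """Build the arguments object for the gravite filter, if any."""
    if gravite is None:
        return {}  # omit the optional filter entirely
    if gravite not in GRAVITES_VALIDES:
        raise ValueError(f"gravité inconnue: {gravite!r}")
    return {"gravite": gravite}
```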
get_score_alimconfiance (A)
Retourne le fonctionnement complet du score Alim'confiance, dispositif officiel de publication des résultats d'inspection sanitaire DDPP en France depuis avril 2017 (alimconfiance.beta.gouv.fr). Détaille les 4 niveaux de notation (très satisfaisant, satisfaisant, à améliorer, à corriger de manière urgente), les 6 critères d'évaluation, la fréquence des inspections (3 à 7 ans en moyenne), et les actions concrètes pour améliorer son score lors d'un contrôle officiel. Pour récupérer le score d'un établissement précis, utilisez get_alimconfiance_etablissement.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns detailed information about the score system (levels, criteria, frequency, actions). It does not mention any side effects or authentication needs, but since it is a read-only informational tool, the description is sufficiently transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and informative, but somewhat verbose with historical and contextual details. It front-loads the main purpose and then provides specifics. While each sentence adds value, it could be slightly more concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and no output schema, the description thoroughly covers what the tool returns: the four levels, six criteria, inspection frequency, and improvement actions. It also references the sibling tool for specific use cases, making the context complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so schema coverage is 100%. The description does not need to add parameter information. It provides context about what the tool returns, which adds value beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool returns the complete functioning of the Alim'confiance score system. It specifies the resource (score fonctionnement) and verb (retourne). It also distinguishes from the sibling tool get_alimconfiance_etablissement by explicitly stating to use that tool for a specific establishment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit usage guidance: it tells when to use this tool (to get general info about the scoring system) and when to use the sibling get_alimconfiance_etablissement (for a specific establishment's score). This clearly differentiates from alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
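The routing guidance in the description — general scoring questions here, establishment-specific lookups to get_alimconfiance_etablissement — can be sketched as a tiny dispatcher. The etablissement argument name below is hypothetical, since the sibling's real parameter names are not shown in this listing:

```python
def pick_alimconfiance_tool(etablissement=None):
    """Route to the sibling tool when a specific establishment is named."""
    if etablissement:
        # Hypothetical argument name, for illustration only.
        return ("get_alimconfiance_etablissement",
                {"etablissement": etablissement})
    return ("get_score_alimconfiance", {})
```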
get_temperatures_cuisson (A)
Retourne les températures à cœur réglementaires obligatoires en cuisson par type d'aliment en restauration française (GBPH Restaurateur DGAL) : volaille 74 °C, bœuf haché 70 °C, porc 70 °C, poisson 63 °C, etc. Inclut le refroidissement rapide (de +63 °C à +10 °C en moins de 2 heures), la remise en température à +63 °C minimum et les recommandations de sécurité spécifiques aux populations sensibles (enfants, femmes enceintes, immunodéprimés). Distinct des températures de conservation disponibles via get_haccp_temperatures.
| Name | Required | Description | Default |
|---|---|---|---|
| type_aliment | No | Filtre optionnel par type d'aliment ou de procédé. Recherche par mot-clé dans l'identifiant ou l'intitulé (ex: 'volaille', 'boeuf', 'poisson', 'refroidissement', 'remise'). Si absent, renvoie tous les aliments et procédés. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries full responsibility. It discloses the data returned: core temperatures, rapid cooling requirements, reheating temperatures, and recommendations for sensitive populations. However, it does not describe output structure, pagination, or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and each sentence adds useful information. It could be slightly more concise, but it remains efficient and front-loaded with key details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description covers the main data points and distinctions. However, it omits details about the response format (e.g., array structure) and does not mention any potential error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds significant value: it clarifies that the parameter is a keyword search on identifier or label, provides example values, and states the default behavior when omitted.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns mandatory cooking core temperatures by food type for French restaurants, listing specific examples (volaille 74 °C, etc.). It explicitly distinguishes itself from the sibling tool get_haccp_temperatures, which handles conservation temperatures.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states the default behavior of the optional type_aliment filter (returns everything when absent) and explicitly routes conservation-temperature queries to the sibling get_haccp_temperatures, which is useful selection guidance. It does not, however, offer broader when-not-to-use guidance covering the rest of the tool set.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
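As a worked example of the thresholds quoted in the description (volaille 74 °C, bœuf haché 70 °C, porc 70 °C, poisson 63 °C), a local compliance check might look like this — the dictionary mirrors only the values named above, not the tool's full data set:

```python
# Core-temperature thresholds quoted in the tool description (GBPH DGAL).
SEUILS_CUISSON_C = {
    "volaille": 74.0,
    "boeuf_hache": 70.0,
    "porc": 70.0,
    "poisson": 63.0,
}

def cuisson_conforme(type_aliment, temperature_coeur_c):
    """True when the measured core temperature meets the listed threshold."""
    seuil = SEUILS_CUISSON_C.get(type_aliment)
    if seuil is None:
        raise ValueError(f"type d'aliment inconnu: {type_aliment!r}")
    return temperature_coeur_c >= seuil
```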
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
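A quick way to sanity-check the file before publishing is to validate the two documented fields locally. A sketch of such a check — this is not Glama's actual verification logic:

```python
import json

EXPECTED_SCHEMA = "https://glama.ai/mcp/schemas/connector.json"

def validate_glama_json(text):
    """Return a list of problems found in a /.well-known/glama.json payload."""
    data = json.loads(text)
    errors = []
    if data.get("$schema") != EXPECTED_SCHEMA:
        errors.append("unexpected or missing $schema")
    maintainers = data.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        errors.append("maintainers must be a non-empty list")
    else:
        for m in maintainers:
            if not isinstance(m, dict) or "@" not in str(m.get("email", "")):
                errors.append("each maintainer needs an email address")
    return errors

example = ('{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
           ' "maintainers": [{"email": "owner@example.com"}]}')
```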
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.