france-data-mcp

Server Details

French territorial intelligence MCP: cross-reference health, business & geo public registries.

Status: Healthy
Transport: Streamable HTTP
Repository: cturkieh/france-data-mcp
GitHub Stars: 1
Server Listing
France Data MCP

Tool Descriptions — A

Average 4.4/5 across 30 of 30 tools scored. Lowest: 3.7/5.

Server Coherence — A
Disambiguation 5/5

Each tool targets a distinct purpose (e.g., commune lookup, establishment density, professional search by source, reconciliation). Overlaps like multiple 'in_radius' tools are clearly separated by data source and entity type, with descriptions explicitly noting differences.

Naming Consistency 4/5

Naming largely follows a verb_noun pattern, mixing English and French (e.g., 'autocomplete_commune', 'lister_specialites_ameli'), which is consistent for the French domain. There are minor inconsistencies, such as 'data_freshness' vs 'etablissement_by_finess', but naming is overall predictable.

Tool Count 4/5

30 tools is on the high side but justified by the number of data sources (FINESS, RPPS, Ameli, SIRENE, INSEE) and the variety of query types (density, radius, department listings, reconciliation). Each tool serves a clear, non-redundant function.

Completeness 5/5

The tool set covers the full lifecycle of French health data queries: lookup by identifiers (FINESS, SIRET, RPPS), spatial searches (radius, department), density calculations, data freshness monitoring, cross-referencing between sources, and historical reconstruction. No obvious gaps for a read-only data server.

Available Tools

30 tools
autocomplete_commune — A
Read-only · Idempotent

Search French communes by name, postal code, or INSEE code. Ideal for autocompletion. Source: geo.api.gouv.fr (DINUM/Etalab).

Parameters (JSON Schema)

  nom (optional) — Search by name (autocompletion).
  code (optional) — Exact INSEE code (5 characters).
  limit (optional) — Max number of results (1-30, default 10).
  codePostal (optional) — Exact postal code (5 digits).
  boostPopulation (optional) — Sort by descending population. Recommended for ambiguous names (e.g. 'Charleville').
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, fully covering behavioral expectations. The description adds the data source (geo.api.gouv.fr) but no additional behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences, front-loaded with the core purpose, and contains no superfluous information. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple search tool with no output schema, the description covers the search dimensions and data source. It does not specify what fields are returned, but the schema provides some hints. Minor gap for a fully self-contained description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%; each parameter is described. The description adds no extra meaning beyond summarizing the search criteria (name, postal code, INSEE code). The baseline of 3 applies, as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it searches French communes by name, postal code, or INSEE code, and explicitly says it's ideal for autocompletion. This distinguishes it from sibling tools like get_commune_by_code (exact lookup) or health-professional tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says 'Ideal for autocompletion' and mentions the boostPopulation parameter for ambiguous names, providing context for use. However, it does not explicitly state when not to use it or compare it to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_raison_sociale_finess_vs_rpps — A
Read-only · Idempotent

Compares the FINESS DREES business name (raison sociale) vs RPPS / Annuaire Santé ANS for the same num_finess. A raw primitive WITHOUT business interpretation — it returns just the two labels plus a comparison status. The caller decides what to do with any divergence.

Usefulness: RPPS often reflects post-M&A rebrandings faster than FINESS DREES (e.g. an acquired site remains 'DIAGNOVIE' at DREES while it is already 'BIOGROUP NORD' at ANS). This tool exposes the factual divergence; it does NOT say who acquired whom (that relies on non-public knowledge of commercial brands).

Returned status (the statut field is present only on the found: true branch):

  • exact_match: FINESS and ≥1 RPPS entry are strictly equal after normalization

  • divergent_after_normalization: no RPPS entry matches FINESS — a true divergence

  • rpps_absent: no RPPS entry has declared this FINESS (pivot impossible)

Format: a LookupResult object discriminated by found. When num_finess is absent from FINESS DREES, the tool returns {found: false, lookupStatus: 'not_found', message, ...} — there is NO statut field in that case.
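The branch structure above can be consumed with a plain dispatch on found; a minimal sketch in Python, assuming the tool result arrives as a parsed JSON dict (the helper name and the verdict strings are illustrative):

```python
def interpret_comparison(result):
    """Turn a compare_raison_sociale_finess_vs_rpps result into a short verdict.

    The tool returns a LookupResult discriminated by 'found'; the 'statut'
    field only exists on the found=True branch.
    """
    if not result["found"]:
        # not_found: the num_finess is absent from FINESS DREES entirely
        return f"lookup failed: {result['lookupStatus']}"
    statut = result["statut"]
    if statut == "exact_match":
        return "names agree after normalization"
    if statut == "rpps_absent":
        return "no RPPS entry declares this FINESS (cannot pivot)"
    # divergent_after_normalization: surface both labels, let the caller decide
    return "names diverge; manual review needed"

# Illustrative payloads (field subset only):
hit = {"found": True, "statut": "exact_match"}
miss = {"found": False, "lookupStatus": "not_found", "message": "..."}
print(interpret_comparison(hit))   # names agree after normalization
print(interpret_comparison(miss))  # lookup failed: not_found
```

Because the tool deliberately returns no business interpretation, any mapping from 'divergent_after_normalization' to an action (e.g. flagging a rebranding) belongs in the caller, as above.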

Parameters (JSON Schema)

  num_finess (required) — Exact FINESS number (9 digits).

Output Schema

  key (optional) — Key that was looked up (SIREN, num_finess, INSEE code, …).
  found (required)
  message (optional) — Actionable explanation when `found=false` (probable cause + remediation).
  lookupStatus (required)
Behavior 5/5

Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds substantial value by detailing the return format (discriminated union with 'found' and conditional 'statut'), explaining status values ('exact_match', 'divergent_after_normalization', 'rpps_absent'), and stating the behavior when num_finess is absent from FINESS DREES. This exceeds what annotations provide.

Conciseness 4/5

The description is well-structured with a clear purpose, usage hint, and bulletized status explanations. It front-loads the main purpose and adds necessary detail without fluff. Slightly verbose, but every sentence contributes value.

Completeness 5/5

Given the tool's complexity (comparing two data sources, multiple status conditions, discriminated output), the description covers all essential aspects: what it does, when it's useful, what it returns in each case, and what it avoids. The presence of an output schema further complements completeness.

Parameters 3/5

Schema coverage is 100% (parameter 'num_finess' described as a 9-digit exact number). The description does not add new semantic information about the parameter beyond what the schema already provides. The baseline score of 3 is appropriate.

Purpose 5/5

The description clearly states the tool compares the 'raison sociale' from FINESS DREES and RPPS for a given FINESS number. It distinguishes itself from sibling tools like 'etablissement_by_finess' and 'professionnel_by_rpps' by focusing on cross-reference comparison. The verb 'compare' and the specific data sources are explicit.

Usage Guidelines 4/5

The description provides context for usage: it explains that RPPS reflects rebrandings faster than FINESS DREES, and explicitly states that the tool does not interpret the divergence (leaving the decision to the caller). It also clarifies what the tool does NOT do (e.g., not saying who bought whom). However, it does not mention alternative tools or when not to use it.

data_freshness — A
Read-only

Returns the freshness of the data dumps ingested server-side: FINESS DREES (bimonthly), Annuaire Santé Ameli (weekly), RPPS / Annuaire Santé ANS (monthly). For each source: last_success_at ISO timestamp, last_success_row_count, last_attempt_at, last_attempt_status, staleness_days (days since the last successful ingestion), cadence_hint (expected publisher cadence).

Typical usage: before a territorial audit or a temporal analysis, the caller invokes this tool to know whether the data is up to date. staleness_days > 90 on FINESS = alert (last DREES sync missed), > 14 on Ameli = alert (weekly job broken), > 45 on RPPS = alert (monthly job broken).

LIVE sources (DINUM Recherche Entreprises, INSEE SIRENE V3.11, ANS FHIR live) are NOT listed here, since they have no ingestion cycle — their freshness is that of the upstream APIs (live, ~seconds).

Server cache: 5 minutes. Cost: 1 SELECT on ingest_log at worst (otherwise a cache hit).
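The alert thresholds above translate directly into a freshness check; a minimal sketch, assuming the tool's sources map is keyed by source name (the keys used here are illustrative):

```python
# Per-source alert thresholds from the description above (days).
STALENESS_ALERTS = {
    "finess_drees": 90,   # bimonthly dump: a value above means a missed DREES sync
    "ameli": 14,          # weekly job broken
    "rpps_ans": 45,       # monthly job broken
}

def stale_sources(freshness):
    """Return the sources whose staleness_days exceeds their alert threshold.

    `freshness` mimics the tool's return shape:
    {"sources": {name: {"staleness_days": n, ...}}} — key names are assumed.
    """
    alerts = []
    for name, info in freshness["sources"].items():
        limit = STALENESS_ALERTS.get(name)
        if limit is not None and info["staleness_days"] > limit:
            alerts.append(name)
    return alerts

sample = {"sources": {
    "finess_drees": {"staleness_days": 31},
    "ameli": {"staleness_days": 20},   # weekly job 20 days stale -> alert
    "rpps_ans": {"staleness_days": 12},
}}
print(stale_sources(sample))  # ['ameli']
```

Running such a check before a territorial audit (the typical usage above) lets an agent abort or warn instead of silently analyzing stale data.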

Parameters (JSON Schema)

  No parameters

Output Schema

  sources (required)
Behavior 5/5

Beyond annotations (readOnlyHint, openWorldHint), the description adds cache behavior (5-minute cache), cost (1 SELECT or cache hit), and notes that live sources are excluded. No contradictions with annotations.

Conciseness 4/5

The description is concise but slightly verbose. It lists fields and usage in a single paragraph, which is informative but could be structured more clearly with bullet points. Still efficient.

Completeness 5/5

For a tool with no parameters and no output schema provided, the description fully covers the return fields, usage context, and alert thresholds. It compensates for missing output schema details and is complete enough for an agent to use the tool correctly.

Parameters 4/5

No parameters exist (schema coverage 100%). The description does not need to explain parameters, and the baseline score of 4 applies due to zero parameters.

Purpose 5/5

The description clearly states the tool returns the freshness of ingested data dumps for specific sources (FINESS DREES, Annuaire Santé Ameli, RPPS/ANS). It lists the returned fields and explicitly excludes live sources, making its purpose distinct from sibling tools.

Usage Guidelines 4/5

Provides typical use cases (before a territorial audit or temporal analysis) and gives alert thresholds for staleness per source. However, it does not explicitly contrast with sibling tools or state when not to use it, though the exclusion of live sources implies a boundary.

densite_etablissements_sante — A
Read-only · Idempotent

Density of health establishments per 100,000 inhabitants in a department, by FINESS family. Cross-references FINESS DREES (count) and INSEE Melodi (municipal population PMUN, 2023 census).

Available families: labo (medical biology laboratories), pharmacie, ehpad, mco (short-stay medicine/surgery/obstetrics), ssr (follow-up care), psychiatrie, dialyse, imagerie, had (hospital-at-home), msp_cpts (health centers + CPTS), handicap_enfants, handicap_adultes, addictologie, pmi, prevention_sante, etc. The family is required — without the filter, the ratio would mix labs / hospitals / EHPADs and be meaningless.

compare_national: true adds the density for France as a whole (overseas departments included) plus the deviation in %. Cost: 1 count_finess RPC + 1 Melodi call (cacheable).
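The reported ratio and the compare_national deviation follow from two one-line formulas; a sketch with made-up counts and populations (not real FINESS/Melodi figures):

```python
def density_per_100k(count, population):
    """Establishments per 100,000 inhabitants — the ratio this tool reports."""
    return count / population * 100_000

def national_deviation_pct(dept_density, national_density):
    """Relative deviation in % vs the France-wide density (compare_national)."""
    return (dept_density - national_density) / national_density * 100

# Illustrative numbers only:
dept = density_per_100k(42, 700_000)          # 42 labs in a 700k-inhabitant dept
france = density_per_100k(4_000, 68_000_000)  # hypothetical France-wide figures
print(round(dept, 1), round(national_deviation_pct(dept, france), 1))  # 6.0 2.0
```

A positive deviation reads as over-served relative to France, a negative one as under-served, matching the sign convention used by the sibling professionals-density tool.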

Parameters (JSON Schema)

  famille (required) — FINESS family to count (labo, pharmacie, ehpad, mco, ssr, psychiatrie, dialyse, imagerie, had, msp_cpts, handicap_enfants, handicap_adultes, addictologie, pmi, prevention_sante, etc.).
  code_dept (required) — INSEE code of the department (2-3 characters, e.g. '75', '13', '2A', '971').
  compare_national (optional) — Adds the France-wide calculation plus the relative deviation in % (recommended for 'under-served'/'over-served').
Behavior 5/5

Annotations already indicate read-only, idempotent, safe. The description adds valuable context: data sources (FINESS DREES, INSEE Melodi), year (2023), and the cost of the compare_national parameter.

Conciseness 5/5

The description is succinct: one sentence for purpose, one for families, one for why the filter is needed, and one for the optional parameter. No redundant information.

Completeness 5/5

For a tool with 3 parameters, no output schema, and comprehensive annotations, the description fully covers what the tool does, why parameters are needed, and behavioral implications like cost.

Parameters 5/5

Schema coverage is 100%, but the description enriches each parameter: it lists all families for 'famille', explains that 'compare_national' adds national density and deviation, and clarifies the behavior without a filter.

Purpose 5/5

The description clearly states the tool computes the density of health establishments per 100,000 population by FINESS family. It lists families and is distinguished from the sibling tool 'densite_professionnels_sante', which focuses on professionals.

Usage Guidelines 4/5

It explains why a family is required (otherwise the ratio is meaningless) and describes the optional compare_national parameter. However, it does not explicitly state when to use this vs. the professional-density sibling.

densite_professionnels_sante — A
Read-only · Idempotent

Density of health professionals per 100,000 inhabitants in a department. Default methodology follows DREES: physicians (profession_code='10') in regular activity (liberal + salaried + mixed, mode_exercice codes L, S, M), excluding students. Cross-references RPPS (count) and INSEE Melodi (municipal population PMUN, 2023 census).

Uses: density of cardiologists / dermatologists / liberal nurses / pharmacists / midwives per department. For a medical specialty, pass savoir_faire_code (e.g. SM02 cardiology). For a profession other than physician, pass profession_code (60 nurse, 21 pharmacist, etc.). For liberal practitioners only, pass mode_exercice_codes: ['L'].

compare_national: true adds the France-wide density (overseas departments included) and the deviation in % (positive = over-served vs France, negative = under-served). Cost: 1 extra count_rpps RPC + 1 Melodi call (cacheable).

Returns NO business interpretation (no automatic "medical desert" threshold). The caller applies its own grid.

By default, only returns professionals in the Civil category (C) — private law: liberal, private salaried, contract hospital staff, ~97% of the base. Pass include_agents_publics: true to also include Public agents (M) — state civil servants + local authorities + SSA military physicians, ~0.3% (tenured hospital practitioners, ARS medical inspectors, CNAM advisory physicians, national-education school physicians, PMI physicians). Pass include_etudiants: true to also include Students (E) — residents, externs, nursing/midwifery students, ~2.5%. Nomenclature source: https://mos.esante.gouv.fr/NOS/TRE_R09-CategorieProfessionnelle/.

Source: Annuaire Santé, Agence du Numérique en Santé (ANS) — Licence Ouverte v2.0
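The parameter interactions above (profession, specialty, practice mode, category flags) can be captured as plain argument dicts; a minimal sketch using the codes documented in this listing — the helper names are illustrative, and no MCP call is actually made:

```python
# Request payloads for densite_professionnels_sante, built from the ANS codes
# documented above. Sending them requires an MCP client, out of scope here.

def liberal_cardiologists(code_dept):
    """Self-employed (liberal-only) cardiologists in a department."""
    return {
        "code_dept": code_dept,
        "profession_code": "10",       # Médecin (also the default)
        "savoir_faire_code": "SM02",   # cardiology specialty
        "mode_exercice_codes": ["L"],  # liberal only, vs default ['L','S','M']
        "compare_national": True,      # add France-wide density + deviation
    }

def nurses_regular_activity(code_dept):
    """Nurses in regular activity (liberal + salaried + mixed, the default)."""
    return {"code_dept": code_dept, "profession_code": "60"}

print(liberal_cardiologists("13")["savoir_faire_code"])  # SM02
```

Note that savoir_faire_code is mostly meaningful with profession_code='10', per the parameter documentation above.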

Parameters (JSON Schema)

  code_dept (required) — INSEE code of the department (2-3 characters, e.g. '75', '13', '2A', '971').
  profession_code (optional) — ANS profession code (TRE_R94). Default '10' (Médecin). E.g. '60' Infirmier (nurse), '21' Pharmacien (pharmacist), '50' Sage-femme (midwife), '40' Chirurgien-dentiste (dental surgeon), '70' Masseur-kinésithérapeute (physiotherapist).
  compare_national (optional) — Adds the France-wide calculation plus the relative deviation in % (recommended for qualifying 'under-served'/'over-served').
  include_etudiants (optional)
  savoir_faire_code (optional) — Specialty code (savoir_faire). Mostly relevant for profession_code=10 (physician). E.g. 'SM02' Cardiology, 'SM26' Dermatology-venereology. See lister_specialites_medicales (V0.8 Phase 4).
  mode_exercice_codes (optional) — ANS mode_exercice codes to include. Default ['L','S','M'] (liberal + salaried + mixed = DREES regular activity). Pass ['L'] for liberal only. ANS mode_exercice codes: L liberal, S salaried, M mixed, R substitute, B volunteer, A other.
  include_agents_publics (optional)
Behavior 4/5

The description adds significant behavioral context beyond annotations: it details the DREES methodology, data sources (RPPS, INSEE Melodi), cost in RPC calls, and clarifies that it returns no interpretation. Consistent with readOnlyHint, idempotentHint, destructiveHint.

Conciseness 4/5

The description is comprehensive but slightly verbose; however, every sentence adds value. It is front-loaded with purpose and organized logically, though a more structured format (bullets) could improve readability.

Completeness 5/5

Given the tool's complexity (7 parameters, no output schema), the description is very complete: it explains methodology, sources, cost, caveats (no interpretation), and parameter interactions. It adequately substitutes for the missing output schema by describing the result (density, optional national comparison).

Parameters 5/5

The description enriches all parameters with context, default values, examples (e.g., 'SM02' for cardiology), and explanations of categories (include_agents_publics, include_etudiants). For the two parameters missing schema descriptions, the tool text compensates fully.

Purpose 5/5

The description clearly states the tool computes the density of health professionals per 100,000 inhabitants in a department, a specific verb + resource. It is distinguished from siblings by focusing on density per department rather than listings or raw counts.

Usage Guidelines 4/5

The description gives explicit usage examples (cardiologists, dermatologists, etc.) and instructions for adjusting parameters by profession, specialty, or mode. It references an alternative tool for listing specialties, but does not explicitly state when not to use this tool.

entreprise_by_siren — A
Read-only · Idempotent

Retrieves the details of a French company by its SIREN (9 digits): business name, NAF code, historical financials, directors, establishments. Source: DINUM Recherche Entreprises.

Return format: a LookupResult object discriminated by found.

  • found: true → the company is returned flattened (fields siren, nomComplet, etablissements, enrichmentStatus, …)

  • found: false → { found: false, key, lookupStatus: 'not_found' | 'ambiguous', message }. not_found: SIREN not indexed by DINUM (often partial INSEE diffusion — the company may still exist in SIRENE). ambiguous: an API regression worth reporting.

⚠️ When found: true, the etablissements list may be truncated. The nombreEtablissements field (counted from SIRENE) reflects the real total. Read enrichmentStatus to know whether the list is complete:

  • success: etablissements contains every site

  • partial: sites missing (multi-department, or NAF different from the head office) — see enrichmentWarning

  • failed: enrichment failed (rate limit, API outage) — only the head office is listed

  • not_attempted: single-site company, or missing SIRENE data

For exhaustive multi-department enumeration, use entreprises_in_radius by geographic zone. Cost: 1 or 2 DINUM API calls per invocation (effective rate limit ~1 req/s).
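A caller can act on the enrichmentStatus contract above before trusting the etablissements list; a minimal sketch, assuming the result arrives as a parsed JSON dict (only 'success' is treated as exhaustive here, since 'not_attempted' may also mean missing SIRENE data):

```python
def establishments_complete(result):
    """True when the etablissements list of an entreprise_by_siren hit can be
    trusted as exhaustive, per the enrichmentStatus contract above."""
    if not result["found"]:
        raise ValueError(result.get("message", "lookup failed"))
    # 'partial' and 'failed' are explicitly incomplete; 'not_attempted' is
    # treated as incomplete too, conservatively (could be missing SIRENE data).
    return result["enrichmentStatus"] == "success"

# Illustrative payload (field subset only):
hit = {
    "found": True,
    "enrichmentStatus": "partial",  # multi-department case: sites missing
    "nombreEtablissements": 12,     # real total, counted from SIRENE
    "etablissements": [{"siret": "00000000000001"}],  # truncated list
}
if not establishments_complete(hit):
    # fall back to geographic enumeration (entreprises_in_radius)
    print(f"only {len(hit['etablissements'])}/{hit['nombreEtablissements']} sites listed")
```

Comparing len(etablissements) against nombreEtablissements, as above, is the concrete signal that the fallback tool is needed.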

Parameters (JSON Schema)

  siren (required) — Exact SIREN, 9 digits.

Output Schema

  key (optional) — Key that was looked up (SIREN, num_finess, INSEE code, …).
  found (required)
  message (optional) — Actionable explanation when `found=false` (probable cause + remediation).
  lookupStatus (required)
Behavior 5/5

Annotations indicate readOnlyHint, idempotentHint, and no destruction. The description adds rich behavioral details: enrichment status meanings (success, partial, failed, not_attempted), potential truncation of etablissements, the cost of 1-2 API calls, and the ~1 req/s rate limit. No contradiction with annotations.

Conciseness 4/5

The description is well-structured and front-loaded with the core purpose. It includes necessary details (return format, enrichment logic, alternatives) but is slightly verbose; however, every sentence adds value, earning a 4.

Completeness 5/5

Given the tool's complexity (discriminated union output, enrichment status, edge cases), the description covers all necessary context: return format, possible lookupStatus values, enrichment behavior, the alternative tool, and API constraints. The output schema exists, but the description adds the discriminated-union details, making it complete.

Parameters 3/5

Schema description coverage is 100% with a clear 'Exact SIREN, 9 digits.' The description repeats the format but adds no new meaning beyond the schema, so the baseline of 3 is appropriate.

Purpose 5/5

The description clearly states that it retrieves the details of a French company by its SIREN, specifying the verb and resource. It is distinguished from siblings by explicitly recommending 'entreprises_in_radius' for exhaustive multi-department enumeration.

Usage Guidelines 5/5

The description provides explicit when-to-use (valid SIREN for company details) and when-not-to-use (exhaustive multi-department enumeration; use the alternative). It also explains edge cases like not_found due to INSEE diffusion and notes rate limits, giving clear context for decision making.

entreprises_in_radius — A
Read-only · Idempotent

Searches French companies with NAF, postal code, department, or geographic radius filters. Covers all sectors (health via NAF 8690B, 4773Z, 8710A, 8621Z, etc.). Source: DINUM Recherche Entreprises (SIRENE + RNE). Returns revenue, directors, headcount brackets, and creation dates.

DINUM API limitation: the naf + lat/lon/radiusKm combination is not supported natively (lat/lon require a textual q). The server then applies a fallback: reverseGeocode of the point → search by department → server-side Haversine filtering. Results are limited to the first 25 companies of that NAF code in the department (API limit).
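The server-side fallback described above (department search, then a great-circle cut) can be reproduced client-side; a minimal sketch, assuming each company record carries 'lat'/'lon' keys (field names are illustrative):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km — the post-filter the server applies when
    the DINUM API cannot combine naf with lat/lon/radiusKm."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius 6371 km

def filter_in_radius(entreprises, lat, lon, radius_km):
    """Keep only companies within radius_km of (lat, lon) — mirrors the
    fallback: department search first, then this Haversine cut."""
    return [e for e in entreprises
            if haversine_km(lat, lon, e["lat"], e["lon"]) <= radius_km]

# Two sample points: near Versailles (~18 km from central Paris) and Lyon (~390 km):
companies = [{"nom": "A", "lat": 48.8049, "lon": 2.1204},
             {"nom": "B", "lat": 45.7640, "lon": 4.8357}]
print([e["nom"] for e in filter_in_radius(companies, 48.8566, 2.3522, 50)])  # ['A']
```

Because the upstream department query caps at 25 rows, this post-filter can only narrow those 25 — it cannot recover companies the cap already excluded, which is worth keeping in mind when interpreting sparse results.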

Parameters (JSON Schema)

  q (optional) — Free-text search (business name, director…).
  lat (optional) — Latitude of the search circle's center.
  lon (optional) — Longitude of the search circle's center.
  naf (optional) — Main NAF code (e.g. '8690B' = labs, '4773Z' = pharmacies, '8710A' = EHPADs, '8621Z' = GPs).
  page (optional) — Page (1-indexed).
  perPage (optional) — Results per page (1-25, default 10).
  radiusKm (optional) — Radius in km (1-50).
  codePostal (optional) — Alternative filter: exact postal code.
  departement (optional) — Alternative filter: department code.

Output Schema

  page (required)
  total (required) — Total number of companies matching the query on the DINUM side.
  perPage (required)
  fallback (optional) — Present only if the server applied the reverseGeocode + Haversine fallback.
  totalPages (required)
  entreprises (required) — Companies returned (SIREN, nomComplet, NAF, finances, etablissements).
  truncated_by_per_page (optional) — true if the Haversine post-filter truncated results to respect `perPage`.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark the tool as read-only, idempotent, and non-destructive. The description adds valuable behavioral context: the API fallback behavior and result limits (25 per NAF in department). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: first sentence states purpose, second adds coverage and source, third explains the critical limitation. Every sentence earns its place; no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, output schema exists), the description covers the main purpose, data source, and a key limitation. Pagination details are implicit (page/perPage), but the output schema provides return structure. Adequately complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds no additional parameter-level details beyond what the schema provides, but it does give context on NAF codes (e.g., health sectors). This is acceptable but not above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for French companies with filters (NAF, postal code, department, radius), covering all sectors. It is distinct from sibling tools like professionnels_in_radius or etablissements_finess_in_radius which target health professionals or facilities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance by detailing a limitation: the combination naf+lat/lon/radiusKm is not supported natively and triggers a fallback with results limited to 25 companies. This informs when to use the tool and its constraints, though no explicit when-to-use or alternatives are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

etablissement_by_finess (A)
Read-only · Idempotent

Retrieves the full details of a healthcare establishment by its FINESS number (9 digits): legal name, category + family, full address (street + postal code + city + INSEE code + department), GPS coordinates, phone. Returns a LookupResult object discriminated by found. found: true → flat FINESS fields. found: false → { found: false, key, lookupStatus: 'not_found', message }. The DREES registry lags the field by 1-2 months: for emerging structures (recent CPTS, MSP under accreditation), cross-check with ARS / Service Public. Source: FINESS / DREES. Note: the email field is always null (not exposed by public FINESS).

Parameters | JSON Schema
Name | Required | Description | Default
num_finess | Yes | Exact FINESS number (9 digits).
include_freshness | No | If true, adds a `data_freshness` field to the payload (inside `query_metadata` if present, otherwise at the root) listing the last successful ingestion per source (FINESS, Ameli, RPPS) with `staleness_days`. Opt-in to avoid bloating payloads by default. Cached 5 min server-side; negligible cost.

Output Schema

Parameters | JSON Schema
Name | Required | Description
key | No | Key looked up (SIREN, num_finess, INSEE code, …).
found | Yes |
message | No | Actionable explanation when `found=false` (probable cause + remediation).
lookupStatus | Yes |
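The LookupResult shape above is a discriminated union on `found`. A caller can branch on the discriminant before touching any other field; in this sketch the success-side field names (`raisonSociale`, `numFiness`) are assumptions for illustration:

```python
def summarize_lookup(result: dict) -> str:
    """Branch on the `found` discriminant of a LookupResult payload."""
    if result["found"]:
        # found: true → flat establishment fields sit next to the discriminant
        return f"{result.get('raisonSociale', '?')} ({result.get('numFiness', '?')})"
    # found: false → key, lookupStatus and an actionable message are guaranteed
    return f"lookup {result['key']} failed: {result['lookupStatus']} - {result['message']}"
```

Only `found`, `key`, `lookupStatus`, and `message` are taken from the documented schema; everything else is hypothetical.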
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, destructiveHint=false, idempotentHint=true. The description adds behavioral details: it returns a LookupResult discriminated by found, explains both found=true and false responses, and mentions the data-freshness opt-in, cache behavior, and the always-null email field. No contradiction.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is comprehensive but slightly verbose (e.g., listing all fields). It is well front-loaded with the main action, but could be trimmed slightly without losing clarity. Still, it is well structured and informative.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a lookup with a discriminated result, a data-freshness opt-in, and data-latency considerations, the description covers all necessary aspects. The output schema exists, so return values are not over-explained. It provides complete context for an agent to use the tool correctly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds meaning beyond the schema: num_finess is specified as exactly 9 digits, and include_freshness explains payload placement and cache behavior. This enriches understanding beyond the schema descriptions.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves the complete details of a healthcare establishment by its FINESS number, listing specific fields (legal name, category, address, GPS, phone). It is distinct from siblings in targeting a single FINESS number, while sibling tools handle categories or radius searches.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use when you have a specific FINESS number, but does not explicitly state when to use this tool versus alternatives. It provides caveats about data latency and recommends cross-checking, but lacks direct guidance on when not to use it or a comparison with sibling tools.

etablissement_by_siret (A)
Read-only · Idempotent

Retrieves the details of an establishment by its SIRET (14 digits) via the INSEE SIRENE API V3.11: legal unit name, trade name, establishment NAF code, creation/closure dates, active/closed administrative status, full address, headcount bracket. Source: SIRENE INSEE V3.11 (api.insee.fr).

Return format: a LookupResult object discriminated by found.

  • found: true → flat establishment fields (siret, siren, actif, dateFermeture, enseigne, adresse, …)

  • found: false → { found: false, key, lookupStatus: 'not_found', message }. Typical cases: INSEE_SIRENE_API_KEY not configured server-side (explicit message), SIRET absent from SIRENE, partial INSEE diffusion.

⚠️ Difference from entreprise_by_siren: this tool returns ONE specific establishment (a single site), whereas entreprise_by_siren returns the legal unit plus its list of establishments. To detect a closed SIRET still listed as active on the FINESS side, read actif: false + dateFermeture.

No coordinates: the INSEE /siret/<siret> endpoint does not return GPS coordinates. To geolocate, cross-reference with geocode_adresse on the caller side or use entreprises_in_radius.

INSEE rate limit: 30 req/min (retry-after handled server-side).
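The 30 req/min quota is already retried server-side, but a client issuing many SIRET lookups may still want its own pacing. A minimal sliding-window limiter sketch (illustrative, not part of the server):

```python
import time

class MinuteRateLimiter:
    """Allow at most `max_calls` calls per sliding 60-second window."""

    def __init__(self, max_calls: int = 30):
        self.max_calls = max_calls
        self.calls: list[float] = []  # monotonic timestamps of recent calls

    def acquire(self) -> None:
        now = time.monotonic()
        # Drop timestamps that have left the 60 s window.
        self.calls = [t for t in self.calls if now - t < 60.0]
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call in the window expires.
            time.sleep(60.0 - (now - self.calls[0]))
        self.calls.append(time.monotonic())
```

A batch job would call `limiter.acquire()` before each SIRET request.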

Parameters | JSON Schema
Name | Required | Description | Default
siret | Yes | Exact SIRET, 14 digits.

Output Schema

Parameters | JSON Schema
Name | Required | Description
key | No | Key looked up (SIREN, num_finess, INSEE code, …).
found | Yes |
message | No | Actionable explanation when `found=false` (probable cause + remediation).
lookupStatus | Yes |
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description goes beyond the annotations (readOnlyHint, etc.) by detailing the return format (LookupResult with found true/false cases), error cases (missing API key, SIRET not found, partial INSEE diffusion), and behavioral constraints (no GPS coordinates, rate limit).

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well structured with clear sections, an emoji warning, and bullet formatting. Though lengthy, every sentence adds value, and it is front-loaded with the main purpose.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists, the description explains both cases (found / not found), covers edge cases (missing API key, partial diffusion), and provides a cross-tool comparison. Fully adequate for the context.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'siret' is fully covered by the schema (100% description coverage), and the description adds value by restating the 14-digit format and reinforcing that it is used for lookup. No additional parameter documentation is needed.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description pairs a specific verb with its resource (retrieving an establishment by its SIRET), lists the data fields (legal name, NAF, dates, etc.), and distinguishes itself from the sibling 'entreprise_by_siren' by clarifying that it returns a single site rather than the legal unit with its list of establishments.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool versus 'entreprise_by_siren', and points to 'geocode_adresse' for coordinates. Provides guidance on interpreting 'actif: false' for FINESS cases, and notes the rate limit (30 req/min) and its server-side handling.

etablissements_finess_by_categorie (A)
Read-only · Idempotent

Lists FINESS establishments by family, with an optional department or commune filter. No radius; intended for exhaustive enumeration of an administrative area. 24 families available: mco, ssr, sld, had, psychiatrie, dialyse, ambulatoire, labo, imagerie, pharmacie, msp_cpts, ehpad, residence_autonomie, senior_accompagnement, ssiad, aide_domicile, handicap_enfants, handicap_adultes, addictologie, enfance_protection, pmi, hebergement_social, prevention_sante, groupement. Source: FINESS / DREES. Note: the email field is always null (not exposed by public FINESS).

Parameters | JSON Schema
Name | Required | Description | Default
limit | No | Maximum number of results (1-500, default 100).
categorie | Yes | FINESS family to search (24 available values, see enum).
code_insee | No | INSEE commune code (5 characters). Optional.
departement | No | INSEE department code (e.g. '75', '2A', '2B', '971'). Mainland: 2 characters (Corsica '2A'/'2B', not '20'); overseas departments and territories: 3 characters. Optional.
include_freshness | No | If true, adds a `data_freshness` field to the payload (inside `query_metadata` if present, otherwise at the root) listing the last successful ingestion per source (FINESS, Ameli, RPPS) with `staleness_days`. Opt-in to avoid bloating payloads by default. Cached 5 min server-side; negligible cost.

Output Schema

Parameters | JSON Schema
Name | Required | Description
count | Yes | Number of entries returned in `results` (post-truncation).
results | Yes | Business entries (tool-specific shape, see the tool description).
freshness | No | Source freshness (present if `include_freshness: true`).
truncated | No | true if the real total exceeds `limit` (re-paginate via `offset` if supported). Optional on exhaustive listing tools (lister_*).
query_metadata | No | Query metadata (radius_km, departement, applied filters, …).
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnly, openWorld, idempotent, and non-destructive. The description adds value by noting that the 'email' field is always null due to public FINESS limitations and by explaining the 'include_freshness' option's caching and overhead. This provides behavioral context beyond the annotations.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences plus a list of categories and notes. Key information is front-loaded (what the tool does and when to use it). The list is necessary but slightly verbose. Overall efficient and well structured.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters, a 24-value enum, optional filters, and a data-freshness option, the description covers main usage, limitations, and optional behavior. An output schema exists, so return-value details are not needed. A minor gap: no mention of default ordering or pagination behavior.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 5 parameters have schema descriptions (100% coverage). The description adds minimal extra meaning, such as listing all enum values (redundant) and explaining the 'include_freshness' parameter's caching behavior. Baseline 3 is appropriate since the schema already documents the parameters.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists FINESS establishments by category, with optional department/municipality filters. It distinguishes itself from radius-based tools by explicitly stating that there is no radius and that it is meant for exhaustive enumeration of an administrative area. This makes the purpose specific and unambiguous.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a clear use case: exhaustive enumeration of an administrative area, in contrast with radius-based tools. It lists the 24 available categories. However, it does not explicitly mention when to use sibling tools like 'etablissements_finess_in_radius' or 'professionnels_in_radius' as alternatives.

etablissements_finess_in_radius (A)
Read-only · Idempotent

Searches for FINESS healthcare establishments within a geographic radius (PostGIS ST_DWithin). Filterable by family. 24 available values: mco, ssr, sld, had, psychiatrie, dialyse, ambulatoire, labo, imagerie, pharmacie, msp_cpts, ehpad, residence_autonomie, senior_accompagnement, ssiad, aide_domicile, handicap_enfants, handicap_adultes, addictologie, enfance_protection, pmi, hebergement_social, prevention_sante, groupement. Source: FINESS / DREES (CSV dump ingested locally). Note: the email field is always null (not exposed by public FINESS).

Parameters | JSON Schema
Name | Required | Description | Default
lat | Yes | Latitude of the center (WGS84).
lon | Yes | Longitude of the center (WGS84).
limit | No | Maximum number of results (1-500, default 100).
familles | No | FINESS families to include (24 available values, see enum). If omitted, all categories.
radius_km | No | Radius in km (0.1-50, default 5).
include_freshness | No | If true, adds a `data_freshness` field to the payload (inside `query_metadata` if present, otherwise at the root) listing the last successful ingestion per source (FINESS, Ameli, RPPS) with `staleness_days`. Opt-in to avoid bloating payloads by default. Cached 5 min server-side; negligible cost.

Output Schema

Parameters | JSON Schema
Name | Required | Description
count | Yes | Number of entries returned in `results` (post-truncation).
results | Yes | Business entries (tool-specific shape, see the tool description).
freshness | No | Source freshness (present if `include_freshness: true`).
truncated | No | true if the real total exceeds `limit` (re-paginate via `offset` if supported). Optional on exhaustive listing tools (lister_*).
query_metadata | No | Query metadata (radius_km, departement, applied filters, …).
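The radius search maps onto a PostGIS `ST_DWithin` predicate over a geography column, with the radius converted to meters. A hedged sketch of what such a query could look like; the table and column names are assumptions, not taken from the server:

```python
def radius_query(with_familles: bool) -> str:
    """Build a parameterized PostGIS radius query (psycopg named-placeholder style).

    Table and column names here are illustrative assumptions."""
    sql = (
        "SELECT num_finess, raison_sociale FROM etablissements_finess "
        "WHERE ST_DWithin(geom::geography, "
        "ST_SetSRID(ST_MakePoint(%(lon)s, %(lat)s), 4326)::geography, %(radius_m)s)"
    )
    if with_familles:
        # Only constrain families when the caller supplied the optional filter.
        sql += " AND famille = ANY(%(familles)s)"
    return sql + " LIMIT %(limit)s"
```

A caller would bind `lat`, `lon`, `radius_m = radius_km * 1000` (ST_DWithin on geography takes meters), `familles`, and `limit` when executing the statement.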
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint=false. The description adds useful context: the data source (FINESS/DREES), the use of PostGIS ST_DWithin, and the fact that the email field is always null. This goes beyond the annotations, though it does not describe rate limits or cost.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded with the core action. It includes the list of 24 family values, which is necessary for usability but could be slightly trimmed. Overall, every sentence serves a purpose without redundancy.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, required output schema) and rich schema descriptions, the description adds meaningful context: the data source, the null-email behavior, and a hint about caching. The output schema exists, so not explaining return values is acceptable. Minor omission: no mention of performance or typical use cases.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with each parameter well documented in the schema. The main description lists the 24 family values (also in the schema enum) but does not add new semantics beyond what the schema provides. Baseline 3 is appropriate.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: searching for FINESS healthcare establishments within a geographic radius using PostGIS. It specifies the resource (FINESS establishments), the action (radius search), and differentiates itself from sibling tools like etablissement_by_finess (exact ID) and etablissements_finess_by_categorie (category-only).

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for radius-based searches but does not explicitly state when to use this tool versus alternatives, or when not to use it. It lists the families and mentions the null email field but lacks exclusion criteria or comparative guidance. Sibling tools exist but are not referenced.

finess_sirene_coverage_in_radius (A)
Read-only · Idempotent

Compares the coverage of the FINESS DREES registry (accredited physical sites: medical labs, pharmacies, etc.) against the SIRENE DINUM registry (active physical SIRETs with the target NAF) within a geographic radius. Metric: ratio of FINESS sites to SIRENE SIRETs. Useful for detecting FINESS over-declaration (sites still listed whose SIRETs are closed) or DREES under-declaration (SIRENE sites not FINESS-accredited). Includes an explicit methodology plus caveats. Source: FINESS DREES + DINUM Recherche Entreprises + SIRENE INSEE.

Parameters | JSON Schema
Name | Required | Description | Default
lat | Yes | WGS84 latitude of the zone center.
lon | Yes | WGS84 longitude of the zone center.
naf | Yes | SIRENE NAF code to compare (e.g. '8690B' medical analysis labs, '4773Z' pharmacies, '8621Z' general practice).
familles | No | FINESS families to include on the DREES side (default: all). Values: mco, ssr, sld, had, psychiatrie, dialyse, ambulatoire, labo, imagerie, pharmacie, msp_cpts, ehpad, residence_autonomie, senior_accompagnement, ssiad, aide_domicile, handicap_enfants, handicap_adultes, addictologie, enfance_protection, pmi, hebergement_social, prevention_sante, groupement.
radius_km | No | Zone radius in km (0.1-50, default 5).
max_unites_legales | No | Maximum number of DINUM legal units to expand (1-25, default 10). Beyond that: truncated_unites_legales=true.

Output Schema

Parameters | JSON Schema
Name | Required | Description
caveats | No | Explicit methodological limitations (zero-overclaim discipline).
methodology | Yes | LLM-friendly description of the algorithm applied.
finess_sites | Yes | Number of FINESS sites within the radius (DREES registry).
matched_count | No | Number of greedy Dice matches ≥ 0.7.
sirene_sirets | Yes | Number of active physical SIRETs with the target NAF within the radius (DINUM/SIRENE).
coverage_ratio | Yes | matched / finess_sites ∈ [0, 1]. null if `sirene_sirets === 0` (rural area + rare NAF → ratio not computable).
matched_samples | No |
finess_only_count | No |
sirene_only_count | No |
finess_only_samples | No |
sirene_only_samples | No |
truncated_unites_legales | No | true if the `maxUnitesLegales` cap was reached before full enumeration.
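The `matched_count` and `coverage_ratio` fields rest on a greedy matching step with a Dice similarity threshold of 0.7. A sketch of that computation, assuming the matching key is the establishment name (the server's exact keying may differ):

```python
from collections import Counter

def dice_bigrams(a: str, b: str) -> float:
    """Sørensen-Dice coefficient over character bigrams (multiset intersection)."""
    ba = Counter(a[i:i + 2] for i in range(len(a) - 1))
    bb = Counter(b[i:i + 2] for i in range(len(b) - 1))
    overlap = sum((ba & bb).values())
    total = sum(ba.values()) + sum(bb.values())
    return 2 * overlap / total if total else 0.0

def coverage(finess_names: list[str], sirene_names: list[str], threshold: float = 0.7):
    """Greedy one-to-one matching, then matched / finess_sites.

    Returns (matched_count, coverage_ratio); the ratio is None (null)
    when there are no SIRENE SIRETs or no FINESS sites to divide by."""
    remaining = list(sirene_names)
    matched = 0
    for name in finess_names:
        best = max(remaining, key=lambda s: dice_bigrams(name, s), default=None)
        if best is not None and dice_bigrams(name, best) >= threshold:
            matched += 1
            remaining.remove(best)  # one-to-one: consume the matched SIRET
    if not sirene_names or not finess_names:
        return matched, None
    return matched, matched / len(finess_names)
```

The `None` branch mirrors the documented null `coverage_ratio` when `sirene_sirets === 0`.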
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, idempotentHint=true, openWorldHint=true, and destructiveHint=false. The description adds important behaviors: the computation methodology, the inclusion of caveats, and the sources (FINESS DREES + DINUM + SIRENE INSEE). However, it does not specify how limits are handled (such as the ordering of legal units beyond max_unites_legales) or the response format. With rich annotations, the description complements them well but could be more explicit about behavior when the cap is reached.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively concise (about 80 words) and well structured: it opens with the main action, then defines the metric and the use cases, and mentions the sources and caveats. Each sentence carries useful information. It could be trimmed slightly without losing information, but it remains effective. No repetition or filler.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The context is very complete: the description covers the goal, the metric, the use cases, the methodology, the sources, and the caveats. An output schema likely documents the response structure, which lightens the description. Given the tool's complexity (comparing two registries), the description gives an agent everything needed to decide whether to use it.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%: every parameter has a description. The tool description adds no parameter-specific information beyond what is already in the schema. It mentions example NAF codes and the families, but that remains general. Per the rule, with coverage above 80%, the baseline is 3. The description does not add significant value on parameter semantics.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action: compare the coverage of the FINESS and SIRENE registries within a geographic radius, with a precise metric (ratio of FINESS sites to SIRENE SIRETs) and the use cases (detecting over- or under-declaration). It distinguishes itself effectively from sibling tools such as 'etablissements_finess_in_radius' or 'entreprises_in_radius' by specifying the cross-registry comparison.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives a clear usage context (detecting FINESS over-declaration or DREES under-declaration), but does not explicitly say when not to use the tool, nor name alternatives among its siblings (e.g. 'reconcilier_finess_sirene', which could be better suited to point-by-point reconciliation). The context is sufficient for an agent, but an explicit exclusion would strengthen the score.

geocode_adresse (A)
Read-only · Idempotent

Geocodes a French address into GPS coordinates. Source: IGN Géoplateforme (data.geopf.fr). Street-number precision.

Parameters | JSON Schema
Name | Required | Description | Default
adresse | Yes | Full address to geocode.
codePostal | No | Optional: restrict the result to a postal code to disambiguate.
codeCommune | No | Optional: restrict to the INSEE commune code.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate safety, read-only, and idempotent behavior. The description adds context about the source data and precision, with no contradictions.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. The purpose is front-loaded; the source and precision follow efficiently.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple geocoding tool with 3 parameters and no output schema. It could mention the return format, but that is not critical.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the descriptions are clear. The description adds only minor context about precision beyond the schema; baseline 3 is appropriate.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states that it geocodes a French address to GPS coordinates, with the source and precision level. Distinguishes itself from the sibling 'reverse_geocode'.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Describes what it does and its source, but does not explicitly state when to use it over alternatives like reverse_geocode or the commune tools.

get_commune_by_code (A)
Read-only · Idempotent

Retrieves a commune by its INSEE code. Returns a LookupResult object discriminated by found. found: true → flat commune fields (nom, codesPostaux, centre…). found: false → { found: false, key, lookupStatus: 'not_found', message } pointing to autocomplete_commune for disambiguation.

Parameters | JSON Schema
Name | Required | Description | Default
code | Yes | INSEE code (5 characters).

Output Schema

Parameters | JSON Schema
Name | Required | Description
key | No | Key looked up (SIREN, num_finess, INSEE code, …).
found | Yes |
message | No | Actionable explanation when `found=false` (probable cause + remediation).
lookupStatus | Yes |
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide safety and idempotency hints. The description adds significant behavioral detail about the discriminated return type (found vs not_found) and the fields in each case, which goes beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no waste. First sentence states purpose, second explains return shape. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple lookup tool with one parameter, the description covers purpose, input, and return behavior (including error case). It is complete given the presence of an output schema and good annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description of the 'code' parameter as 'Code INSEE (5 caractères).' The description adds no extra meaning beyond that, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves a commune by INSEE code with a specific verb and resource. It distinguishes from sibling 'autocomplete_commune' by directing disambiguation there on not_found.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states to use this tool when you have an INSEE code, and on not_found it suggests using autocomplete_commune for disambiguation. This provides clear when-to-use and an alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

historique_etablissement (grade A)
Read-only · Idempotent

Reconstitue la timeline complète d'un établissement de santé (ouvertures, fermetures, changements de NAF/enseigne) en croisant FINESS DREES ↔ resolver SIRET (RPPS + DINUM) ↔ SIRENE INSEE V3.11. Lit les periodesEtablissement complètes pour chaque SIRET candidat.

V0.7.0 : SIRET candidats élargis via le resolver — inclut désormais les SIRET fermés du SIREN parent qui matchent l'adresse FINESS (invisibles côté RPPS seul). Permet de tracer la fermeture exacte d'un site même quand FINESS le liste encore actif.

Usage typique :

  • Tracer l'historique d'un site après une fusion-acquisition

  • Identifier la date de fermeture exacte d'un SIRET encore listé actif côté FINESS

  • Comprendre une cascade de rebrandings via les changements de enseigne1Etablissement au fil des périodes

Format : objet LookupResult. Quand found: true, retourne finess (vue DREES synthétique) + siret_timelines (1 entrée par SIRET candidat avec periodes chronologiques).

Coût : 1 RPC FINESS + 1 SELECT rpps + N appels DINUM + N appels INSEE en parallèle (N ≤ 5 typiquement). Pas de cache.

Parameters (JSON Schema):
  num_finess (required) — Numéro FINESS exact (9 chiffres).

Output Schema (JSON Schema):
  key (optional) — Clé recherchée (SIREN, num_finess, code INSEE, …).
  found (required)
  message (optional) — Explication actionnable quand `found=false` (cause probable + remédiation).
  lookupStatus (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the annotations (readOnly, idempotent, etc.), the description discloses detailed behavioral traits: the specific data sources used (FINESS, RPPS, DINUM, INSEE), the cost in terms of API calls, the fact that it includes closed SIRETs, and that there is no cache. This adds significant transparency about what happens during execution.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear first sentence stating the main purpose, followed by a version note, typical usage, output format, and cost. Although it is somewhat lengthy, every section adds valuable information and there is no redundancy. The information density is high.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity and the presence of an output schema, the description covers the input, output format, behavior, typical uses, and cost. It does not explicitly address error conditions or edge cases, but the provided information is sufficient for an AI agent to correctly invoke the tool in typical scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes the single parameter 'num_finess' as a 9-digit exact number with 100% coverage. The description does not add any additional semantics or constraints beyond what the schema provides, so it meets the baseline but does not exceed it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states that the tool reconstructs the complete timeline of a healthcare establishment by cross-referencing multiple data sources. It distinguishes itself from siblings like 'etablissement_by_finess' by focusing on historical changes rather than current state, and includes typical use cases that clarify its specific role.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides three typical usage scenarios (post-merger history, exact closure date, rebranding cascade) that serve as clear guidance on when to use this tool. While it does not explicitly list alternatives or state when not to use it, the scenarios implicitly differentiate it from current-state tools. This is strong contextual guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lister_specialites_ameli (grade A)
Read-only · Idempotent

Liste les codes spécialité Ameli effectivement présents en base, avec leur libellé natif, leur type_ps_code de rattachement et leur count. Triés par fréquence décroissante. Utile pour découvrir la nomenclature avant de filtrer un professionnels_in_radius ou professionnels_par_specialite_dept. Le champ libelle_clarifie désambigüise les libellés partagés par plusieurs codes (ex: "Médecin généraliste" regroupe les codes 01/22/23, "Chirurgien-dentiste" 19/53/54, "Psychiatre" 33/75, "Gynécologue / Obstétricien" 07/70/77/79). Format quand partagé : '{libelle} (code {code}, {count_compact})' (ex: "Médecin généraliste (code 01, 55K)"). Sinon identique à libelle. is_libelle_partage: true quand au moins 2 codes utilisent le même libellé — utiliser ce flag côté caller pour décider d'afficher le code à l'utilisateur. PÉRIMÈTRE : libéraux conventionnés UNIQUEMENT. HORS PÉRIMÈTRE : médecins exclusivement hospitaliers/salariés, biologistes médicaux salariés en LBM, anatomopathologistes hospitaliers, médecins du travail, médecine légale. Pour effectifs tous statuts, voir Annuaire Santé ANS (RPPS, esante.gouv.fr) — non couvert par ce serveur. Source : Annuaire santé Ameli (Assurance Maladie), MAJ hebdomadaire. Réutilisation soumise à l'art. L.1461-2 CSP — citer la source et la date de sync.
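The shared-label rule can be reproduced client-side; a sketch assuming result fields `libelle`, `code`, `count` and `is_libelle_partage`, with a hypothetical compact-count helper (the exact rounding rule of the K suffix is an assumption):

```python
def count_compact(n: int) -> str:
    """Compact count, e.g. 55000 -> '55K' (rounding rule assumed)."""
    return f"{n // 1000}K" if n >= 1000 else str(n)

def libelle_affiche(entry: dict) -> str:
    """Append '(code ..., ...)' only when the label is shared by several codes."""
    if entry.get("is_libelle_partage"):
        return f"{entry['libelle']} (code {entry['code']}, {count_compact(entry['count'])})"
    return entry["libelle"]

partage = {"libelle": "Médecin généraliste", "code": "01",
           "count": 55000, "is_libelle_partage": True}
unique = {"libelle": "Radiologue", "code": "06",  # hypothetical entry
          "count": 8000, "is_libelle_partage": False}
```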

Parameters (JSON Schema):
  include_freshness (optional) — Si true, ajoute un champ `data_freshness` au payload (dans `query_metadata` si présent, sinon à la racine) listant la dernière ingestion réussie par source (FINESS, Ameli, RPPS) avec `staleness_days`. Opt-in pour ne pas alourdir les payloads par défaut. Cache 5min côté serveur — coût négligeable.

Output Schema (JSON Schema):
  count (required) — Nombre d'entrées retournées dans `results` (post-troncature).
  results (required) — Entrées métier (shape spécifique au tool, cf. description du tool).
  freshness (optional) — Fraîcheur des sources (présent si `include_freshness: true`).
  truncated (optional) — true si le total réel dépasse `limit` (re-paginer via `offset` si supporté). Optional sur les tools de listing exhaustif (lister_*).
  query_metadata (optional) — Metadata de la query (radius_km, departement, filtres appliqués, …).
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint, openWorldHint, idempotentHint, and destructiveHint are all appropriately set. The description adds behavioral details such as cache behavior (5min côté serveur), output format for shared labels, and the flag 'is_libelle_partage'. It also explains source and legal constraints. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively long but well-structured with sections. It front-loads the main purpose and provides essential details. However, it contains detailed explanations (e.g., libelle_clarifie format) that could be shortened without losing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (listing specialties with structured output), the description is fully complete. It covers the output fields, purpose, usage context, perimeter, source, legal notice, and relationship to sibling tools. The presence of an output schema means return values need not be explained.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one optional boolean parameter with a description in the schema itself (100% coverage). The tool description does not add further parameter semantics beyond what is already documented in the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description immediately states that the tool lists Ameli specialty codes present in the database, with their native label, type_ps_code, and count, sorted by descending frequency. It clearly distinguishes from siblings like 'lister_specialites_medicales' and 'lister_types_ps_ameli' by focusing on Ameli specialties with frequencies.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Utile pour découvrir la nomenclature avant de filtrer un professionnels_in_radius ou professionnels_par_specialite_dept', providing clear when-to-use context. It also delineates the perimeter (libéraux conventionnés UNIQUEMENT) and out-of-scope cases, and suggests an alternative (Annuaire Santé ANS) for other statuses.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lister_specialites_medicales (grade A)
Read-only · Idempotent

Liste les spécialités médicales (savoir_faire RPPS) avec leur libellé et le nombre de PS qui les portent. Tool d'aide à la découverte pour le LLM : avant d'appeler densite_professionnels_sante ou professionnels_rpps_par_dept avec un savoir_faire_code précis (ex 'SM02' Cardiologie), utiliser ce tool pour obtenir la liste exhaustive.

Filtre par défaut : profession_code='10' (Médecin) — retourne donc les spécialités médicales (cardiologie, dermato, gynéco, etc.). Passer profession_code pour énumérer les spécialités d'une autre profession (ex '60' Infirmier → spécialités IDE), ou null pour tous savoir_faire confondus.

Résultats triés par count_ps DESC (spécialités les plus représentées en premier). Source : RPPS / Annuaire Santé ANS (Supabase dump mensuel).

Parameters (JSON Schema):
  profession_code (optional, default '10') — Code profession ANS (TRE_R94). Default '10' (Médecin). Passer une string vide ou 'null' pour énumérer tous savoir_faire toutes professions confondues.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations declare readOnlyHint=true and idempotentHint=true. The description adds behavioral detail: sorting by count_ps DESC and the source (RPPS / Annuaire Santé ANS, monthly Supabase dump). It does not contradict the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, well structured, and front-loaded: it opens with the purpose, then usage, then technical details. Every sentence carries useful information, with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one optional parameter and no output schema, the description covers everything needed: purpose, usage context, default, sort order, and data source. It is complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema for the single profession_code parameter is well documented (100% description coverage). The description adds concrete examples (e.g. '60' Infirmier) and specifies the behavior with an empty string or 'null', which strengthens its usefulness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool lists medical specialties with their label and the number of health professionals holding them, situating them in the RPPS context. It is distinct from sibling tools such as lister_specialites_ameli and densite_professionnels_sante.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly gives the use case: call this tool before densite_professionnels_sante or professionnels_rpps_par_dept with a specific savoir_faire_code. It also explains the default filter and how to switch professions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lister_types_ps_ameli (grade A)
Read-only · Idempotent

Liste les codes type_ps Ameli présents en base, avec leur libellé natif (libelle_source), un libellé clarifié (libelle_clarifie) résolvant l'ambiguïté du code "2" fourre-tout, leur count total, et specialites_presentes (la liste effective des spécialités regroupées sous chaque type_ps avec leurs counts). Pas de dictionnaire inventé : la clarification est dérivée de la donnée live à chaque appel. PÉRIMÈTRE : libéraux conventionnés UNIQUEMENT. HORS PÉRIMÈTRE : médecins exclusivement hospitaliers/salariés, biologistes médicaux salariés en LBM, anatomopathologistes hospitaliers, médecins du travail, médecine légale. Pour effectifs tous statuts, voir Annuaire Santé ANS (RPPS, esante.gouv.fr) — non couvert par ce serveur. Source : Annuaire santé Ameli (Assurance Maladie), MAJ hebdomadaire. Réutilisation soumise à l'art. L.1461-2 CSP — citer la source et la date de sync.

Parameters (JSON Schema):
  include_freshness (optional) — Si true, ajoute un champ `data_freshness` au payload (dans `query_metadata` si présent, sinon à la racine) listant la dernière ingestion réussie par source (FINESS, Ameli, RPPS) avec `staleness_days`. Opt-in pour ne pas alourdir les payloads par défaut. Cache 5min côté serveur — coût négligeable.

Output Schema (JSON Schema):
  count (required) — Nombre d'entrées retournées dans `results` (post-troncature).
  results (required) — Entrées métier (shape spécifique au tool, cf. description du tool).
  freshness (optional) — Fraîcheur des sources (présent si `include_freshness: true`).
  truncated (optional) — true si le total réel dépasse `limit` (re-paginer via `offset` si supporté). Optional sur les tools de listing exhaustif (lister_*).
  query_metadata (optional) — Metadata de la query (radius_km, departement, filtres appliqués, …).
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark the tool as readOnly, idempotent, and non-destructive. The description adds context that queries live data (not a dictionary) and discloses weekly update cadence and legal reuse conditions, reinforcing transparency beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded: first sentence states core output, then clarifies live data, scope, alternatives, and source. Every sentence adds value; no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and annotations, the description sufficiently covers purpose, scope, alternatives, and data freshness. No missing essential information for effective agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'include_freshness' is fully described in the input schema. The description does not add additional parameter-level semantics beyond what the schema provides, so baseline score applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists Ameli type_ps codes with their labels, counts, and specialties. It explicitly distinguishes from related siblings like 'lister_specialites_ameli' by focusing on type_ps and provides scope boundaries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when and when-not to use: only for liberal conventioned practitioners, excludes hospitalists and salaried biologists. It points to an alternative source for other statuses (Annuaire Santé ANS).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

population_par_commune (grade A)
Read-only · Idempotent

Population municipale (PMUN), population comptée à part (PCAP) et population totale (PTOT) d'une commune française par son code INSEE (5 caractères). Source : INSEE Melodi (DS_POPULATIONS_REFERENCE). PMUN est la base légale officielle utilisée pour les indicateurs DREES (densité médicale, etc.). Retourne un LookupResult discriminé par found. Si la commune a fusionné ou changé de code, found: false avec orientation vers autocomplete_commune.
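Since PMUN is the legal base for DREES indicators, a density computation divides a headcount by PMUN; a minimal sketch (the per-100,000 convention and one-decimal rounding are assumptions):

```python
def densite_pour_100k(nb_professionnels: int, pmun: int) -> float:
    """Professionals per 100,000 inhabitants, PMUN as denominator."""
    return round(nb_professionnels * 100_000 / pmun, 1)
```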

Parameters (JSON Schema):
  code (required) — Code INSEE de la commune (5 caractères, ex: '75056' Paris, '13201' Marseille 1er, '2A004' Ajaccio).

Output Schema (JSON Schema):
  key (optional) — Clé recherchée (SIREN, num_finess, code INSEE, …).
  found (required)
  message (optional) — Explication actionnable quand `found=false` (cause probable + remédiation).
  lookupStatus (required)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnly, idempotent, non-destructive), description discloses return structure (LookupResult with found discrimination), source, and error handling for merged/changed codes. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, no redundant information. Efficient and clearly structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers return type, conditional behavior, and source. Output schema handles return fields. Could mention data recency, but overall complete for a lookup tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already provides full description with examples (100% coverage). The tool description adds no further parameter-level details, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns three population metrics (PMUN, PCAP, PTOT) for a French commune by INSEE code, with source and official use noted. Distinguishes from siblings like get_commune_by_code and population_par_departement.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use (to get commune population) and what happens when the code is obsolete (found: false with redirection to autocomplete_commune), providing clear alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

population_par_departement (grade A)
Read-only · Idempotent

Population municipale (PMUN), comptée à part (PCAP) et totale (PTOT) d'un département français par son code INSEE (2-3 caractères). Source : INSEE Melodi (DS_POPULATIONS_REFERENCE). PMUN recommandée pour calculs de densité (méthodo DREES). Supporte la Corse (2A, 2B) et les DOM (971-976).
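Client-side validation of the code must allow the Corsican and overseas forms; a hedged regex sketch covering 01-95 (with 2A/2B replacing the retired '20') and 971-976:

```python
import re

# 01-19, 21-95 plus 2A/2B (Corsica) and 971-976 (overseas departments)
_DEPT_RE = re.compile(r"0[1-9]|1[0-9]|2[1-9AB]|[3-8][0-9]|9[0-5]|97[1-6]")

def code_departement_valide(code: str) -> bool:
    """True for metropolitan codes (2A/2B included, '20' excluded)
    and overseas departments 971-976."""
    return bool(_DEPT_RE.fullmatch(code))
```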

Parameters (JSON Schema):
  code (required) — Code INSEE du département (2-3 caractères, ex: '75' Paris, '13' Bouches-du-Rhône, '2A' Corse-du-Sud, '971' Guadeloupe).

Output Schema (JSON Schema):
  key (optional) — Clé recherchée (SIREN, num_finess, code INSEE, …).
  found (required)
  message (optional) — Explication actionnable quand `found=false` (cause probable + remédiation).
  lookupStatus (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, destructiveHint=false. Description adds value by specifying the data source (INSEE Melodi) and methodology recommendation (DREES), complementing the annotations without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with key information (metrics, source, usage tip). Every sentence adds value; no filler or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description does not need to detail return values. It covers purpose, parameter usage, source, and recommendation, which is fully sufficient for this simple read-only tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers parameter 'code' at 100% with description. Description enriches it with concrete examples (75, 13, 2A, 971) and adds contextual validation (support for special codes), going beyond the schema's generic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool returns three population metrics (PMUN, PCAP, PTOT) for a French department by INSEE code. Distinguishes from sibling 'population_par_commune' by targeting departments, and provides specific usage guidance (e.g., PMUN for density).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly covers when to use (department lookup by code) and includes edge cases (Corse 2A/2B, DOM 971-976). Does not list alternatives, but sibling names imply context, and the description provides recommended metric (PMUN) for a common use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

professionnel_by_rpps (grade A)
Read-only · Idempotent

Fiche d'un professionnel de santé par identifiant national (rpps_id / IDNPS, 11 ou 12 chiffres — IDNPS modernes émis depuis 2020 ont un préfixe "81" qui les fait à 12 chars, anciens IDs sans préfixe à 11 chars). Renvoie N entrées quand le PS exerce sur plusieurs sites (1 row par site). Si non trouvé en base locale (ingestion mensuelle, J-30 max), tente automatiquement un fallback live sur l'API FHIR ANS (gateway.api.esante.gouv.fr/fhir/v2) — fraîcheur quotidienne, gratuit (clé ESANTE-API-KEY issue de portal.api.esante.gouv.fr requise côté serveur). Le champ source distingue db (base locale) de ans_fhir (fallback live). include_freshness n'affecte que les retours source: "db" (FHIR ANS étant live). Source : Annuaire Santé, Agence du Numérique en Santé (ANS) — Licence Ouverte v2.0
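A pre-flight format check derived from the ID rules above can look like this (function name hypothetical; this validates shape only, not existence):

```python
def rpps_id_plausible(rpps_id: str) -> bool:
    """11 digits (legacy RPPS IDs) or 12 digits starting with '81'
    (IDNPS issued since 2020), per the tool description."""
    if not rpps_id.isdigit():
        return False
    return len(rpps_id) == 11 or (len(rpps_id) == 12 and rpps_id.startswith("81"))
```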

Parameters (JSON Schema):
  rpps_id (required)
  include_freshness (optional) — Si true, ajoute un champ `data_freshness` au payload (dans `query_metadata` si présent, sinon à la racine) listant la dernière ingestion réussie par source (FINESS, Ameli, RPPS) avec `staleness_days`. Opt-in pour ne pas alourdir les payloads par défaut. Cache 5min côté serveur — coût négligeable.

Output Schema (JSON Schema):
  key (optional) — Clé recherchée (SIREN, num_finess, code INSEE, …).
  found (required)
  message (optional) — Explication actionnable quand `found=false` (cause probable + remédiation).
  lookupStatus (required)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond the annotations (readOnlyHint, idempotentHint, etc.). It details fallback logic to the FHIR ANS API with freshness specifications (monthly ingestion for db, daily for ans_fhir), the 'source' field behavior, and the 'include_freshness' parameter effects. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is comprehensive but not overly verbose. It front-loads the core purpose and ID format, then provides details on multiplicity, fallback, and parameters. Each sentence adds value, though it could be slightly more concise by grouping related technical details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multi-site, fallback, freshness parameter) and the presence of an output schema, the description covers all necessary aspects: ID format, behavior when multiple sites, fallback mechanism, source field, freshness parameter effect, API key requirement, and data license. It is complete and well-rounded.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaning beyond the schema for the 'rpps_id' parameter by explaining the 11/12 character format and the '81' prefix for modern IDs. For 'include_freshness', it clarifies that it only affects 'source: db' results. Since schema description coverage is 50% (include_freshness has a schema description, rpps_id does not), the description compensates well.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a healthcare professional's profile by national ID (rpps_id/IDNPS). It specifies the ID format (11-12 digits, with prefix '81' for modern IDs) and notes it returns multiple rows if the professional practices at multiple sites. This distinguishes it from sibling tools like 'rpps_search_by_name' (name-based search) and 'professionnels_rpps_in_radius' (geographic search).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use: when you have an exact rpps_id and need the professional's details. It explains fallback behavior (local DB then FHIR ANS) and the source distinction. However, it does not explicitly state when not to use it (e.g., for name-based lookups use 'rpps_search_by_name') or provide alternative tools; but the context makes it clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

professionnels_in_radius (A)
Read-only · Idempotent
Inspect

Search for contracted self-employed (libéraux conventionnés) health professionals within a geographic radius. Geo precision: commune centroid (~3 km on average), suited to density analysis, not to address-level geocoding. Ameli type_ps codes present in the database (3): '1' physicians, '2' medical auxiliaries (a catch-all: nurses, physiotherapists, midwives, podiatrists, speech therapists, orthoptists, advanced-practice nurses), '5' dental surgeons. To target a specific profession (e.g. nurses only, physiotherapists only, podiatrists only), use specialite_codes rather than type_ps_codes, which casts a wider net. The exhaustive list of specialty codes is available via the lister_specialites_ameli tool. Multi-site: by default a professional practicing at N addresses appears N times; use dedupe_by_ps=true to group by practitioner and list the sites in a sub-object. Distance is returned in straight-line km (PostGIS haversine); for road distance, cross-reference with an external service (OSRM, ORS). SCOPE: contracted self-employed professionals ONLY. OUT OF SCOPE: exclusively hospital-based/salaried physicians, salaried medical biologists in LBMs, hospital anatomopathologists, occupational physicians, forensic medicine. For headcounts across all statuses, see the ANS Annuaire Santé (RPPS, esante.gouv.fr), not covered by this server. Source: Annuaire santé Ameli (Assurance Maladie), updated weekly. Reuse is subject to art. L.1461-2 of the CSP: cite the source and the sync date.
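The straight-line distance the tool returns (computed server-side with a PostGIS haversine) can be reproduced locally, for instance to sanity-check distances or pre-filter candidate coordinates. A minimal sketch; the Earth-radius constant and the server's exact spheroid are assumptions:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle ("vol d'oiseau") distance in km between two WGS84 points.
    # R = 6371 km is the conventional mean Earth radius; the server's PostGIS
    # computation may use a slightly different constant or spheroid.
    r = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))
```

Note this is straight-line distance only; as the description says, road distance requires an external routing service such as OSRM or ORS.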

Parameters (JSON Schema)
Name | Required | Description | Default
lat | Yes | Latitude of the center (WGS84).
lon | Yes | Longitude of the center (WGS84).
limit | No | Max number of results (1-500, default 100). Applied BEFORE deduplication.
radius_km | No | Radius in km (0.1-50, default 5).
dedupe_by_ps | No | Group entries by practitioner (last name + first name + specialty code) and list each practice address in `sites[]`. Default false (historical V0.4 behavior: one multi-site professional = N entries).
type_ps_codes | No | List of Ameli PS type codes (3 values in the database: '1' physicians, '2' medical auxiliaries catch-all: nurses/physiotherapists/midwives/podiatrists/speech therapists/orthoptists/advanced-practice nurses, '5' dental surgeons). To target a single profession, prefer `specialite_codes`. If omitted, all types.
specialite_codes | No | List of Ameli specialty codes (e.g. ['01'] GP, ['03'] cardiology). If omitted, all specialties.
include_freshness | No | If true, adds a `data_freshness` field to the payload (inside `query_metadata` if present, otherwise at the root) listing the last successful ingestion per source (FINESS, Ameli, RPPS) with `staleness_days`. Opt-in, to keep default payloads light. Cached 5 min server-side: negligible cost.
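The dedupe_by_ps grouping can also be approximated client-side when working with raw (default) results. A sketch under assumed field names (nom, prenom, specialite_code, adresse; the actual per-entry shape is tool-specific):

```python
from collections import defaultdict

def dedupe_by_ps(rows):
    # Group raw entries by practitioner identity (last name, first name,
    # specialty code) and collect each practice address under sites[],
    # mirroring the dedupe_by_ps=true behavior described for the tool.
    # Field names here are assumptions, not the documented schema.
    grouped = defaultdict(list)
    for row in rows:
        key = (row["nom"], row["prenom"], row["specialite_code"])
        grouped[key].append(row["adresse"])
    return [
        {"nom": n, "prenom": p, "specialite_code": c, "sites": sites}
        for (n, p, c), sites in grouped.items()
    ]
```

Note that because `limit` is applied before deduplication, a client-side regrouping of a truncated page can undercount a practitioner's sites.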

Output Schema

Fields (JSON Schema)
Name | Required | Description
count | Yes | Number of entries returned in `results` (post-truncation).
results | Yes | Business entries (shape is tool-specific; see the tool description).
freshness | No | Source freshness (present if `include_freshness: true`).
truncated | No | true if the real total exceeds `limit` (re-paginate via `offset` if supported). Optional on exhaustive listing tools (lister_*).
query_metadata | No | Query metadata (radius_km, departement, applied filters, …).
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnly, openWorld, idempotent, non-destructive), the description adds key behavioral details: geolocation precision (~3 km), distance type (haversine), deduplication behavior, data source and update frequency (weekly), and copyright attribution. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: starts with a clear purpose sentence, then proceeds logically through precision, parameter usage, dedup, distance, perimeter, and source. Every sentence adds unique value with no redundancy. Though lengthy, it earns its length through completeness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Considering the 8 parameters, output schema presence, and complexity, the description covers all necessary context: geolocation precision, parameter interplay, perimeter inclusion/exclusion, data freshness, and linkage to sibling tools. It leaves no significant gap for the agent to infer.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the description adds significant value by explaining the meaning of type_ps_codes (listing the three codes and their catch-all nature), dedupe_by_ps (default behavior and structure), include_freshness (effect on payload), and referencing sibling tool for specialite_codes. This goes well beyond the schema's basic descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: searching for liberal health professionals (conventionnés) within a geographic radius. It specifies the resource (professionnels de santé libéraux conventionnés), distinguishes from sibling tools by mentioning the Ameli source and scope, and includes precise geolocation accuracy notes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided: when to use specialite_codes over type_ps_codes, how dedupe_by_ps works, and when to use external services for driving distance. It also clearly defines the perimeter (libéraux conventionnés only) and lists what is not covered, directing users to alternative sources like RPPS.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

professionnels_par_specialite_dept (A)
Read-only · Idempotent
Inspect

Lists the contracted self-employed health professionals of a department, with optional filters by specialty or PS type. For administrative enumeration: no radius. Ameli type_ps codes present in the database (3): '1' physicians, '2' medical auxiliaries (a catch-all: nurses, physiotherapists, midwives, podiatrists, speech therapists, orthoptists, advanced-practice nurses), '5' dental surgeons. To target a specific profession (e.g. nurses only), use specialite_code rather than type_ps_code, which casts a wider net. The exhaustive list of specialty codes is available via the lister_specialites_ameli tool. Pagination: use offset to fetch the next pages while truncated=true. Multi-site: use dedupe_by_ps=true to group by practitioner. SCOPE: contracted self-employed professionals ONLY. OUT OF SCOPE: exclusively hospital-based/salaried physicians, salaried medical biologists in LBMs, hospital anatomopathologists, occupational physicians, forensic medicine. For headcounts across all statuses, see the ANS Annuaire Santé (RPPS, esante.gouv.fr), not covered by this server. Source: Annuaire santé Ameli (Assurance Maladie), updated weekly. Reuse is subject to art. L.1461-2 of the CSP: cite the source and the sync date.
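The offset/truncated pagination contract amounts to a simple loop. A sketch against a hypothetical call_tool(name, args) MCP client helper (the helper itself is an assumption; only the tool name and argument keys come from the listing):

```python
def fetch_all_department(call_tool, departement, page_size=100):
    # Hypothetical MCP client helper: call_tool(name, args) -> payload dict.
    # Re-paginates while truncated=true, per the tool's pagination contract.
    results, offset = [], 0
    while True:
        page = call_tool(
            "professionnels_par_specialite_dept",
            {"departement": departement, "limit": page_size, "offset": offset},
        )
        results.extend(page["results"])
        if not page.get("truncated"):
            return results
        offset += page_size
```

The default page_size of 100 matches the documented server default; raising it toward the 500 cap reduces round trips for high-headcount departments.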

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max number of results (1-500, default 100). Applied BEFORE deduplication.
offset | No | Pagination offset (≥ 0, default 0). Combine with `limit` to enumerate a high-headcount department. Re-paginate while `truncated=true`.
departement | Yes | INSEE department code: 2 characters for mainland France/Corsica ('01'-'95', '2A'/'2B'), 3 characters for overseas departments ('971'-'978').
dedupe_by_ps | No | Group entries by practitioner (last name + first name + specialty code) and list each practice address in `sites[]`. Default false.
type_ps_code | No | Ameli PS type code ('1' physicians, '2' medical auxiliaries, '5' dental surgeons). Optional: prefer `specialite_code` for precise targeting. Full list via `lister_types_ps_ameli`.
specialite_code | No | Ameli specialty code (e.g. '01' GP, '24' nurse, '26' physiotherapist, '03' cardiology). Optional. Full list via `lister_specialites_ameli`.
include_freshness | No | If true, adds a `data_freshness` field to the payload (inside `query_metadata` if present, otherwise at the root) listing the last successful ingestion per source (FINESS, Ameli, RPPS) with `staleness_days`. Opt-in, to keep default payloads light. Cached 5 min server-side: negligible cost.

Output Schema

Fields (JSON Schema)
Name | Required | Description
count | Yes | Number of entries returned in `results` (post-truncation).
results | Yes | Business entries (shape is tool-specific; see the tool description).
freshness | No | Source freshness (present if `include_freshness: true`).
truncated | No | true if the real total exceeds `limit` (re-paginate via `offset` if supported). Optional on exhaustive listing tools (lister_*).
query_metadata | No | Query metadata (radius_km, departement, applied filters, …).
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, openWorldHint. The description adds operational context: source (Ameli), update frequency (weekly), legal reuse requirements, and pagination behavior with `truncated=true`. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-organized with clear sections, but somewhat lengthy. Each sentence adds value (scope, filters, pagination, deduplication, source, legal). Could be slightly more concise, but structure is good and information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 params, 1 required), the description is highly complete. It covers scope, out-of-scope, pagination, deduplication, source, legal note, and references a sibling tool for code lists. Output schema exists, so return values need not be described.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with good descriptions. The description adds meaning beyond schema by explaining `type_ps_code` values, the distinction between type and specialty codes, how to use `offset` for pagination, and `dedupe_by_ps` for multi-sites. This justifies a slightly higher score than baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists liberal health professionals in a department with optional filters, distinguishes itself from radius-based tools, and contrasts with sibling tools like `lister_specialites_ameli`. The verb 'list' and resource 'professionnels de santé libéraux conventionnés' are specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides when to use (administrative enumeration), when not to use (for hospital/salaried or all statuses), and alternatives (Annuaire Santé ANS, or `lister_specialites_ameli` for codes). Gives guidance on choosing `specialite_code` over `type_ps_code` and explains pagination with `offset`.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

professionnels_rpps_in_radius (A)
Read-only · Idempotent
Inspect

Search for health professionals within a radius via the RPPS (ANS Annuaire Santé). Unlike professionnels_in_radius (Ameli, contracted self-employed professionals only), this search covers all professionals: self-employed, salaried (hospital or practice employees), mixed, locums. Filters: profession_codes (ANS nomenclature, e.g. 10 Physician, 60 Nurse), savoir_faire_codes (fine-grained DES/DESC specialty), mode_exercice_codes. ANS mode_exercice codes: L self-employed, S salaried, M mixed, R locum, B volunteer, A other. By default, only professionals in the Civil (C) category are returned (private law: self-employed, private-sector salaried staff, contract hospital staff, ~97% of the database). Pass include_agents_publics: true to also include Public agents (M): state civil servants, local-authority staff, and SSA military personnel, ~0.3% (tenured hospital practitioners, ARS inspecting physicians, CNAM advisory physicians, Éducation nationale school physicians, PMI physicians). Pass include_etudiants: true to also include Students (E): interns, externs, nursing/midwifery students, ~2.5%. Nomenclature source: https://mos.esante.gouv.fr/NOS/TRE_R09-CategorieProfessionnelle/. Coordinates at the commune centroid (~3 km on average); for address-level precision, cross-reference the returned num_finess with etablissement_by_finess. Source: Annuaire Santé, Agence du Numérique en Santé (ANS), Licence Ouverte v2.0.
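To illustrate how the filters combine, here is a hypothetical argument payload targeting self-employed nurses within 10 km. The lat/lon field names inside center are an assumption, since the schema only says "WGS84 coordinates":

```python
# Hypothetical arguments for professionnels_rpps_in_radius.
args = {
    "center": {"lat": 45.7640, "lon": 4.8357},  # assumed field names (WGS84)
    "radius_km": 10,
    "profession_codes": ["60"],        # '60' = Infirmier (ANS nomenclature)
    "mode_exercice_codes": ["L"],      # L = libéral (self-employed)
    # Defaults already exclude these categories; shown here for clarity:
    "include_agents_publics": False,   # Civil (C) category only
    "include_etudiants": False,
}
```

Omitting profession_codes, savoir_faire_codes, or mode_exercice_codes widens the search to all values of that dimension.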

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max number of results returned (server default 100).
center | Yes | Center of the search circle (WGS84 coordinates).
radius_km | Yes | Radius in km (0.1-50).
profession_codes | No | ANS profession codes (e.g. ['10'] Physician, ['60'] Nurse). If omitted, all professions.
include_etudiants | No |
include_freshness | No | If true, adds a `data_freshness` field to the payload (inside `query_metadata` if present, otherwise at the root) listing the last successful ingestion per source (FINESS, Ameli, RPPS) with `staleness_days`. Opt-in, to keep default payloads light. Cached 5 min server-side: negligible cost.
savoir_faire_codes | No | ANS savoir-faire codes (fine-grained DES/DESC specialties). If omitted, all savoir-faire.
mode_exercice_codes | No | ANS practice-mode codes (self-employed / salaried / mixed). If omitted, all modes.
include_agents_publics | No |

Output Schema

Fields (JSON Schema)
Name | Required | Description
count | Yes | Number of entries returned in `results` (post-truncation).
results | Yes | Business entries (shape is tool-specific; see the tool description).
freshness | No | Source freshness (present if `include_freshness: true`).
truncated | No | true if the real total exceeds `limit` (re-paginate via `offset` if supported). Optional on exhaustive listing tools (lister_*).
query_metadata | No | Query metadata (radius_km, departement, applied filters, …).
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and idempotent behavior. The description adds value by explaining coordinate accuracy (centroids ~3km), the need to cross-reference num_finess for precise addresses, and the default category plus optional includes (agents publics, étudiants). This goes beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is long and detailed, which is justified given the complexity (9 parameters, multiple categories). However, it could be more concise; some details like percentages and source URLs, while useful, add length. The front-loading is good.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity and the presence of an output schema, the description covers all critical aspects: differentiation, parameter defaults, coordinate caveats, data freshness opt-in, and data source. It leaves no major gaps for an AI agent to understand usage and limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 78% schema coverage, the description significantly enriches parameter meaning: examples for profession_codes, ANS codes for mode_exercice_codes, detailed explanation of include_agents_publics and include_etudiants with percentages. It also links to the nomenclature source, providing context the schema alone lacks.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it searches for health professionals via RPPS, using the verb 'Recherche' and specifying the resource. It immediately distinguishes itself from the sibling 'professionnels_in_radius' by noting it covers all professionals (liberals, salaried, etc.), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this over the sibling, explaining the filtration difference. It details parameter usage (e.g., profession_codes, mode_exercice_codes) and defaults (only civil category). However, it does not explicitly state when not to use this tool beyond the sibling comparison.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

professionnels_rpps_par_dept (A)
Read-only · Idempotent
Inspect

Department-level listing of health professionals via the RPPS (self-employed + salaried). Optional filters: profession_code, savoir_faire_code, mode_exercice_code. Re-paginate via offset while truncated=true. Prefer professionnels_par_specialite_dept (Ameli) for contracted self-employed professionals; this tool is for counting or listing salaried staff or the total workforce. By default, only professionals in the Civil (C) category are returned (private law: self-employed, private-sector salaried staff, contract hospital staff, ~97% of the database). Pass include_agents_publics: true to also include Public agents (M): state civil servants, local-authority staff, and SSA military personnel, ~0.3% (tenured hospital practitioners, ARS inspecting physicians, CNAM advisory physicians, Éducation nationale school physicians, PMI physicians). Pass include_etudiants: true to also include Students (E): interns, externs, nursing/midwifery students, ~2.5%. Nomenclature source: https://mos.esante.gouv.fr/NOS/TRE_R09-CategorieProfessionnelle/. Source: Annuaire Santé, Agence du Numérique en Santé (ANS), Licence Ouverte v2.0.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max number of results per page (server default 100).
offset | No | Pagination offset (default 0). Re-paginate while `truncated=true`.
departement | Yes | INSEE department code (e.g. '75', '2A', '2B', '971'). Mainland: 2 characters (Corsica '2A'/'2B', not '20'); overseas: 3 characters.
profession_code | No | ANS profession code (e.g. '10' Physician, '60' Nurse). Optional.
include_etudiants | No |
include_freshness | No | If true, adds a `data_freshness` field to the payload (inside `query_metadata` if present, otherwise at the root) listing the last successful ingestion per source (FINESS, Ameli, RPPS) with `staleness_days`. Opt-in, to keep default payloads light. Cached 5 min server-side: negligible cost.
savoir_faire_code | No | ANS savoir-faire code (fine-grained DES/DESC specialty). Optional.
mode_exercice_code | No | ANS practice-mode code (self-employed / salaried / mixed). Optional.
include_agents_publics | No |

Output Schema

Fields (JSON Schema)
Name | Required | Description
count | Yes | Number of entries returned in `results` (post-truncation).
results | Yes | Business entries (shape is tool-specific; see the tool description).
freshness | No | Source freshness (present if `include_freshness: true`).
truncated | No | true if the real total exceeds `limit` (re-paginate via `offset` if supported). Optional on exhaustive listing tools (lister_*).
query_metadata | No | Query metadata (radius_km, departement, applied filters, …).
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly, openWorld, and idempotent hints. The description adds detailed behavioral context: default category Civil, optional inclusion of public agents and students, source of nomenclature, pagination details, and the include_freshness parameter. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is thorough and front-loaded with purpose and alternatives, but it is somewhat lengthy with detailed category breakdowns. Every sentence adds value, but could be more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (9 parameters, 1 required, output schema present), the description covers usage, filters, categories, pagination, and alternative tools completely. The output schema handles return values, so no need to detail them.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 78% (high), and the schema already documents each parameter. The description reiterates the optional filters and pagination but does not add significant meaning beyond what the schema provides, so baseline scoring applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists health professionals by department via RPPS, specifies it includes both liberal and salaried professionals, and distinguishes from the sibling tool 'professionnels_par_specialite_dept' by noting its specific use case for counting or listing salaried/total workforce.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises preferring 'professionnels_par_specialite_dept' for conventioned liberals, and states this tool is for counting or listing salaried/total workforce. It also explains optional filters and default inclusion categories, providing clear when-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reconcilier_finess_sirene (A)
Read-only · Idempotent
Inspect

Cross-references FINESS DREES ↔ SIRENE INSEE V3.11 and computes a coherence score (Sørensen-Dice on bigrams) for each candidate SIRET. Useful to confirm or refute a num_finess ↔ SIRET match before prospecting, or as a quality cross-check.

Logic:

  1. Fetch the FINESS record (corporate name + labeled address)

  2. Fetch candidate SIRETs via the RPPS table

  3. For each SIRET, look up SIRENE then compute 3 sub-scores:

    • name: Dice on corporate name (FINESS vs SIRENE.uniteLegale)

    • address: Dice on the full address

    • telephone: binary 0/1 (currently always 0: SIRENE does not expose phone numbers)

  4. Global score = weighted sum (name 0.5, address 0.4, tel 0.1)

  5. Raw verdict: match (≥0.8) / partial (0.5..0.8) / mismatch (<0.5)

PUBLIC algorithm (Sørensen-Dice has been in the literature since 1948). No Unilabs added value here: it is an open primitive. The proprietary knowledge (brand ↔ SELAS mapping) stays on the Geo Intel side.

Format: LookupResult object. When found: true, returns { num_finess, candidates, skipped }:

  • candidates: array sorted by descending score_global (best match first)

  • skipped: candidate SIRETs that could NOT be reconciled (SIRENE lookup rejected or not_found), with the reason. Lets the caller distinguish 'no candidate SIRET found' (found: false, LookupResult.not_found) from 'N candidate SIRETs but all rejected by SIRENE' (candidates: [] + skipped: [...]).
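Since the algorithm is public, the scoring is easy to reproduce. A sketch of Sørensen-Dice on character bigrams with the stated weights and verdict thresholds; string normalization details (accent stripping, whitespace handling) are an assumption:

```python
from collections import Counter

def dice(a, b):
    # Sørensen-Dice coefficient on character bigrams (multiset intersection).
    ba = [a.lower()[i:i + 2] for i in range(len(a) - 1)]
    bb = [b.lower()[i:i + 2] for i in range(len(b) - 1)]
    if not ba or not bb:
        return 0.0
    inter = sum((Counter(ba) & Counter(bb)).values())
    return 2 * inter / (len(ba) + len(bb))

def score_global(name_a, name_b, addr_a, addr_b, tel_score=0.0):
    # Weights: name 0.5, address 0.4, telephone 0.1 (tel currently always 0).
    g = 0.5 * dice(name_a, name_b) + 0.4 * dice(addr_a, addr_b) + 0.1 * tel_score
    verdict = "match" if g >= 0.8 else "partial" if g >= 0.5 else "mismatch"
    return g, verdict
```

With the phone sub-score pinned at 0, the maximum reachable global score is 0.9, so a perfect name and address match still lands at 0.9 rather than 1.0.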

Parameters (JSON Schema)
Name | Required | Description | Default
num_finess | Yes | Exact FINESS number (9 digits).

Output Schema

Fields (JSON Schema)
Name | Required | Description
key | No | Looked-up key (SIREN, num_finess, INSEE code, …).
found | Yes |
message | No | Actionable explanation when `found=false` (probable cause + remediation).
lookupStatus | Yes |
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant context beyond the annotations, such as the public nature of the algorithm, the weighting of sub-scores, and the detailed output structure including the 'skipped' field. It does not contradict the annotations (readOnlyHint, idempotentHint, etc.).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is thorough but somewhat lengthy, containing multiple paragraphs and bullet points. While well-structured, it could be more concise. However, the detail is warranted given the complexity of the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (context signals indicate 'Has output schema: true'), the description covers all essential aspects: algorithm, scoring logic, output fields (candidates, skipped, score_global), and differentiation between no candidates and rejected candidates. It is complete for the agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'num_finess' is well-documented in the schema (9-digit exact number). The description provides context on how it is used (to retrieve FINESS data) but does not add new semantic information beyond what the schema already covers. Schema coverage is 100%, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: cross-referencing FINESS and SIRENE data to compute a coherence score for candidate SIRETs. It also explains the use case (confirmation/infirmation before prospection) and distinguishes itself from sibling tools by detailing its unique scoring algorithm.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use the tool (before prospection or quality cross-check) and clarifies the meaning of the 'skipped' field to differentiate scenarios. However, it does not explicitly mention when not to use it or list direct alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reverse_geocode (A)
Read-only · Idempotent
Inspect

Reverse geocoding: from GPS coordinates, finds the nearest address. Source: IGN Géoplateforme.

Parameters (JSON Schema)
Name | Required | Description | Default
lat | Yes | Latitude (WGS84).
lon | Yes | Longitude (WGS84).
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, open-world, idempotent, and non-destructive traits. Description adds the data source but no additional behavioral context beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with source, no wasted words. Perfectly concise for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema; description mentions 'nearest address' but not its structure. For a straightforward reverse geocode, partially adequate but could specify address fields or limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema provides full descriptions for both parameters ('Latitude (WGS84)', 'Longitude (WGS84)'). Description adds no extra parameter meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs reverse geocoding from GPS coordinates to nearest address, naming the source. It effectively distinguishes itself from sibling tools like geocode_adresse.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no explicit guidance on when to use this tool versus alternatives, though the context implies reverse geocoding. It lacks when-not-to-use conditions and references to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rpps_dans_etablissementA
Read-onlyIdempotent
Inspect

Lists the healthcare professionals attached to a FINESS establishment (by 9-digit FINESS site number). This is the RPPS↔FINESS pivot — it answers "who works in this lab / hospital / clinic?". The mode_exercice field distinguishes independent practitioners working on site (sessional work) from salaried staff. Coverage: RPPS exposes this link only when the professional has declared it; salaried staff of CH/CHU/private clinics are well covered. By default, only returns professionals in the Civil (C) category — private law: independents, private-sector employees, contract hospital staff, ~97% of the register. Pass include_agents_publics: true to also include Public agents (M) — state civil servants + local authorities + SSA military, ~0.3% (tenured hospital practitioners, ARS medical inspectors, CNAM advisory physicians, Éducation nationale school physicians, PMI physicians). Pass include_etudiants: true to also include Students (E) — residents, externs, nursing/midwifery students, ~2.5%. Nomenclature source: https://mos.esante.gouv.fr/NOS/TRE_R09-CategorieProfessionnelle/. Source: Annuaire Santé, Agence du Numérique en Santé (ANS) — Licence Ouverte v2.0

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNo
num_finessYes
include_etudiantsNo
include_freshnessNoIf true, adds a `data_freshness` field to the payload (inside `query_metadata` if present, otherwise at the root) listing the last successful ingestion per source (FINESS, Ameli, RPPS) with `staleness_days`. Opt-in so as not to inflate payloads by default. 5-minute server-side cache — negligible cost.
include_agents_publicsNo

Output Schema

ParametersJSON Schema
NameRequiredDescription
countYesNumber of entries returned in `results` (after truncation).
resultsYesBusiness entries (shape is tool-specific, see the tool description).
freshnessNoSource freshness (present if `include_freshness: true`).
truncatedNotrue if the actual total exceeds `limit` (re-paginate via `offset` if supported). Optional on exhaustive listing tools (lister_*).
query_metadataNoQuery metadata (radius_km, departement, applied filters, …).
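The shared listing payload above can be consumed defensively on the client side. The sketch below is a hypothetical consumer, assuming only the fields documented in the output schema (count, results, truncated, freshness); the sample payload contents are invented for illustration.

```python
# Sketch of consuming the shared listing payload shape. Only the field
# names come from the output schema above; the sample data is hypothetical.

def summarize_payload(payload: dict) -> str:
    """Build a short human-readable summary of a listing payload."""
    parts = [f"{payload['count']} result(s)"]
    if payload.get("truncated"):
        # The real total exceeds `limit`; re-query with a higher limit
        # (or `offset`, if the tool supports it) to fetch the rest.
        parts.append("truncated; raise `limit` or paginate for the rest")
    freshness = payload.get("freshness")
    if freshness:
        # Present only when the call passed include_freshness: true.
        stalest = max(s["staleness_days"] for s in freshness.values())
        parts.append(f"stalest source: {stalest} day(s)")
    return ", ".join(parts)

sample = {
    "count": 2,
    "results": [{"nom": "MARTIN"}, {"nom": "DURAND"}],  # hypothetical shape
    "truncated": True,
    "freshness": {"RPPS": {"staleness_days": 12}, "FINESS": {"staleness_days": 3}},
}
print(summarize_payload(sample))
```

Checking `truncated` before trusting `count` avoids silently treating a capped page as the full roster of an establishment.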
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint, openWorldHint, and idempotentHint, with destructiveHint false. The description adds context beyond the annotations: coverage limitations (the RPPS link appears only when the professional has declared it; salaried staff are well covered), the default exclusion of agents publics and étudiants, and source and license info. No contradiction with the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately long but well-structured in French. It front-loads the core purpose and usage, then details coverage, defaults, and optional parameters. Every sentence contributes value, though minor repetition could be trimmed. Overall efficient for the complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 5 parameters and an existing output schema (not shown), the description provides comprehensive context: purpose, usage, coverage, source, default behavior, and optional filters. It does not explain the output schema, but that is separate. It sufficiently aids selection and correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 5 parameters, but only 'include_freshness' has a description (20% coverage). The description explains the boolean parameters 'include_agents_publics' and 'include_etudiants' with percentages and categories, adding meaning beyond the schema. However, 'limit' and 'num_finess' lack descriptions, and 'mode_exercice', though mentioned in the description, is not a parameter. The description partially compensates for the low schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists healthcare professionals attached to a FINESS establishment using a 9-digit number. It identifies itself as the pivot between RPPS and FINESS, answering 'who works in this lab/hospital/clinic?'. This distinguishes it from sibling tools like 'etablissement_by_finess' (establishment info) and 'professionnel_by_rpps' (single professional).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('répond à "qui travaille dans ce labo / hôpital / clinique ?"') and provides detailed context: coverage details, default filtering to civil category, and optional inclusion of agents publics and étudiants with percentages. It does not explicitly mention when not to use or compare to alternatives, but the usage context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rpps_search_by_nameA
Read-onlyIdempotent
Inspect

Fuzzy search of healthcare professionals by identity (last name + optional first name + optional department). Uses trigram matching (pg_trgm) tolerant of accents, typos, and spelling variations. Sorted by decreasing relevance. Source: RPPS / Annuaire Santé ANS (monthly Supabase dump).

Typical usage: "find Dr Martin in Paris" (last name required, first name and department optional to narrow results). Without a department, the search is nationwide (it may return many namesakes — use match_score to sort).

Return format: a { count, truncated, results, query_metadata } object aligned with the other RPPS listing tools. Each result carries a match_score field ∈ [0..1] (pg_trgm trigram score). A score < 0.5 often indicates a partial namesake to be confirmed caller-side.
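The 0.5 threshold above lends itself to a simple caller-side triage. The sketch below is a hypothetical consumer; only match_score and its range come from the description, and the other result fields are invented.

```python
# Sketch: triaging name-search results by match_score, per the 0.5
# threshold suggested in the tool description. Result fields other
# than match_score are hypothetical.

AMBIGUITY_THRESHOLD = 0.5

def split_by_confidence(results: list) -> tuple:
    """Separate confident matches from likely partial namesakes."""
    confident = [r for r in results if r["match_score"] >= AMBIGUITY_THRESHOLD]
    to_confirm = [r for r in results if r["match_score"] < AMBIGUITY_THRESHOLD]
    # Results already arrive sorted by decreasing relevance, so the
    # order within each bucket is preserved.
    return confident, to_confirm

results = [
    {"nom": "MARTIN", "match_score": 0.92},
    {"nom": "MARTINEZ", "match_score": 0.41},
]
confident, to_confirm = split_by_confidence(results)
```

Low-score entries can then be re-queried with a first name or department filter rather than discarded outright.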

By default, only returns professionals in the Civil (C) category — private law: independents, private-sector employees, contract hospital staff, ~97% of the register. Pass include_agents_publics: true to also include Public agents (M) — state civil servants + local authorities + SSA military, ~0.3% (tenured hospital practitioners, ARS medical inspectors, CNAM advisory physicians, Éducation nationale school physicians, PMI physicians). Pass include_etudiants: true to also include Students (E) — residents, externs, nursing/midwifery students, ~2.5%. Nomenclature source: https://mos.esante.gouv.fr/NOS/TRE_R09-CategorieProfessionnelle/.

Source: Annuaire Santé, Agence du Numérique en Santé (ANS) — Licence Ouverte v2.0

ParametersJSON Schema
NameRequiredDescriptionDefault
nomYesLast name (required, non-empty).
limitNoMax number of results (1-500, default 100).
prenomNoFirst name (optional — refines the score if provided).
departementNoINSEE department code (e.g. '75', '2A', '2B', '971'). Mainland France: 2 characters (Corsica '2A'/'2B', never '20'); overseas DOM/COM: 3 characters. Optional.
include_etudiantsNo
include_freshnessNoIf true, adds a `data_freshness` field to the payload (inside `query_metadata` if present, otherwise at the root) listing the last successful ingestion per source (FINESS, Ameli, RPPS) with `staleness_days`. Opt-in so as not to inflate payloads by default. 5-minute server-side cache — negligible cost.
include_agents_publicsNo
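The departement format rules above (2 characters for mainland France, '2A'/'2B' for Corsica and never '20', 3 characters overseas) can be pre-checked client-side before calling the tool. The validator below is a sketch under those rules; the exact overseas range (971-976) is an assumption, since the schema does not enumerate it.

```python
import re

# Sketch of validating `departement` codes per the format rules above:
# mainland 01-95 with '2A'/'2B' for Corsica (never '20'), plus 3-digit
# overseas codes. The 971-976 overseas range is an assumption.
DEPARTEMENT_RE = re.compile(
    r"^(?:0[1-9]|1[0-9]|2[1-9AB]|[3-8][0-9]|9[0-5]|97[1-6])$"
)

def is_valid_departement(code: str) -> bool:
    """Return True if `code` matches the expected uppercase format."""
    return bool(DEPARTEMENT_RE.fullmatch(code))
```

Rejecting malformed codes locally saves a round trip that would otherwise return zero results with no obvious cause.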

Output Schema

ParametersJSON Schema
NameRequiredDescription
countYesNumber of entries returned in `results` (after truncation).
resultsYesBusiness entries (shape is tool-specific, see the tool description).
freshnessNoSource freshness (present if `include_freshness: true`).
truncatedNotrue if the actual total exceeds `limit` (re-paginate via `offset` if supported). Optional on exhaustive listing tools (lister_*).
query_metadataNoQuery metadata (radius_km, departement, applied filters, …).
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark the tool as read-only, idempotent, and non-destructive. The description adds: the fuzzy matching engine (pg_trgm), tolerance details, sorting by relevance, default category filtering with percentages, the return format with a match_score field, and opt-in freshness. No contradictions with the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear paragraphs covering search behavior, usage, return format, category details, and source. Slightly verbose but every sentence adds value. Front-loaded with essential purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema, the description covers all needed aspects: parameters, default behaviors, category filtering, return format, source citation, and license. No gaps for a complex search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 71% (5/7 params described in schema). Description adds value for undocumented parameters (include_etudiants, include_agents_publics) by explaining their effect and referencing official nomenclature. Also provides examples for departement codes. Lacks explicit mention of all params but overall enhances understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it performs fuzzy search of healthcare professionals by identity (nom + optional prenom and departement), using trigram matching tolerant to accents and typos. It distinguishes from sibling tools like professionnel_by_rpps (exact ID lookup) or professionnels_in_radius (location-based).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides typical usage example ('find Dr Martin in Paris'), explains that without département results can be numerous and recommends using match_score for disambiguation. Also clarifies default category filtering (Civil only) and how to include other categories via parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verifier_site_actifA
Read-onlyIdempotent
Inspect

Checks whether a FINESS healthcare establishment is still active by cross-referencing FINESS DREES ↔ RPPS (SIRET pivot) ↔ DINUM (complete list of the SIREN's SIRETs, including closed ones). Detects closed SIRETs still listed as active on the FINESS side (DREES lags by 1-2 months).

V0.7.0 — breaking: widened SIRET pivot. Before V0.7.0, only RPPS-declared SIRETs were tested (typically the employer's head-office SIRET), so the closed physical SIRET of the site itself was missed. The resolver now fetches ALL SIRETs of the SIREN via DINUM and fuzzy-matches their addresses against FINESS — which also captures closed SIRETs invisible on the RPPS side.

Logic:

  1. FINESS lookup to fetch the DREES legal name + address + phone number

  2. Candidate SIRETs fetched via the resolver (RPPS + DINUM with Dice address scoring)

  3. best_match = the SIRET with the best address score ≥ 0.6 (= the physical site)

  4. 2 distinct verdicts:

  • verdict_site (actif / ferme / indetermine): based on best_match.actif. This is the verdict that matters for a territorial audit.

  • verdict_groupe (actif / ferme / indetermine): based on the administrative status of the parent legal unit (the DINUM actif field). An active legal unit can perfectly well have a closed site.
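The two-verdict split above can be sketched as a small decision helper. This is a hypothetical illustration, not the server's implementation: only the verdict names and the best_match.actif / parent-unit actif inputs come from the description.

```python
# Sketch of the two-verdict logic described above. The helper is
# hypothetical; verdict values and inputs mirror the documented fields.

def compute_verdicts(best_match, groupe_actif):
    """Return (verdict_site, verdict_groupe), each 'actif'/'ferme'/'indetermine'."""
    if best_match is None:
        # No SIRET scored >= 0.6 against the FINESS address.
        verdict_site = "indetermine"
    else:
        verdict_site = "actif" if best_match["actif"] else "ferme"
    if groupe_actif is None:
        verdict_groupe = "indetermine"
    else:
        verdict_groupe = "actif" if groupe_actif else "ferme"
    return verdict_site, verdict_groupe

# An active legal unit can still have a closed site:
print(compute_verdicts({"actif": False}, True))  # ('ferme', 'actif')
```

Keeping the two verdicts independent is what lets a territorial audit flag a closed site even when the parent group remains administratively active.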

Return format: a LookupResult object discriminated by found. When found: true, the payload contains finess (DREES view), candidates (enriched list sorted by score), best_match, sirens_explored, verdict_site, verdict_groupe, explication. When num_finess is absent from FINESS DREES, the tool returns {found: false, lookupStatus: 'not_found', message, ...}.

Cost: 1 FINESS RPC + 1 rpps SELECT + N DINUM calls (N = number of distinct SIRENs, typically 1). DINUM handles its own INSEE V3.11 fallback for partially-diffused SIRENs.
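Step 2 of the logic mentions Dice address scoring. As an illustration of that family of similarity measures, here is a minimal Sørensen-Dice bigram sketch; the server's actual normalization rules are not published, so the lowercasing and whitespace collapsing below are assumptions.

```python
# Minimal Sørensen-Dice bigram similarity, illustrating the kind of
# address scoring the resolver's step 2 refers to. Normalization rules
# here (lowercase, collapsed whitespace) are assumptions.

def bigrams(s: str) -> set:
    s = " ".join(s.lower().split())  # crude normalization (assumption)
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(a: str, b: str) -> float:
    """Dice coefficient over character bigrams, in [0..1]."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

# Minor formatting differences still score well above the 0.6
# best_match threshold mentioned above:
score = dice("12 rue de la paix paris", "12 r. de la paix  PARIS")
```

Bigram Dice tolerates abbreviations and spacing noise, which is why a fixed threshold like 0.6 can separate "same physical site" from "same SIREN, different address".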

ParametersJSON Schema
NameRequiredDescriptionDefault
num_finessYesExact FINESS number (9 digits).

Output Schema

ParametersJSON Schema
NameRequiredDescription
keyNoLooked-up key (SIREN, num_finess, INSEE code, …).
foundYes
messageNoActionable explanation when `found=false` (probable cause + remediation).
lookupStatusYes
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description provides extensive behavioral context beyond annotations: cost (RPC calls), version changes (V0.7.0 breaking), logic steps, two distinct verdicts, and data freshness delay (1-2 months). Annotations already indicate read-only, idempotent, and non-destructive, and the description adds valuable detail without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (version, logic, return format) but is somewhat verbose for an AI agent. It could be more concise while retaining essential details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity and presence of an output schema, the description is comprehensive: it explains the algorithm, data sources, multiple verdicts, edge cases (not found), and cost. No gaps identified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already describes the parameter (num_finess) with 100% coverage. The description adds some context about the parameter's usage in the logic but does not provide significant additional meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to verify if a FINESS healthcare establishment is still active by cross-referencing multiple data sources. It distinguishes itself from sibling tools by focusing on activity status rather than just retrieving establishment data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool versus alternatives. It implies usage for activity verification but lacks guidance on exclusions or when another tool would be more appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
