justicelibre
Server Details
Free access to ~1M French administrative court decisions via MCP
- Status: Healthy
- Transport: Streamable HTTP
- Repository: Dahliyaal/justicelibre
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 27 of 27 tools scored. Lowest: 2.9/5.
Most tools are clearly distinct by jurisdiction or source (e.g., get_cc_decision vs. get_ce_decision), with well-defined boundaries. However, there is some overlap between search_admin_recent, search_admin_recent_all_caa, and search_admin_recent_all_ta, which all retrieve recent administrative decisions but differ in scope, potentially causing confusion about which to use for a given query.
Tool names follow a highly consistent snake_case pattern with clear verb_noun structures (e.g., get_decision_text, search_admin, list_juridictions). The naming is uniform across all 27 tools, making it easy to predict functionality and distinguish between retrieval (get_), search (search_), and listing (list_) operations.
With 27 tools, the count is on the higher side for a legal research server, but it covers multiple jurisdictions (e.g., CC, CE, CEDH, CJUE) and sources (e.g., admin, judiciaire, legi). While comprehensive, it may feel heavy and could benefit from consolidation, such as merging some of the recent search variants, to reduce cognitive load without losing functionality.
The tool set provides complete coverage for legal research, including retrieval (get_), search (search_), and listing (list_) operations across all major French and European legal sources. It supports CRUD-like workflows (e.g., search then get decision text) and includes cross-referencing tools like search_decisions_citing, with no apparent gaps in the domain's core functionalities.
Available Tools
30 tools
about_justicelibre (A)
Overview of the JusticeLibre protocol: map of the sources and routing rules.
Call this tool first to understand the identifier compatibility matrix,
each jurisdiction's search scope, and the specifics of the underlying
databases.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that this is a read-only overview tool (implied by 'Vue d'ensemble' and understanding purposes) and suggests it's a starting point, but lacks details on behavioral traits like rate limits, error handling, or response format. With no annotations, it adds some context but not rich behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: it starts with the core purpose, then provides specific usage guidance. Both sentences earn their place by clarifying the tool's role and when to invoke it, with no wasted words or unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no annotations, but has an output schema), the description is reasonably complete. It explains the tool's purpose and usage context effectively. Since an output schema exists, the description doesn't need to detail return values, but it could slightly enhance completeness by mentioning what the overview includes (e.g., tool names, capabilities).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100% (though empty). The description doesn't need to add parameter details, so it meets the baseline of 4 for zero-parameter tools. It appropriately focuses on the tool's purpose without redundant parameter explanations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to provide an overview of the MCP justicelibre, including covered sources and guidance on when to use which tool. It specifies the verb 'understand' and resource 'available tools, their strengths, and ID compatibility matrix.' However, it doesn't explicitly differentiate this tool from its siblings (like 'list_juridictions'), which would require a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Appelle ce tool en premier' (call this tool first) to understand which tools are available and their strengths. It provides clear context for initial exploration and implies alternatives are the other tools listed, making it highly actionable for an AI agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
build_source_url (A)
Builds the canonical URL of a document from its identifier.
Useful for manually checking sources on the official site (Légifrance,
Conseil constitutionnel, EUR-Lex, HUDOC, etc.) or for including a
clickable link in a letter.
Recognized identifiers:
- `LEGIARTI*` → Légifrance article (pass the parent text's `legitext`
to distinguish a code (/codes/) from an uncodified law (/loda/))
- `LEGITEXT*` / `JORFTEXT*` → full Légifrance text
- `JURITEXT*` / `CONSTEXT*` / `CETATEXT*` → Légifrance decisions
- CELEX (`6XXXXCJXXXX`) → EUR-Lex (CJUE)
- `ECLI:*` → EUR-Lex deeplink
- HUDOC itemid (`001-XXXXXX`) → European Court of Human Rights
- ArianeWeb (`/Ariane_Web/AW_DCE/|XXXXXX`) → conseil-etat.fr
Args:
identifier: the ID to convert
legitext: (optional) LEGITEXT of the parent text if `identifier` is a
LEGIARTI; improves URL precision (codes/ vs loda/)
date: (optional, YYYY-MM-DD) appended to the Légifrance URL to point to
the version of the article in force on that date
(e.g. `/loda/article_lc/LEGIARTI.../2023-01-01`). Essential for checking
the state of the law at a historical date; otherwise Légifrance shows the
current version even if the article has since been repealed.
Returns:
`{"id", "source_url"}` or `{"error"}` if the format is not recognized.
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | ||
| legitext | No | ||
| identifier | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
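As a rough illustration of how these arguments fit together, the sketch below assembles the argument payload for a `build_source_url` call. The identifiers are placeholders and the helper function is hypothetical; only the parameter names (`identifier`, `legitext`, `date`) come from the tool definition above.

```python
def build_source_url_args(identifier: str, legitext: str | None = None,
                          date: str | None = None) -> dict:
    """Assemble the arguments payload; optional keys are omitted when unset."""
    args: dict = {"identifier": identifier}
    if legitext:
        args["legitext"] = legitext  # disambiguates /codes/ vs /loda/ for LEGIARTI ids
    if date:
        args["date"] = date          # YYYY-MM-DD, pins the version in force on that date
    return args

payload = build_source_url_args(
    "LEGIARTI000000000000",           # placeholder article id, format LEGIARTI*
    legitext="LEGITEXT000000000000",  # placeholder parent text id
    date="2023-01-01",
)
print(payload)
```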
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well. It discloses the tool's behavior: mapping specific identifier patterns to URLs, handling optional parameters for precision, and returning either a successful result or error object. It doesn't mention rate limits or authentication needs, but covers core functionality adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured and appropriately sized. Front-loaded with purpose statement, followed by usage context, detailed identifier examples, parameter explanations, and return format. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a URL construction tool. With output schema present, the description appropriately focuses on input parameters and behavior rather than return values. It covers all necessary context: purpose, usage, parameter semantics, identifier mapping rules, and error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by explaining both parameters thoroughly. It defines 'identifier' as the ID to convert with extensive examples of recognized formats, and explains 'legitext' as optional parent text for LEGIARTI identifiers to improve URL precision between codes vs non-codified laws.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: constructing canonical URLs from document identifiers for specific legal sources. It uses specific verbs ('construit', 'vérifier', 'inclure') and distinguishes itself from sibling tools by focusing on URL generation rather than document retrieval or search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool: for verifying sources manually on official sites or including clickable links in correspondence. It distinguishes from sibling tools by not retrieving document content but generating URLs, and provides clear examples of identifier types it handles.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_admin_decision (A)
Retrieves an administrative decision by its exact request number.
Covers all jurisdictions: Conseil d'État, administrative courts of appeal
(CAA), administrative tribunals (TA). Uses an exact SQL lookup on the
`numero` field (no FTS5, no false positives).
⚠️ **Disambiguation is essential**: the same 7-digit number (e.g. 2200433)
is shared by 24+ different administrative tribunals (each TA has its own
yearly series that restarts at 1). Without `juridiction`, you get a random
homonym out of 24, which is often not the right one. **If you know which
jurisdiction issued the decision, ALWAYS pass it.**
Args:
numero: request number (e.g. "2200433", "2116343", "497566")
juridiction: jurisdiction identifier. **Recommended for any 7-digit
number** (coded TA/CAA). Two accepted formats (automatic bidirectional
mapping):
- **Short code** (recommended for LLMs): "TA69" (Lyon), "TA75" (Paris),
"CAA69", "CE", "CE-CAA"
- **Long name**: "Tribunal Administratif de Lyon", "Conseil d'Etat"
(with or without accents), case-insensitive match
Note: "Lyon" alone is ambiguous (TA Lyon or CAA Lyon); prefer the short
code or the full name to avoid the collision.
Returns:
Decision with metadata (id, juridiction, numero, date, titre), or
`{"error": "introuvable"}` if nothing matches in JADE.
Examples:
get_admin_decision("2200433", juridiction="Tribunal Administratif de Lyon")
→ DTA_2200433_20230214 (TA Lyon, 14 Feb 2023, RSA granted by derogation)
get_admin_decision("473286")  # CE numbers have no duplicates, juridiction unnecessary
→ DCE_473286_20231123 (CE, non-admission of the appeal against the previous decision)
| Name | Required | Description | Default |
|---|---|---|---|
| numero | Yes | ||
| juridiction | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
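The disambiguation rule above lends itself to a small client-side guard. The sketch below is illustrative only; `admin_decision_args` is a hypothetical helper that merely assembles the tool's arguments.

```python
def admin_decision_args(numero: str, juridiction: str | None = None) -> dict:
    # Per the description, a bare 7-digit numero collides across 24+ tribunals.
    if juridiction is None and len(numero) == 7 and numero.isdigit():
        raise ValueError("Ambiguous 7-digit numero: pass juridiction, e.g. 'TA69' or 'CE'")
    args = {"numero": numero}
    if juridiction is not None:
        args["juridiction"] = juridiction
    return args

print(admin_decision_args("2200433", juridiction="TA69"))  # TA Lyon, short code
print(admin_decision_args("473286"))                        # Conseil d'État: no duplicates
```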
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It explains the exact SQL lookup (no FTS5, no false positives) and describes the return format including error response. Could mention authentication requirements or rate limits, but overall good transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured docstring: one-line summary, then usage context, then Args and Returns. Every sentence adds value, no fluff. Front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool (2 params, exact lookup) and existence of an output schema, the description covers purpose, parameters, return fields, and jurisdiction scope. Could optionally mention uniqueness guarantee or more output schema details, but sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, but description adds rich meaning: examples for numero ('2116343') and jurisdiction ('Conseil d'Etat') and clarifies that leaving jurisdiction empty searches all. This fully compensates for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the verb 'Récupère' (retrieve), the resource 'décision administrative', and the method 'par son numéro de requête exact'. It distinguishes from siblings like search_admin by highlighting the exact lookup and better reliability for short numbers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Clear guidance: 'À utiliser quand on connaît le numéro précis' with an example. Explicitly contrasts with search_admin for full-text cases, effectively telling when to use this tool vs alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_cc_decision (A)
Retrieves a Conseil constitutionnel decision by its number.
Expected format: "AA-NNN NATURE" or just "AA-NNN" (e.g. "79-105 DC",
"2020-800 DC", "2023-1048 QPC"). Full-text search on the number plus a
juridiction="Conseil constitutionnel" filter in judiciaire.db.
Args:
numero: CC decision number (e.g. "79-105 DC")
nature: optional filter (QPC, DC, L, etc.); see search_cc
Returns:
`{id, titre, date, juridiction, nature, ecli, text}` or None.
| Name | Required | Description | Default |
|---|---|---|---|
| nature | No | ||
| numero | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
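If a caller starts from a combined reference such as "2023-1048 QPC", one way to split it into the tool's `numero` and optional `nature` arguments is sketched below. The splitting helper is hypothetical; per the description, the combined string can also be passed directly as `numero`.

```python
import re

def cc_decision_args(reference: str) -> dict:
    # Split "AA-NNN NATURE" into numero + optional nature filter.
    m = re.match(r"^\s*(\d{2,4}-\d+)\s*([A-Z]+)?\s*$", reference)
    if not m:
        raise ValueError(f"Unrecognized CC reference: {reference!r}")
    args = {"numero": m.group(1)}
    if m.group(2):
        args["nature"] = m.group(2)  # e.g. QPC, DC, L
    return args

print(cc_decision_args("79-105 DC"))      # {'numero': '79-105', 'nature': 'DC'}
print(cc_decision_args("2023-1048 QPC"))  # {'numero': '2023-1048', 'nature': 'QPC'}
```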
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by explaining the search behavior ('Recherche full-text sur le numéro + filtre juridiction="Conseil constitutionnel"'), data source ('dans judiciaire.db'), and return format (including the possibility of 'None'). It does not mention error handling or performance limits, but covers key operational aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by format details, behavior, parameters, and return value. Every sentence adds value without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no annotations, but has output schema), the description is complete. It explains the purpose, usage, behavior, parameters, and return format, with the output schema handling return value details. No significant gaps remain for effective tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds significant meaning beyond the schema by explaining the expected format for 'numero' (with examples like '79-105 DC'), clarifying that 'nature' is optional with examples (QPC, DC, L), and referencing search_cc for more details. This effectively documents both parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Récupère une décision du Conseil constitutionnel') and resource ('par son numéro'), distinguishing it from sibling tools like search_cc by focusing on retrieval by identifier rather than broader search. The French phrasing is precise and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (retrieval by decision number) and references an alternative ('cf search_cc'), but does not explicitly state when not to use it or compare it to other sibling tools like get_decision_judiciaire. The guidance is helpful but not exhaustive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ce_decision (A)
Retrieves a Conseil d'État decision by its appeal (pourvoi) number.
Tries the DILA JADE bulk data first (exact SQL lookup), then, if not
found, tries ArianeWeb Sinequa; the two databases have complementary
coverage.
To retrieve a decision from a DCE_* identifier, use `get_decision_text`
instead.
Args:
numero: appeal number (e.g. "497566", "358109")
Returns:
Decision with metadata, or None if not found in either database.
| Name | Required | Description | Default |
|---|---|---|---|
| numero | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
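The routing advice in the description (pourvoi numbers here, `DCE_*` identifiers to `get_decision_text`) can be expressed as a small dispatcher. This is an illustrative sketch, not part of the server.

```python
def route_ce_lookup(identifier: str) -> tuple[str, dict]:
    # DCE_* ids come from administrative search results and belong to
    # get_decision_text; bare pourvoi numbers go to get_ce_decision.
    if identifier.startswith("DCE_"):
        return "get_decision_text", {"decision_id": identifier}
    return "get_ce_decision", {"numero": identifier}

print(route_ce_lookup("497566"))               # ('get_ce_decision', {'numero': '497566'})
print(route_ce_lookup("DCE_473286_20231123"))  # ('get_decision_text', {...})
```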
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it performs a query in a specific database ('jade.db'), filters by jurisdiction ('Conseil d'État'), returns either the complete decision with full text or None if not found, and specifies the exact input format. However, it doesn't mention potential limitations like rate limits, authentication needs, or error handling beyond 'None si introuvable'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and concise. It starts with the core purpose, immediately provides a concrete example, explains the data source and filter, gives clear usage guidance with an alternative, and ends with parameter and return value documentation. Every sentence earns its place with no wasted words, and information is front-loaded effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter query), no annotations, but with an output schema present, the description is complete enough. It covers purpose, usage guidelines, parameter semantics with examples, behavioral context (database source, jurisdiction filter, return behavior), and explicitly references the alternative tool. The output schema handles return value details, so the description appropriately focuses on contextual information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must fully compensate. It does so excellently: it explains the 'numero' parameter as the 'numéro de pourvoi', provides a concrete example ('497566'), specifies the required format ('format numérique pur sans séparateur'), and clarifies what this identifier represents versus alternatives (DCE_* identifiers). This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Récupère') and resource ('une décision du Conseil d'État'), and distinguishes it from sibling tools by specifying it's for decisions by 'numéro de pourvoi' rather than DCE_* identifiers. It explicitly mentions the jurisdiction filter ('Conseil d'État') and data source ('jade.db'), making the scope unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool (for decisions by 'numéro de pourvoi') and when to use an alternative ('get_decision_text' for DCE_* identifiers). It also clarifies the expected input format ('format numérique pur sans séparateur') and the jurisdiction context, giving clear boundaries for appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_decision_cedh (B)
Extracts the full text of a European Court of Human Rights decision from its system identifier (HUDOC itemid).
Args:
decision_id: HUDOC itemid (e.g. "001-249914")
| Name | Required | Description | Default |
|---|---|---|---|
| decision_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states it retrieves full text, implying a read-only operation, but doesn't disclose behavioral traits like authentication needs, rate limits, error handling, or response format. For a tool with no annotations, this leaves significant gaps in understanding how it behaves beyond the basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by a concise Args section with an example. Every sentence adds value without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values) and low complexity (single parameter, no nested objects), the description is somewhat complete but lacks details on behavioral aspects like authentication or errors. With no annotations, it should do more to compensate, but the output schema reduces the need for return value explanation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context for the single parameter 'decision_id', explaining it's a HUDOC itemid with an example ('001-249914'), which clarifies the expected format beyond the schema's minimal title. With 0% schema description coverage and only one parameter, this compensates well, though it could note constraints like valid patterns.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Récupère' (retrieves) and the resource 'texte intégral d'une décision CEDH' (full text of a CEDH decision), specifying it works from a HUDOC itemid. It distinguishes from siblings like search_cedh (which searches) and get_decision_cjue (which retrieves different court decisions). However, it doesn't explicitly contrast with get_decision_text, which might retrieve different types of decisions, leaving slight ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you have a specific HUDOC itemid for a CEDH decision, but it doesn't explicitly state when to use this vs. alternatives like search_cedh (for finding decisions without an ID) or get_decision_cjue (for different court decisions). It provides context (using HUDOC itemid) but lacks explicit exclusions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_decision_cjue (A)
Extracts the full text of a Court of Justice of the European Union decision from its normalized identifier (CELEX).
Args:
decision_id: CELEX identifier (e.g. "62024CJ0072") or ECLI
| Name | Required | Description | Default |
|---|---|---|---|
| decision_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the core behavior (retrieving full text from an identifier) and gives format examples for the ID, but doesn't mention error handling, authentication needs, rate limits, or what happens with invalid IDs. It provides basic operational context but lacks deeper behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly front-loaded with the core purpose in the first sentence, followed by a concise parameter explanation with an example. Every sentence earns its place with no wasted words, making it highly efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter), no annotations, but an output schema exists (so return values are documented elsewhere), the description is reasonably complete. It covers the purpose, parameter semantics, and usage context. However, for a retrieval tool with no annotations, it could benefit from mentioning potential errors or response formats to be fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage (no parameter descriptions in schema), so the description must compensate. It adds crucial meaning: it explains that decision_id is a CELEX or ECLI identifier and provides an example ('62024CJ0072'). This clarifies the parameter's purpose and format beyond the bare schema, though it could specify if both CELEX and ECLI are always accepted.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Récupère le texte intégral' - retrieves the full text) and resource ('d'une décision CJUE' - of a CJUE decision), distinguishing it from siblings like search_cjue (which searches) and get_decision_text (which might handle different decision types). It precisely identifies what the tool does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use it: when you have a CELEX or ECLI identifier and need the full text of a CJUE decision. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the many sibling tools (e.g., search_cjue for searching without an ID).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_decision_judiciaire (C)
Extracts the full text of a judicial decision via the restricted PISTE API (OAuth2 authentication required).
Always use `get_decision_judiciaire_libre` instead whenever the decision
appears in the DILA open archives.
This tool does not work for decisions of the administrative order
(formats `DCE_*`, `DTA_*`, `DCAA_*`, `/Ariane_Web/...`).
Args:
decision_id: Judilibre identifier of the decision
session_token: temporary justicelibre token (recommended)
client_id: PISTE Client ID (alternative)
client_secret: PISTE Client Secret (alternative)
| Name | Required | Description | Default |
|---|---|---|---|
| client_id | No | ||
| decision_id | Yes | ||
| client_secret | No | ||
| session_token | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
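The two authentication paths (temporary `session_token`, or `client_id` plus `client_secret`) could be handled with a guard like the one below. The helper and the placeholder values are hypothetical; only the parameter names come from the tool definition.

```python
def judiciaire_args(decision_id: str,
                    session_token: str | None = None,
                    client_id: str | None = None,
                    client_secret: str | None = None) -> dict:
    args = {"decision_id": decision_id}
    if session_token:
        args["session_token"] = session_token  # recommended path
    elif client_id and client_secret:
        args["client_id"] = client_id          # alternative: raw PISTE credentials
        args["client_secret"] = client_secret
    else:
        raise ValueError("Provide session_token, or both client_id and client_secret")
    return args

print(judiciaire_args("<judilibre-id>", session_token="<temporary-token>"))
```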
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions authentication requirements (session_token or client_id/client_secret) which is useful, but doesn't describe other behavioral traits like rate limits, error handling, response format, or whether it's a read-only operation. The description is minimal and leaves significant behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with a clear purpose statement followed by parameter explanations. The structure is front-loaded with the main purpose. However, the parameter explanations could be more efficiently integrated, and there's some redundancy in describing authentication alternatives.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), 4 parameters with 0% schema coverage, and no annotations, the description provides basic purpose and parameter semantics but lacks usage guidance and comprehensive behavioral context. It's minimally adequate but has clear gaps for a tool with authentication requirements and multiple parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides brief explanations for all 4 parameters (decision_id, session_token, client_id, client_secret), adding meaning beyond the schema's titles. However, the explanations are minimal (e.g., 'identifiant de la décision Judilibre' doesn't explain format or source), and it doesn't clarify the relationship between authentication parameters or why some are alternatives.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Récupère' (retrieves) and the resource 'texte intégral d'une décision judiciaire' (full text of a judicial decision). It specifies what the tool does but doesn't explicitly differentiate from sibling tools like get_decision_judiciaire_libre or get_decision_text, which likely have similar purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like get_decision_judiciaire_libre or get_decision_text. It mentions authentication alternatives (session_token vs client_id/client_secret) but doesn't explain the context for choosing between them or when to use this tool over other search tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_decision_judiciaire_libre (A)
Extracts the full text of a judicial decision from the independent index (no authentication required).
Accepts only open judicial identifiers (formats `JURITEXT*`, `CONSTEXT*`,
`JURI*`), as returned by `search_judiciaire_libre` (examples:
`"JURITEXT000042579700"`, `"CONSTEXT000049574021"`).
This tool does not work for decisions of the administrative order
(formats `DCE_*`, `DTA_*`, `DCAA_*`, `/Ariane_Web/...`).
Args:
decision_id: JURITEXT/JURI/CONSTEXT identifier of the decision
| Name | Required | Description | Default |
|---|---|---|---|
| decision_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
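A caller may want to verify an identifier before invoking this tool, since only `JURITEXT*`, `CONSTEXT*`, and `JURI*` identifiers are accepted. The check below is an illustrative sketch based on the prefixes listed in the description.

```python
FREE_PREFIXES = ("JURITEXT", "CONSTEXT", "JURI")
ADMIN_PREFIXES = ("DCE_", "DTA_", "DCAA_", "/Ariane_Web/")

def accepts_free_judicial_id(decision_id: str) -> bool:
    if decision_id.startswith(ADMIN_PREFIXES):
        return False  # administrative order: route to get_decision_text / search_admin
    return decision_id.startswith(FREE_PREFIXES)

print(accepts_free_judicial_id("JURITEXT000042579700"))  # True
print(accepts_free_judicial_id("DCE_473286_20231123"))   # False
```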
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses key behavioral traits: no authentication required ('SANS authentification'), the tool retrieves full text ('texte intégral'), and it expects a specific ID format (JURITEXT). However, it doesn't mention rate limits, error conditions, or response format details, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly front-loaded with the core purpose in the first sentence. The second sentence clarifies the ID source and format. The Args section is brief and directly relevant. Every sentence earns its place with zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter retrieval tool with an output schema (which handles return values), the description is nearly complete. It covers purpose, usage, parameter semantics, and key behavioral aspects. The main gap is lack of error handling or performance characteristics, but given the output schema exists, this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It clearly explains the single parameter 'decision_id': identifies it as a JURITEXT ID, provides an example format ('JURITEXT000042579700'), and specifies it comes from search_judiciaire_libre. This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Récupère le texte intégral' - retrieves full text) and resource ('d'une décision judiciaire' - of a judicial decision), distinguishing it from siblings like search_judiciaire_libre (which finds decisions) and get_decision_text (which might retrieve partial text). The mention of 'SANS authentification' further clarifies the access method.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided: use this tool to get full text of a decision identified by an ID from search_judiciaire_libre. The description names the sibling tool (search_judiciaire_libre) as the source of the ID, creating a clear workflow dependency and distinguishing when to use this versus other search or retrieval tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_decision_text (A)
Extracts the full text of a decision of the administrative order (Conseil d'État, TA, CAA).
Strictly reserved for the normalized identifiers returned by the
administrative searches: `DCE_XXX_YYYYMMDD` (Conseil d'État),
`DTA_XXX_YYYYMMDD` (TA), `DCAA_XXX_YYYYMMDD` (CAA).
MAJOR INCOMPATIBILITIES:
- ArianeWeb identifiers `/Ariane_Web/AW_DCE/|XXXXXX`: re-index via
`search_admin` to obtain a compatible identifier.
- JURITEXT identifiers: redirect to `get_decision_judiciaire_libre`
or `get_decision_judiciaire`.
- CELEX identifiers `6XXXXCJXXXX`: redirect to `get_decision_cjue`.
- HUDOC identifiers `001-XXXXXX`: redirect to `get_decision_cedh`.
Args:
decision_id: decision identifier (with or without the .xml suffix)
Returns:
Dict containing the full metadata, `text_segments` (list of paragraphs)
and `full_text` (joined full text), or None if the decision cannot be
found.
| Name | Required | Description | Default |
|---|---|---|---|
| decision_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
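The incompatibility list above effectively defines a routing table. A hedged sketch of that dispatch logic, using only the identifier patterns and tool names quoted in the description, might look like this:

```python
import re

def route_decision_id(decision_id: str) -> str:
    if re.match(r"^(DCE|DTA|DCAA)_", decision_id):
        return "get_decision_text"
    if decision_id.startswith(("JURITEXT", "CONSTEXT", "JURI")):
        return "get_decision_judiciaire_libre"
    if re.match(r"^6\d{4}[A-Z]{2}\d{4}$", decision_id):  # CELEX, e.g. 62024CJ0072
        return "get_decision_cjue"
    if re.match(r"^001-\d+$", decision_id):              # HUDOC itemid
        return "get_decision_cedh"
    if decision_id.startswith("/Ariane_Web/"):
        return "search_admin"  # re-index first to obtain a compatible DCE_/DTA_/DCAA_ id
    raise ValueError(f"Unrecognized identifier: {decision_id!r}")

print(route_decision_id("DTA_2200433_20230214"))  # get_decision_text
print(route_decision_id("001-249914"))            # get_decision_cedh
```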
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does this well by explaining what the tool returns (metadata, text segments, full text), the None return case for non-existent decisions, and the structure of decision IDs with examples. It also clarifies that decisions include specific legal components (moyens, visas, considérants, dispositif).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear purpose statement, parameter explanation, and return value description. Every sentence adds value, and the information is front-loaded with the most important details first. The bilingual nature (French with English terms) is handled efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema, the description appropriately focuses on explaining what the tool does rather than detailing return values. It provides complete context about the tool's purpose, parameter usage, behavioral characteristics, and relationship to sibling tools, making it fully adequate for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing rich parameter semantics. It explains what decision_id represents, shows example formats for different jurisdictions, clarifies it can be used with or without .xml suffix, and references where to obtain these IDs from sibling search tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Récupère le texte intégral' - retrieves full text) and resource ('d'une décision' - of a decision) using a precise verb. It distinguishes this tool from sibling search tools by focusing on retrieving full decision text rather than searching for decisions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (to get full text of a decision using its ID) and references sibling search tools as the source for obtaining decision IDs. However, it doesn't explicitly state when NOT to use this tool or provide alternatives for similar operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_law_article (A)
Returns the text of a statutory article as of a given date (or the
current version if the date is empty).
justicelibre specificity: when a 1992 decision cites article 1128 of the
Code civil, that article was completely rewritten in 2016. With this tool
you retrieve the text **as it existed in 1992** (the old Napoleonic
version), not the current text.
Supported codes (22): CC, CP, CPC, CPP, CT, CSP, CJA, CGCT, CRPA,
CPI, CASF, CMF, C.com, C.cons, C.éduc, CU, C.env, CR, CGI, CESEDA,
CSS, CCH.
Args:
code: short code (e.g. "CC" for Code civil, "CT" for Code du travail)
num: article number (e.g. "1128", "L1152-1", "132-1")
date: ISO date YYYY-MM-DD (optional; if absent, the version currently in
force). Use the date of the citing decision to obtain the version
contemporary with the citation.
Returns:
dict with `legiarti`, `num`, `code`, `texte`, `etat`
(VIGUEUR/MODIFIE/ABROGE), `date_debut`, `date_fin`, `nota`. Plus a `note`
field if the returned version is not the one requested.
| Name | Required | Description | Default |
|---|---|---|---|
| num | Yes | ||
| code | Yes | ||
| date | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
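To retrieve the version of an article contemporary with a citing decision, the `date` argument carries the decision's date. The sketch below only assembles the argument payload; the date shown is an arbitrary illustrative day in 1992, echoing the article 1128 example above.

```python
def law_article_args(code: str, num: str, decision_date: str | None = None) -> dict:
    args = {"code": code, "num": num}
    if decision_date:
        args["date"] = decision_date  # ISO YYYY-MM-DD: version in force on that date
    return args

# Old (pre-2016) wording of article 1128, as a 1992 decision would have cited it:
print(law_article_args("CC", "1128", "1992-06-15"))
# Current wording:
print(law_article_args("CC", "1128"))
```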
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well. It explains the key behavioral trait: retrieving historical versions rather than current law (the 'Particularité justicelibre'). It also mentions the 22 supported codes and what happens when date is empty (current version). It doesn't cover rate limits, authentication needs, or error conditions, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose. The 'Particularité justicelibre' paragraph adds crucial context. The 'Args' and 'Returns' sections are well-structured but could be more integrated. Every sentence earns its place, though the French/English mix slightly affects readability for English-only agents.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (historical law retrieval), no annotations, 0% schema coverage, but with an output schema, the description is remarkably complete. It explains the purpose, historical context, parameters with examples, supported codes, and return structure. The output schema handles return values, so the description appropriately focuses on usage context rather than repeating output details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds significant meaning beyond the 0% schema coverage. It explains what each parameter represents with examples: 'code' as short codes for different legal codes, 'num' as article numbers with format examples, and 'date' as ISO format with guidance on using the citing decision's date. The 'Args' section provides concrete usage guidance that the schema titles ('Num', 'Code', 'Date') lack entirely.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Renvoie le texte d'un article de loi à une date donnée' (Returns the text of a law article at a given date). It specifies the exact action (return text) and resource (law article), and distinguishes itself from potential siblings by explaining the 'Particularité justicelibre' about historical vs current versions, which is unique among law-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to retrieve historical versions of law articles when analyzing past legal decisions. It explains that using the decision's date gives the contemporary version. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools, though the historical focus implies it's not for current law lookups.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_law_versions (A)
Returns all historical versions of a statutory article, from oldest to most recent.
Useful for building a "timeline" of the article and understanding its
evolution (e.g. an article amended in 1964, 1994, and 2016 will have 3-4
rows with distinct `date_debut`, `date_fin`, `etat`, and `texte`).
Args:
code: short code (see get_law_article for the list of 22 codes)
num: article number
Returns:
dict with `code`, `code_long`, `num`, `count`, `versions`
(list ordered by ascending `date_debut`).
| Name | Required | Description | Default |
|---|---|---|---|
| num | Yes | ||
| code | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
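A caller could turn the documented return shape into a readable timeline as sketched below. The sample dictionary is a placeholder shaped like the documented fields, not real data.

```python
def print_timeline(result: dict) -> None:
    # `versions` is documented as ordered by ascending date_debut.
    for v in result.get("versions", []):
        end = v.get("date_fin") or "present"
        print(f'{v["date_debut"]} -> {end}  [{v["etat"]}]')

# Placeholder result mimicking the documented fields:
sample = {
    "code": "CC", "code_long": "Code civil", "num": "1128", "count": 2,
    "versions": [
        {"date_debut": "1804-03-21", "date_fin": "2016-10-01", "etat": "ABROGE"},
        {"date_debut": "2016-10-01", "date_fin": None, "etat": "VIGUEUR"},
    ],
}
print_timeline(sample)
```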
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by explaining the chronological ordering, return format structure, and practical use case. It doesn't mention rate limits, authentication needs, or error conditions, but provides substantial behavioral context for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured with purpose statement, usage context, example, parameter explanations, and return format - all in 4 focused sentences. Every sentence adds value and the information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (historical version retrieval), no annotations, and the presence of an output schema, the description provides complete context: purpose, usage guidance, parameter semantics, return structure, and ordering behavior. The output schema handles return value details, so the description focuses on what's needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by explaining both parameters: 'code' is described as a short code with reference to sibling tool for valid values, and 'num' is clearly identified as the article number. The description adds essential meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('renvoie toutes les versions historiques') and resources ('article de loi'), distinguishing it from siblings like get_law_article. It explicitly explains what historical versions are returned and their ordering.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('utile pour construire une timeline...'), references a sibling tool for context ('voir get_law_article pour la liste des 22 codes'), and includes a concrete example of when it would return multiple versions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_juridictions (A)
Exhaustive reference list of jurisdiction codes.
Returns the 51 covered courts (Conseil d'État, 9 CAA, 40 TA, including
the overseas jurisdictions) together with their canonical names.
Always consult this list to determine the exact code to pass to the
`search_admin` tool.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it lists all accepted jurisdiction codes, specifies the return count (51 jurisdictions), details the composition (Conseil d'État, 9 CAA, 40 TA including 9 overseas), and mentions the canonical name format. However, it lacks information on potential rate limits or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by specific details and usage guidance in two additional sentences. Every sentence adds value: the first states what it does, the second details the output, and the third explains when to use it, with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, output schema exists), the description is complete: it explains the purpose, output details (51 jurisdictions with types and names), and usage context relative to sibling tools. The existence of an output schema means return values need not be explained, and the description covers all necessary aspects effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline is 4. The description appropriately adds no parameter-specific information, as none are needed, maintaining clarity without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Liste tous les codes de juridiction') and resource ('juridictions acceptés par `search_juridiction`'), distinguishing it from sibling tools like search_juridiction by focusing on listing codes rather than searching. It explicitly mentions the scope (51 jurisdictions including Conseil d'État, CAA, TA).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('consult this list to determine the exact code to pass to `search_admin`') and positions it relative to the sibling tool `search_admin` by explaining its role as the source of the codes that search tool expects, with the alternative clearly implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_law_numberAInspect
Resolves a law/ordinance/decree number to its Légifrance LEGITEXT or JORFTEXT identifier.
Useful for non-codified texts (laws, ordinances, decrees) that are not
in the whitelist of 25 short codes (CC, CP, LIL, LO58, etc.).
Once the LEGITEXT/JORFTEXT is resolved, it can be used with
`get_law_article(code=<LEGITEXT>, num=<N>)` to retrieve a specific
article.
Examples:
- `resolve_law_number("68-1250")` → law on the four-year limitation of
  claims against public bodies (JORFTEXT000000878035)
- `resolve_law_number("79-587")` → law on the statement of reasons for administrative acts
- `resolve_law_number("2000-321")` → law on citizens' rights in their dealings with the administration
Args:
    numero: format "YY-NNNN" or "YYYY-NNNN" (e.g. "68-1250", "2000-321")
Returns:
    `{numero, legitext, titre_section, date_debut, articles_count, source_url}`
    or `{error}` if not found.
| Name | Required | Description | Default |
|---|---|---|---|
| numero | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
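A hedged sketch of the two-step flow the description implies: resolve a non-codified law number, then pass the resolved identifier to `get_law_article`. The law number and identifier reuse the documented example; the article number is illustrative.

```python
import json

# Arguments for resolve_law_number; "68-1250" is the documented example.
resolve_args = {"numero": "68-1250"}

# Hypothetical resolved identifier, taken from the example in the description.
legitext = "JORFTEXT000000878035"

# Follow-up sketched from the description: fetch one article of the resolved
# text via get_law_article (the article number here is illustrative).
follow_up_args = {"code": legitext, "num": "1"}

print(json.dumps({"resolve_law_number": resolve_args,
                  "get_law_article": follow_up_args}, indent=2))
```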
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it resolves a law number to an identifier, provides examples of inputs and outputs, specifies the return structure including success and error cases (`{error}` if not found), and mentions the source (`Légifrance`). However, it doesn't cover potential rate limits, authentication needs, or detailed error handling beyond the basic error return.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by usage context, examples, and parameter/return details. Every sentence adds value—no wasted words—and it's structured logically from general to specific, making it easy to scan and understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (1 parameter, no annotations, but with an output schema), the description is complete enough. It covers purpose, usage, examples, parameter semantics, and return values. The output schema exists, so the description doesn't need to explain return types in detail, but it still summarizes the success and error structures adequately for context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, so the description must compensate. It adds significant meaning: it explains the `numero` parameter as a law/ordinance/decree number in format 'YY-NNNN' or 'YYYY-NNNN', provides examples ('68-1250', '2000-321'), and clarifies it's for non-codified texts. This goes well beyond the schema's basic string type, though it doesn't detail validation rules or edge cases.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: resolving a law/ordinance/decree number to its Légifrance LEGITEXT or JORFTEXT identifier. It specifies the verb ('resolve'), the resource ('law/ordinance/decree number'), and the target ('LEGITEXT/JORFTEXT identifier'), and distinguishes it from sibling tools by noting it is for non-codified texts outside the whitelist of 25 short codes handled by get_law_article.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: for non-codified texts (laws, ordinances, decrees) that are not in the whitelist of 25 short codes. It also names an alternative usage pattern: once resolved, the identifier can be used with `get_law_article`. This clearly differentiates it from sibling tools that handle codified law or other legal searches.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_adminAInspect
BM25 relevance-ranked search over the complete administrative case law (Conseil d'État + 9 CAA + 40 TA).
Source: JADE DILA bulk (~4M full-text decisions). Unlike the
`search_admin_recent*` tools, which sort by date, this one ranks results
by semantic keyword relevance. Essential for finding THE right decisions
on a topic without depending on recency.
⚠️ **If you are searching by application number (7 digits, e.g. 2200433)**,
use `get_admin_decision(numero, juridiction=...)` instead, which performs
an exact SQL lookup. An FTS5 search for a short number only finds it in
decisions that **cite** it in their text (e.g. a cassation ruling), not
the decision identified by that number.
Args:
    query: keywords (FTS5 operators: AND/OR/NOT, "exact phrase", word*)
    juridiction: filter by jurisdiction-name fragment. E.g.
        "Lyon" → all Lyon decisions (TA + CAA), "Tribunal
        Administratif de Lyon" → TA Lyon only. Combined with the main
        query as an FTS5 AND.
    sort: "relevance" (default, BM25) or "date_desc" / "date_asc"
    date_min: lower bound, ISO YYYY-MM-DD (optional)
    date_max: upper bound, ISO YYYY-MM-DD (optional)
    limit: number of results (default 20, max 50)
    offset: pagination
Returns:
    {"total", "returned", "decisions": [...]} with BM25 extracts.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | | relevance |
| limit | No | | |
| query | Yes | | |
| offset | No | | |
| date_max | No | | |
| date_min | No | | |
| juridiction | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
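A hedged sketch of a relevance-ranked query as described in the Args block; the keyword string, jurisdiction fragment, and date bound are illustrative values, not tested queries.

```python
# Illustrative search_admin arguments; only the parameter names and the
# FTS5 operator syntax come from the tool definition above.
arguments = {
    "query": '"harcèlement moral" AND protection fonctionnelle',
    "juridiction": "Lyon",    # name fragment, matches both TA and CAA Lyon
    "sort": "relevance",      # default BM25 ranking
    "date_min": "2020-01-01",
    "limit": 20,
}
# Expected response keys per the description:
# {"total", "returned", "decisions": [...]} with BM25 extracts.
```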
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does an excellent job disclosing behavioral traits. It explains the search methodology (BM25 semantic relevance ranking), data source characteristics (4M full-text decisions from JADE DILA), jurisdiction scope (Conseil d'État + 9 CAA + 40 TA), and return format with BM25 extracts. It doesn't mention rate limits or authentication requirements, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with purpose first, sibling differentiation second, parameter explanations in a clear Args/Returns format, and no wasted sentences. Every section adds value: the French text establishes context, the sibling comparison provides guidance, and the parameter details are essential given the lack of schema descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex search tool with 7 parameters, 0% schema description coverage, no annotations, but with output schema, the description provides complete context. It covers purpose, differentiation, behavioral characteristics, detailed parameter semantics, and return format. The output schema handles return value structure, so the description appropriately focuses on operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing detailed semantic explanations for all 7 parameters. It explains query syntax (FTS5 operators), jurisdiction filtering codes, sort options with defaults, date formats, and pagination behavior. The description adds significant value beyond the bare schema, making parameter purposes and usage clear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a BM25 relevance-ranked search over the complete administrative case law, specifying the exact resource (4M decisions from JADE DILA) and distinguishing it from sibling tools like 'search_admin_recent*' that sort by date rather than relevance. This provides a specific verb+resource combination with explicit sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool versus alternatives: it is essential for finding the right decisions on a topic without depending on recency, and it directly contrasts with the 'search_admin_recent*' tools that sort by date. This provides clear guidance on when this tool is preferred over its siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_admin_recentAInspect
Recent administrative decisions sorted chronologically (live API).
Recency first: sorted by reading date in descending order, not by
relevance. Useful for the recent activity of a jurisdiction but NOT for
finding the relevant case law on a topic; for that, use
`search_admin` (JADE bulk with BM25 ranking).
Scope: CE + 9 CAA + 40 TA (including the overseas courts), since ~2022.
The generated identifiers (formats `DCE_*`, `DTA_*`, `DCAA_*`) are
natively compatible with the `get_decision_text` tool.
Args:
    query: search keywords
    juridiction: jurisdiction code. Examples:
        - "CE": Conseil d'État
        - "CE-CAA": Conseil d'État + administrative courts of appeal
        - "TA69": Tribunal administratif de Lyon
        - "TA75": Tribunal administratif de Paris
        - "CAA69": Cour administrative d'appel de Lyon
        The bare codes "TA" or "CAA" return an empty result;
        a specific code is required. See `list_juridictions` for the
        complete nomenclature.
    limit: maximum number of results (default 20)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| juridiction | No | | CE |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
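A hedged sketch of a recency query for one specific court, using a documented jurisdiction code; the keyword value is illustrative.

```python
# Illustrative search_admin_recent arguments; "TA69" is one of the
# documented jurisdiction codes (Tribunal administratif de Lyon).
arguments = {
    "query": "permis de construire",
    "juridiction": "TA69",   # a bare "TA" or "CAA" would return nothing
    "limit": 10,
}
# Returned identifiers (DCE_*/DTA_*/DCAA_*) can be passed directly to
# get_decision_text, as the description notes.
```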
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden and does well: it explains sorting (chronological, not by relevance), scope (jurisdictions and time range), and output compatibility with 'get_decision_text'. It could improve by mentioning rate limits or authentication needs, but covers key behavioral aspects clearly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured and front-loaded: starts with core purpose, then usage guidelines, scope, and parameter details. Every sentence adds value, with no wasted words, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search with jurisdiction filtering) and no annotations, the description is highly complete: it covers purpose, usage, behavior, and all parameters in detail. The presence of an output schema means return values don't need explanation, and sibling context is well-integrated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate—and it does thoroughly. It explains all three parameters: 'query' as keywords, 'juridiction' with examples and warnings, and 'limit' with its default. This adds significant meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for recent administrative decisions sorted chronologically, distinguishing it from sibling tools like 'search_admin' which uses relevance ranking. It specifies the scope (CE + 9 CAA + 40 TA since ~2022) and mentions compatibility with 'get_decision_text'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided: use it for the recent activity of a jurisdiction but NOT for finding the relevant case law on a topic; for that, use 'search_admin'. It also references 'list_juridictions' for the complete jurisdiction codes and warns that the bare 'TA' or 'CAA' codes return empty results.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_admin_recent_all_caaAInspect
Simultaneous query of all 9 Cours administratives d'appel (CAA).
Results are merged and sorted chronologically by reading date.
Args:
    query: search keywords
    limit_per_court: results per court (default 5, i.e. up to 45
        results in total)
    total_limit: global cap after merging (0 = no cap).
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| total_limit | No | | |
| limit_per_court | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
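A hedged sketch of a fan-out query across the nine CAA; the keyword and limits are illustrative.

```python
# Illustrative search_admin_recent_all_caa arguments.
arguments = {
    "query": "aide juridictionnelle",
    "limit_per_court": 5,   # default: 5 per court, so up to 45 results
    "total_limit": 20,      # global cap applied after merging (0 = no cap)
}
```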
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses key behavioral traits: simultaneous querying across multiple courts, merging of results, chronological sorting by date, and default pagination behavior (limit_per_court default 5, total_limit default 0). However, it doesn't mention authentication requirements, rate limits, error conditions, or what the output schema contains.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear purpose statement followed by a well-formatted Args section. Every sentence earns its place: the first explains the core functionality, the second describes result processing, and the parameter explanations are concise yet informative. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (multi-court search with merging), no annotations, and the presence of an output schema, the description is reasonably complete. It covers purpose, usage context, parameter semantics, and key behaviors. The output schema existence means return values don't need explanation, but additional behavioral context (like error handling) would enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by explaining all 3 parameters in the Args section. It clarifies that 'query' accepts keywords, 'limit_per_court' controls results per court with default 5, and 'total_limit' is a global cap after merging with default 0 meaning no limit. This adds meaningful context beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a simultaneous query across all 9 Cours administratives d'appel, merges the results, and sorts them chronologically by reading date. It specifies the action (simultaneous query) and the resource (the 9 CAA), and distinguishes it from siblings like search_admin_recent (a single court) or search_admin_recent_all_ta (a different level of jurisdiction).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly indicates this tool is for searching across all 9 courts simultaneously with merged results, distinguishing it from alternatives like search_admin_recent (single court) or search_admin_recent_all_ta (different court type). The context of 'simultaneous query' and 'fusion' provides clear when-to-use guidance versus single-court searches.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_admin_recent_all_taAInspect
Simultaneous query of all 40 Tribunaux administratifs.
Merges the nationwide results and sorts them chronologically (reading
date, descending). Relevant for quickly mapping possible territorial
divergences of interpretation on the same question of law.
Args:
    query: search keywords
    limit_per_court: number of results per tribunal (default 5, i.e.
        up to 200 results in total when `total_limit` is not set)
    total_limit: global cap after merging (0 = no cap). If positive,
        truncates the merged list to the N most recent entries.
Returns:
    Dict containing `per_court_totals` (number of hits per TA),
    `decisions` (merged list sorted chronologically) and any
    `errors`.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| total_limit | No | | |
| limit_per_court | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
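A hedged sketch of a nationwide fan-out capped after merging; the query string is illustrative and the response keys follow the Returns block above.

```python
# Illustrative search_admin_recent_all_ta arguments.
arguments = {
    "query": "obligation de quitter le territoire",
    "limit_per_court": 5,
    "total_limit": 50,   # keep only the 50 most recent decisions after merging
}
# Expected response keys per the description:
# per_court_totals, decisions (merged, newest first), errors.
```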
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behaviors: it queries 40 courts simultaneously, merges and sorts results chronologically by descending date, handles errors with an errors field, and explains result limits (default 5 per court, up to 200 total without total_limit). It doesn't mention rate limits or authentication needs, but covers core operational traits adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. The Args and Returns sections are clearly structured. A minor deduction because the phrasing is slightly verbose in places, but every sentence earns its place by adding value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (querying 40 courts, merging, sorting), no annotations, and an output schema present, the description is complete. It explains the purpose, usage context, parameters, return structure (per_court_totals, decisions, errors), and behavioral details like chronological sorting and error handling. The output schema means return values don't need explanation here.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate fully. It provides detailed semantics for all 3 parameters: 'query' as keywords, 'limit_per_court' with default 5 and implication of up to 200 total results, and 'total_limit' as global cap with 0 meaning unlimited and positive values truncating to most recent N entries. This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a simultaneous query across all 40 Tribunaux administratifs, merges and chronologically sorts the results, and is relevant for mapping territorial divergences on a question of law. It specifies the action (simultaneous query) and the resource (the 40 TA), and distinguishes it from siblings like search_admin or search_admin_recent by covering all first-instance courts nationwide.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: it is relevant for quickly mapping possible territorial divergences of interpretation on the same question of law. This provides clear context for usage versus alternatives like search_admin (relevance-ranked search over the whole corpus) or search_admin_recent (recent decisions from a single specified court).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_allAInspect
Federated relevance-weighted search across all sources.
The ONE-STOP tool when you do not know where to look: it queries the
local sources in parallel (DILA judicial, JADE admin, LEGI, CEDH, CJUE)
and returns a merged list sorted by BM25 score with an authority bonus
(CE/Cass/CEDH > CAA > TA/CA).
Args:
    query: keywords (or a phrase). If `expand_synonyms=True` (default),
        terms from the French legal thesaurus are automatically
        expanded to their equivalents (e.g. "harcèlement" → also
        "intimidation", "vexation morale", etc.)
    sources: optional list among ["dila", "jade", "legi", "cedh",
        "cjue"]. None = all.
    sort: "relevance" (default) or "date_desc"
    date_min, date_max: ISO YYYY-MM-DD
    limit: number of merged results (default 30, max 100)
    expand_synonyms: enables the thesaurus (default True)
Returns:
    dict {"query_expanded", "per_source_counts", "results": [...]}
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | | relevance |
| limit | No | | |
| query | Yes | | |
| sources | No | | |
| date_max | No | | |
| date_min | No | | |
| expand_synonyms | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
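A hedged sketch of a federated query restricted to two sources, with the documented synonym expansion left on; the keyword is illustrative.

```python
# Illustrative search_all arguments; the source names come from the
# documented list ["dila", "jade", "legi", "cedh", "cjue"].
arguments = {
    "query": "harcèlement",
    "sources": ["jade", "cedh"],   # omit (None) to query every source
    "expand_synonyms": True,       # thesaurus expansion, on by default
    "limit": 30,
}
# Expected response keys per the description:
# query_expanded, per_source_counts, results.
```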
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: parallel querying of multiple sources, BM25 scoring with authority bonuses, and thesaurus-based synonym expansion. It mentions default values and limits (max 100 results). However, it doesn't explicitly address rate limits, authentication requirements, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a clear purpose statement, usage guidance, and detailed parameter explanations. Every sentence adds value. It could be slightly more concise in the parameter section, but overall it's efficiently organized with zero wasted content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, federated search across multiple sources) and the presence of an output schema, the description provides excellent context. It explains the search methodology, scoring algorithm, source options, parameter behaviors, and return structure. The output schema handles return value documentation, allowing the description to focus on operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing detailed semantic explanations for all 7 parameters. It explains query expansion behavior with thesaurus synonyms, lists valid source options, describes sorting options, clarifies date format requirements, specifies default and maximum limits, and explains the expand_synonyms toggle. This adds substantial value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a federated relevance-weighted search across all sources, querying multiple legal sources in parallel and returning merged results sorted by BM25 score with authority bonuses. It explicitly positions this as the 'ONE-STOP' tool when unsure where to search, differentiating it from sibling tools that search specific sources like search_cedh or search_legi.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance: this is the ONE-STOP tool for when you do not know where to search. It contrasts this comprehensive search with sibling tools that target specific sources, giving clear context for when to use this broad-scope tool versus more targeted alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_ccAInspect
Dedicated search over Conseil constitutionnel case law (7,112 decisions).
The fourth French judicial order, alongside the Cour de cassation, the
Conseil d'État and the Cour de justice de la République. It reviews the
constitutionality of statutes (*a priori* review via DC, *a posteriori*
review via QPC) and national elections.
Args:
    query: keywords (FTS5 operators)
    nature: optional filter by decision type:
        - "QPC" : Question Prioritaire de Constitutionnalité
          (a posteriori review, referred by litigants via CE/Cass)
        - "DC"  : ruling on the conformity of an ordinary or organic law
          (a priori review before promulgation)
        - "L"   : miscellaneous statutes, downgrading to regulation
        - "AN"  : legislative elections, ineligibilities
        - "SEN" : senatorial elections
        - "PDR" : presidential election
        - "ORGA": organisation (internal rules, composition)
        - "REF" : referendum
        - "ELEC": other elections
        - "I"   : lack of jurisdiction
        (if empty, all types are included)
    date_min, date_max: ISO YYYY-MM-DD
    limit: max 50 (default 20)
    offset: pagination
Returns:
    `{"total", "returned", "nature_filter", "decisions": [...]}`
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| nature | No | | |
| offset | No | | |
| date_max | No | | |
| date_min | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
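A hedged sketch of a QPC-filtered query using one of the documented `nature` values; the keyword and date are illustrative.

```python
# Illustrative search_cc arguments; "QPC" is one of the documented
# nature filter values.
arguments = {
    "query": "liberté d'expression",
    "nature": "QPC",            # omit to search all decision types
    "date_min": "2015-01-01",
    "limit": 20,
}
# Expected response keys per the description:
# total, returned, nature_filter, decisions.
```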
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and excels by disclosing key behavioral traits: it specifies the database size (7,112 decisions), mentions FTS5 operators for querying, explains pagination behavior (limit default 20, max 50, offset), describes the return format with field names, and clarifies that empty nature parameter returns all types. This goes well beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (introduction, args, returns) and uses bullet points for nature values. While comprehensive, it's appropriately sized for a complex tool with 6 parameters and no schema descriptions. Every sentence adds value, though the constitutional context paragraph could be slightly condensed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex search tool with 6 parameters, 0% schema coverage, no annotations, but with an output schema, the description is remarkably complete. It covers purpose, domain context, all parameter semantics, behavioral details (pagination, defaults, operators), and even previews the return structure. The output schema handles return values, so the description appropriately focuses on usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description fully compensates by providing comprehensive parameter documentation: explains query accepts FTS5 operators, details all 10 possible values for nature with explanations of each, specifies date format (ISO YYYY-MM-DD), indicates limit range and default, and explains offset for pagination. This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches decisions from the French Constitutional Council (Conseil constitutionnel), specifying it covers 7,112 decisions. It distinguishes this from sibling tools by focusing exclusively on this specific judicial body, unlike other tools that search different jurisdictions (CEDH, CJUE, Conseil d'État, etc.) or broader searches (search_all).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool by explaining the Constitutional Council's role in constitutional review and elections, which helps the agent understand the domain. However, it doesn't explicitly contrast when to use this versus sibling tools like search_all or search_judiciaire, nor does it mention any prerequisites or exclusions for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_cedhAInspect
Full-text search over the case law of the European Court of Human Rights.
Uses the local index of the ~76,000 French-language HUDOC documents
(judgments, decisions, Chamber, Grand Chamber and Committee reports).
Freely accessible.
Args:
    query: keywords (e.g. "article 8 vie familiale", "garde à vue")
    limit: maximum number of results (default 20)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
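A hedged sketch that reuses one of the documented example queries.

```python
# Illustrative search_cedh arguments; the query reuses a documented example.
arguments = {
    "query": "article 8 vie familiale",
    "limit": 20,   # documented default
}
```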
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing important behavioral traits: it specifies the data source (a local index of ~76,000 French-language HUDOC documents), the document types included (judgments, decisions, Chamber, Grand Chamber and Committee reports), and the access model (freely accessible). It doesn't mention rate limits or pagination behavior, which would be helpful additions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and concise: a clear purpose statement, important context about the data source, authentication note, and parameter explanations with examples. Every sentence earns its place, and information is front-loaded with the most important details first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), no annotations, and good parameter coverage in the description, this is quite complete. The description covers purpose, data source scope, authentication, and parameter semantics. It could be slightly more complete by mentioning result format or pagination, but the output schema likely addresses this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by explaining both parameters: 'query' is described as keywords with examples ('article 8 vie familiale', 'garde à vue'), and 'limit' is explained as maximum number of results with its default value (20). This adds meaningful context beyond the basic schema types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (full-text search of case law) and resource (the European Court of Human Rights), with details about the local index of ~76,000 French-language HUDOC documents. It distinguishes itself from siblings by covering European Court of Human Rights case law only, unlike the tools dedicated to other courts and jurisdictions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (searching European Court of Human Rights jurisprudence in French), but doesn't explicitly mention when not to use it or name specific alternatives among the many sibling tools. The sibling list includes tools for other courts like CJUE and Conseil d'État, but the description doesn't guide the agent on choosing between them.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_cjueAInspect
Full-text search over the case law of the Court of Justice of the European Union.
Uses the local index of decisions of the CJUE and the EU General Court,
plus orders and Advocate General opinions (EUR-Lex data).
Freely accessible.
Args:
    query: keywords (e.g. "libre circulation capitaux", "CJUE C-72/24")
    limit: maximum number of results (default 20)
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
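A hedged sketch showing that a case number can serve as the query, per the documented example.

```python
# Illustrative search_cjue arguments; "CJUE C-72/24" reuses a documented example.
arguments = {
    "query": "CJUE C-72/24",
    "limit": 10,
}
```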
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses useful behavioral traits: it's a search operation using a local index via EUR-Lex, requires no authentication, and has a default limit. However, it doesn't mention rate limits, pagination, error conditions, or what the output contains beyond results.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three sentences and a parameter section. It's front-loaded with the main purpose, followed by implementation details and parameters. Some minor redundancy exists (mentioning CJUE twice), but overall it's efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), no annotations, and simple parameters, the description is reasonably complete. It covers purpose, source, authentication, and parameters adequately. The main gap is lack of explicit sibling differentiation, but overall it provides sufficient context for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides meaningful semantics for both parameters: 'query' as keywords with examples, and 'limit' as maximum results with default value. This adds substantial value beyond the bare schema, though it doesn't cover all possible parameter nuances.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches the case law of the Court of Justice of the European Union, specifying the sources (CJUE decisions, the EU General Court, orders, and Advocate General opinions) and that it draws on EUR-Lex data. It distinguishes itself from some siblings by focusing on the CJUE, but doesn't explicitly differentiate from all search_* tools in the sibling list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching CJUE jurisprudence and mentions no authentication required, but doesn't provide explicit guidance on when to use this versus other search_* tools like search_cedh or search_judiciaire. The context is clear but lacks comparative alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_cnilBInspect
Search within the CNIL's deliberations.
Source: CNIL bulk (109 MB, ~26k deliberations). Useful for personal
data law, the GDPR, and algorithmic processing.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| offset | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
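A hedged sketch; the description documents no parameters, so the names below come from the parameter table (query, limit, offset) and the values are illustrative.

```python
# Illustrative search_cnil arguments; the parameter names come from the
# schema table above, the values are made up for the example.
arguments = {
    "query": "vidéosurveillance sur le lieu de travail",
    "limit": 20,
    "offset": 0,
}
```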
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions the source data (bulk CNIL, 109 MB, ~26k deliberations), which adds useful context about scale and scope. However, it lacks details on behavioral traits such as rate limits, authentication needs, response format, or pagination behavior (though offset and limit parameters hint at pagination). This leaves significant gaps for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three concise sentences that are front-loaded: first stating the core function, then source details, and finally domain relevance. There is minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, 0% schema coverage, but an output schema exists, the description is moderately complete. It covers purpose and context well but lacks parameter explanations and behavioral details. The output schema likely handles return values, so that gap is mitigated, but overall it's adequate with clear omissions for a search tool with multiple parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It does not explain any parameters (query, limit, offset) or their semantics (e.g., what type of query is supported, how limit and offset affect results). The description adds no parameter-specific information beyond the schema, failing to address the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches within CNIL deliberations, specifying the source (bulk CNIL with size and count) and domain relevance (data protection law, GDPR, algorithmic processing). It distinguishes from siblings by focusing on CNIL deliberations rather than other legal sources like CEDH, CJUE, or judicial decisions, though it doesn't explicitly contrast with other search tools like search_admin or search_legi.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied by mentioning the tool's usefulness for data protection law, GDPR, and algorithmic processing topics, suggesting when it might be appropriate. However, there's no explicit guidance on when to use this versus other search tools (e.g., search_admin for administrative decisions or search_legi for legislation), and no exclusions or prerequisites are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_conseil_etatAInspect
Targeted semantic search over Conseil d'État case law (ArianeWeb base, ~270,000 decisions).
The only engine with a true relevance algorithm (Sinequa) and context
extraction. To be preferred systematically for public law.
WARNING: the identifiers returned (format
`/Ariane_Web/AW_DCE/|XXXXXX`) cannot be used for text extraction. To
retrieve the full text of a ruling, re-run the search through
`search_admin` (with `juridiction="CE"` and an excerpt of the query) to
obtain a compatible identifier (`DCE_XXX_YYYYMMDD`).
Search guideline: limit queries to 2-5 distinctive keywords;
full-sentence queries generally return zero results.
Args:
    query: search keywords (e.g. "référé liberté", "QPC 145")
    limit: maximum number of results (default 20)
    offset: offset for pagination (default 0). Repeat with offset=20,
        offset=40, etc. to obtain the next pages.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| offset | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
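A hedged sketch following the stated search guideline (2 to 5 distinctive keywords) and the documented offset pagination; the keywords are illustrative.

```python
# Illustrative search_conseil_etat arguments; short keyword queries as the
# guideline recommends, paged with the documented offset pattern.
first_page = {"query": "référé liberté expulsion", "limit": 20, "offset": 0}
second_page = {"query": "référé liberté expulsion", "limit": 20, "offset": 20}
# Per the description, returned /Ariane_Web/ identifiers cannot be used for
# text extraction; re-run via search_admin with juridiction="CE" instead.
```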
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: the Sinequa relevance engine with context extraction, the warning that returned identifiers cannot be used for text extraction (with the `search_admin` workaround), offset-based pagination, and the default limit of 20 results. However, it doesn't cover authentication requirements, rate limits, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. The second sentence adds valuable context about the data source. The parameter explanations are clear and efficient. Minor improvement could be made by integrating the parameter details more seamlessly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), no annotations, and good parameter coverage in the description, this is fairly complete. The description covers purpose, source context, and parameter semantics. The main gap is lack of behavioral details like authentication or rate limits that would be important for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by explaining all three parameters. It provides the meaning of 'query' with concrete examples ('référé liberté', 'QPC 145'), explains 'limit' as the maximum number of results with its default value, and documents 'offset' pagination with worked values. This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (a targeted relevance search), the target resource (Conseil d'État case law), and the platform (the ArianeWeb base, ~270,000 decisions). It distinguishes itself from siblings by presenting this as the only engine with a true relevance algorithm (Sinequa) for CE case law, unlike the more general search tools listed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool: it is to be preferred systematically for public-law questions, searching the ArianeWeb base of CE case law, and it explicitly names `search_admin` as the follow-up route for retrieving full decision texts. It stops short of listing every alternative, but the specialisation in Conseil d'État case law is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_decisions_citingAInspect
Searches for decisions that EXPLICITLY cite a given statutory article.
Uses the FTS5 index over the available case-law sources to match the
common citation phrasings (`"article 1382 du code civil"`,
`"art. L. 1152-1 du Code du travail"`, etc.). Reverse cross-referencing:
starting from an article, find the relevant case law.
**KNOWN LIMITATION**: this tool finds ONLY explicit citations of the
article number. It does NOT capture:
- indirect references ("in accordance with the provisions of the Civil
  Code on tort liability…")
- references to a whole section without a precise article number
- citations of a code by abbreviation alone, without an article ("under the CT")
For a broader conceptual search, prefer `search_all` with thesaurus
expansion (e.g. "harcèlement" → also includes "intimidation", etc.).
Args:
    code: short code of the legal code (e.g. "CT", "CC")
    num: article number (e.g. "L1152-1", "1240")
    sources: optional list of sources to query among
        ["dila", "jade", "cedh", "cjue"]. Default: all.
    limit: number of decisions per source (default 20, max 50)
Returns:
    dict `{"code", "num", "total", "per_source": {source: count},
    "decisions": [{source, id, juridiction, date, title, extract}]}`
| Name | Required | Description | Default |
|---|---|---|---|
| num | Yes | | |
| code | Yes | | |
| limit | No | | |
| sources | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
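A hedged sketch of a reverse lookup from article L. 1152-1 of the Code du travail, reusing the documented short code; the source restriction is illustrative.

```python
# Illustrative search_decisions_citing arguments; "CT" and "L1152-1" reuse
# the documented examples (Code du travail, article L. 1152-1).
arguments = {
    "code": "CT",
    "num": "L1152-1",
    "sources": ["jade", "dila"],  # omit to query all four sources
    "limit": 20,
}
# Expected response keys per the description:
# code, num, total, per_source, decisions.
```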
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it uses FTS5 indexing for matching citation patterns, specifies default sources and limit values, and describes the return structure. It doesn't mention rate limits, authentication needs, or error handling, but for a search tool, the coverage is reasonably comprehensive given the context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized. It starts with the core purpose, explains the technical approach, details parameters with examples, and specifies the return format. Every sentence adds value—no fluff or repetition. It's front-loaded with the main functionality, making it easy to scan.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 parameters, no annotations, but with an output schema), the description is complete. It covers purpose, usage context, parameters with semantics, and the return structure. The output schema exists, so the description doesn't need to explain return values in detail—it succinctly summarizes them. This is adequate for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate fully. It does so by explaining all 4 parameters: 'code' (short code like 'CT', 'CC'), 'num' (article number like 'L1152-1'), 'sources' (optional list with enum values and default), and 'limit' (default and max values). It provides examples and clarifies semantics beyond the bare schema, adding significant value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: searching for decisions that explicitly cite a given statutory article. It specifies the action (search) and the resource (decisions citing a particular article), and distinguishes it from siblings like get_decision_* (which fetch specific decisions) and the search_* tools (which search by other criteria such as jurisdiction).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use it: starting from a specific statutory article to find the case law that cites it. Through the KNOWN LIMITATION block it also states when not to rely on it (indirect references, whole-section references, code-only citations) and names `search_all` with thesaurus expansion as the broader alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_jorfAInspect
Search of the Journal officiel (JORF, post-1990).
Source: JORF DILA bulk (1.1 GB). Contains the non-codified texts
published in the JO: laws, decrees, orders (arrêtés), circulars, ordinances.
Args:
    query: FTS5 keywords
    nature: optional filter ("LOI", "DECRET", "ARRETE", "CIRCULAIRE"...)
    date_min/date_max: publication range (ISO)
    limit: max 50
Returns:
    {"total", "returned", "textes": [...]}
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| nature | No | | |
| offset | No | | |
| date_max | No | | |
| date_min | No | | |
Output Schema
No output parameters.
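To make the parameter semantics concrete, here is a minimal sketch of the arguments an agent might pass to search_jorf, written as a Python dict mirroring the JSON payload of a tools/call request. The keyword, nature, and date values are purely illustrative.
```python
# Illustrative arguments for search_jorf (values are examples only).
search_jorf_args = {
    "query": "état d'urgence sanitaire",  # FTS5 keywords
    "nature": "DECRET",                   # optional: "LOI", "DECRET", "ARRETE", "CIRCULAIRE"...
    "date_min": "2020-03-01",             # ISO publication range
    "date_max": "2020-12-31",
    "limit": 20,                          # capped at 50
}
# Per the description, the response has the shape {"total", "returned", "textes": [...]}.
```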
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the data source (bulk JORF DILA, 1.1GB) and content scope (non-codified texts), and mentions the limit constraint (max 50). However, it doesn't cover authentication needs, rate limits, error conditions, or pagination behavior beyond the limit parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with purpose first, then source/scope, followed by parameter explanations and return format. Every sentence adds value with no redundancy or fluff. The bullet-like format for parameters is clear without being verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with 6 parameters, no annotations, but with output schema provided, the description is quite complete. It covers purpose, scope, all parameters, and return structure. The main gap is lack of behavioral details like pagination (offset exists in schema but not mentioned) or error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates for most of the gap by explaining five of the six parameters: query (FTS5 keywords), nature (filter with examples), date_min/date_max (publication range in ISO format), and limit (max 50). It also clarifies that query is required and nature is optional, but the offset parameter is left undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches the French Official Journal (JORF post-1990) and specifies the source (bulk JORF DILA) and content types (laws, decrees, orders, circulars, ordinances). It distinguishes from siblings by focusing on JORF content rather than judicial decisions, administrative rulings, or other legal sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (searching JORF publications post-1990). However, it doesn't explicitly state when NOT to use it or name specific alternatives among the many sibling tools, though the content focus implies alternatives like search_legi for codified law.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_judiciaire (A)
Searches judicial case law via the official PISTE API (OAuth2 authentication required).
Scope: Cour de cassation, courts of appeal, judicial courts, commercial courts. Use only as a last resort, or for recent decisions missing from the free DILA base, given the technical barrier imposed by the Cour de cassation.
Two authentication methods are available:
1. `session_token`: temporary token obtained at justicelibre.org/tutoriel-piste.html (recommended; keeps credentials confidential).
2. `client_id` + `client_secret`: direct PISTE credentials (sending them in chat is discouraged).
Args:
query: search keywords
session_token: temporary justicelibre token (obtained via the site's form)
client_id: PISTE Client ID (alternative to session_token)
client_secret: PISTE Client Secret (alternative to session_token)
juridiction: optional filter: "cc" (Cour de cassation), "ca" (courts of appeal), "tj" (judicial courts), "tcom" (commercial courts). Empty = all jurisdictions.
limit: maximum number of results (default 20, maximum 50)

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| client_id | No | | |
| juridiction | No | | |
| client_secret | No | | |
| session_token | No | | |
Output Schema
No output parameters.
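As a sketch of how the two authentication paths map onto the parameters, here is an illustrative payload using the recommended session_token method; the token is a placeholder, and the query and filter values are examples only.
```python
# Illustrative arguments for search_judiciaire with the recommended
# session_token method (the token below is a placeholder, not a real credential).
search_judiciaire_args = {
    "query": "rupture conventionnelle",
    "session_token": "<temporary token from justicelibre.org/tutoriel-piste.html>",
    "juridiction": "cc",   # optional: "cc", "ca", "tj", "tcom"; empty = all courts
    "limit": 20,           # default 20, maximum 50
}
# Alternative: pass "client_id" and "client_secret" instead of "session_token"
# (the description discourages sending those credentials through a chat).
```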
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing authentication requirements (two methods with security recommendations), rate limits (max 50 results), and default behavior (limit default 20). It also mentions the scope of courts covered. However, it doesn't describe the output format or pagination behavior, which would be helpful given the search nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose first, followed by authentication details and parameter explanations. Every sentence adds value, though the authentication section is somewhat lengthy. The structure is logical but could be slightly more streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, authentication requirements, search functionality) with no annotations but an output schema, the description is quite complete. It covers purpose, authentication, all parameters, and behavioral constraints. The output schema existence means return values don't need explanation, making this description adequate for the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by explaining all 6 parameters in the Args section, adding meaning beyond the bare schema. It clarifies authentication alternatives, jurisdiction filter options with codes, and limit defaults/maximums. The only gap is that 'query' parameter semantics could be more detailed (e.g., search syntax).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches judicial case law from specific courts (Cour de cassation, courts of appeal, etc.), with a specific verb ('Recherche') and resource. The 'last resort' guidance implicitly positions it against the free DILA base, but it never names sibling tools such as 'search_judiciaire_libre', so the distinction is left for the agent to infer.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool: only as a last resort, or for recent decisions missing from the free DILA base, and it offers explicit authentication guidance with two methods, including a recommendation. However, it stops short of naming the sibling tool to prefer first (e.g., 'search_judiciaire_libre'), which limits full alternative guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_judiciaire_libre (A)
Full-text search of judicial case law, run locally and free of any government authentication requirement.
Uses the FTS5 index of the public DILA archives (~620,000 decisions: Cour de cassation, 36 courts of appeal, Conseil constitutionnel). BM25 scoring is available, but results are sorted in reverse chronological order.
To target specific rather than recent case law, reduce `limit` and use distinctive keywords.
Returned identifiers (`JURITEXT*` for Cassation / courts of appeal, `CONSTEXT*` for the Conseil constitutionnel) are compatible with `get_decision_judiciaire_libre`.
Args:
query: keywords (e.g. "licenciement abusif", "garde enfant"). FTS5 supports the operators: `"exact phrase"`, `word1 AND word2`, `word1 OR word2`, `word*` (prefix).
juridiction: optional filter: "cassation" (Cour de cassation) or "appel" (courts of appeal). Empty = all jurisdictions.
limit: maximum number of results (default 20)

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| juridiction | No | | |
Output Schema
No output parameters.
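Since the description documents FTS5 operators, a short sketch of a query that exercises them may help; the search terms and filter values are illustrative only.
```python
# Illustrative arguments for search_judiciaire_libre (no authentication needed).
libre_args = {
    "query": '"licenciement abusif" AND préjudice',  # FTS5: exact phrase plus AND
    "juridiction": "cassation",                      # or "appel"; empty = all
    "limit": 10,                                     # keep small to target specific case law
}
# Returned ids (JURITEXT* / CONSTEXT*) can be passed to get_decision_judiciaire_libre.
```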
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates key behavioral traits: no authentication required, a local FTS5 index over the public DILA archives (Cour de cassation, courts of appeal, Conseil constitutionnel), coverage of approximately 620,000 decisions, and default result limiting. It doesn't mention rate limits, error conditions, or response format details, but provides substantial operational context for an unauthenticated search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured and front-loaded. The first sentence establishes the core purpose and key differentiator (no authentication). Subsequent sentences provide essential context about the data source and scope. The parameter explanations are clear and economical with helpful examples. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no annotations, but has output schema), the description is reasonably complete. It covers authentication status, data source, scope, all parameters, and even the FTS5 operator syntax. With an output schema present, it doesn't need to explain return values. The main remaining gap is the absence of result-format hints beyond the compatible identifier prefixes, and of any pagination mechanism.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by explaining all three parameters. It defines 'query' as search keywords with examples, 'juridiction' as an optional filter with valid values ('cassation' or 'appel'), and 'limit' as maximum results with default value. This adds meaningful context beyond the bare schema, though it doesn't specify parameter constraints like length limits or format requirements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: a full-text search of judicial case law, run locally and free of any government authentication requirement. It specifies the resource (judicial case law from the DILA archives), distinguishes itself from authenticated alternatives by stressing that no authentication is needed, and differentiates from siblings like 'search_judiciaire' (which requires PISTE authentication) by highlighting the no-account access.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for searching judicial case law without authentication, using the local DILA archive index with ~620,000 decisions. It implies this is the tool for unauthenticated searches, but doesn't explicitly state when NOT to use it or name specific alternatives (though siblings like 'search_judiciaire' exist). The guidance is helpful but not exhaustive about exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_kali (B)
Searches collective bargaining agreements and branch-level agreements (KALI).
Source: DILA KALI bulk data (745 MB). Covers national collective agreements, branch agreements, and amendments, identified by their IDCC.
Args:
query: keywords
idcc: optional filter by IDCC (4 digits, e.g. "1486" for technical consultancies, bureaux d'études techniques)
limit: max 50

| Name | Required | Description | Default |
|---|---|---|---|
| idcc | No | | |
| limit | No | | |
| query | Yes | | |
| offset | No | | |
Output Schema
No output parameters.
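A minimal sketch of a search_kali call, assuming the IDCC example quoted in the description; the keywords are illustrative.
```python
# Illustrative arguments for search_kali (values are examples only).
kali_args = {
    "query": "télétravail indemnité",
    "idcc": "1486",   # optional 4-digit IDCC; "1486" is the example from the description
    "limit": 20,      # capped at 50
}
# The schema also accepts "offset" for pagination, although the description does not mention it.
```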
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions the data source size (745 Mo) and coverage scope, but doesn't disclose important behavioral traits: whether this is a read-only operation, performance characteristics, rate limits, authentication needs, or what happens when limits are exceeded. The limit parameter is documented but without context about consequences.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with clear front-loading of purpose. The French introduction is concise, and the Args section efficiently documents key parameters. However, the formatting could be improved - the Args section mixes parameter documentation with general description, and there's some redundancy in explaining IDCC format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which means return values are documented elsewhere), the description covers basic purpose and most parameters. However, for a search tool with 4 parameters and no annotations, it should provide more behavioral context about result format, pagination strategy (offset is undocumented), and performance expectations. The 0% schema coverage increases the burden on the description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description must compensate, and it does well for 3 of 4 parameters. It explains 'query' as keywords, 'idcc' as optional filter with format example, and 'limit' with maximum value. However, it omits the 'offset' parameter entirely, which is a significant gap since pagination is a key aspect of search functionality.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches collective bargaining agreements and branch agreements (KALI) using keywords and optional filters. It specifies the source (bulk KALI DILA) and coverage (national conventions, branch agreements, amendments identified by IDCC). However, it doesn't explicitly differentiate from sibling tools like search_legi or search_jorf, which appear to search different legal domains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided about when to use this tool versus alternatives. The sibling list includes many search tools (search_legi, search_jorf, search_admin, etc.); the description makes the labor-law scope (collective agreements) clear, but never says when it should be preferred over the other search tools. Only basic parameter usage is covered.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_legi (A)
Weighted search across consolidated French codes and statutes.
Source: DILA LEGI bulk data (3.6 GB, including historical versions). Finds articles whose text or title contains the keywords.
Args:
query: FTS5 keywords
code: optional filter on a specific code (CC, CT, CSP...)
date_min/date_max: filter on a version's date_debut (ISO)
limit: max 50
offset: pagination
Returns:
{"total", "returned", "articles": [...]}

| Name | Required | Description | Default |
|---|---|---|---|
| code | No | | |
| limit | No | | |
| query | Yes | | |
| offset | No | | |
| date_max | No | | |
| date_min | No | | |
Output Schema
No output parameters.
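To illustrate how the filters combine, here is a sketch of a search_legi payload; the keywords, code, and date are examples only.
```python
# Illustrative arguments for search_legi (values are examples only).
legi_args = {
    "query": "harcèlement moral",
    "code": "CT",              # optional filter on a specific code (CC, CT, CSP...)
    "date_min": "2020-01-01",  # filters on a version's date_debut (ISO)
    "limit": 20,               # capped at 50
    "offset": 0,               # pagination
}
# Per the description, the response has the shape {"total", "returned", "articles": [...]}.
```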
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses key behavioral traits: the search is weighted (FTS5), includes historical versions, and returns paginated results. However, it misses details like rate limits, authentication requirements, error conditions, or whether it's read-only/destructive. The description adds value but doesn't fully compensate for the lack of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by source context, then parameter details in a structured Args/Returns format. Every sentence earns its place, though minor formatting could be improved for readability (e.g., bullet points).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters, no annotations, but with output schema present, the description provides good context. It explains the search logic, data source, parameter purposes, and return structure. The output schema means return values needn't be detailed in description. It's nearly complete but could benefit from mentioning error cases or performance characteristics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It successfully adds semantic meaning for all 6 parameters: explains 'query' as FTS5 keywords, 'code' as filter for specific codes (CC, CT, CSP), date filters for version start dates in ISO format, and pagination parameters with constraints ('max 50'). This goes well beyond the bare schema, though some format examples would make it a 5.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a 'weighted search in French consolidated codes and laws' using 'FTS5 keywords' on a specific dataset ('bulk LEGI DILA'). It distinguishes itself from siblings by focusing on legislative text search rather than judicial decisions or administrative rulings, as seen in sibling names like search_judiciaire or search_admin.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying the data source ('bulk LEGI DILA') and the search scope (consolidated codes and statutes), but does not explicitly state when to use this tool versus alternatives like get_law_article or search_jorf. It provides clear technical constraints (e.g., 'limit: max 50') but lacks comparative guidance against sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.