justicelibre
Server Details
Free access to ~1M French administrative court decisions via MCP
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Dahliyaal/justicelibre
- GitHub Stars: 0
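Under the Streamable HTTP transport, an MCP client POSTs JSON-RPC messages to the server's endpoint (the endpoint URL is not listed above, so only the message shapes are shown here). A minimal sketch of the two requests that open a session and enumerate the tools, with method names per the MCP specification:

```python
# Sketch of the JSON-RPC messages an MCP client sends over Streamable HTTP.
# The server URL is omitted above, so only the payload structure is shown;
# the protocol version string is the revision that introduced this transport.

def initialize_request(request_id: int = 1) -> dict:
    """Build the MCP `initialize` request that opens a session."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }

def list_tools_request(request_id: int = 2) -> dict:
    """Build the `tools/list` request that enumerates the server's 10 tools."""
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/list", "params": {}}
```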
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 10 of 10 tools scored. Lowest: 2.9/5.
Multiple tools have overlapping purposes that could cause confusion. get_decision_judiciaire, get_decision_judiciaire_libre, and get_decision_text all retrieve full decision text with subtle authentication and identifier differences. Similarly, search_judiciaire and search_judiciaire_libre both search judicial jurisprudence with authentication differences. The boundaries between these tools are unclear and could lead to misselection.
Most tools follow a consistent verb_noun pattern (get_decision_judiciaire, list_juridictions, search_all_cours_appel, etc.). The main deviation is get_decision_text which breaks the French naming pattern used elsewhere. Overall, the naming is predictable and readable with only minor inconsistency.
With 10 tools, this is well-scoped for a legal decision search and retrieval server. Each tool appears to serve a distinct purpose within the domain, covering search across different court types, jurisdiction listing, and decision retrieval. The count aligns well with the apparent scope of providing access to French judicial and administrative decisions.
The tool set provides comprehensive search capabilities across judicial and administrative jurisdictions, with good coverage of different court types. Minor gaps exist in update/delete operations, but these aren't expected in a read-only legal database. The surface supports core workflows of searching, filtering by jurisdiction, and retrieving full decision texts with reasonable completeness.
Available Tools
10 tools
get_decision_judiciaire (C)
Retrieves the full text of a judicial decision.
Args:
decision_id: Judilibre identifier of the decision
session_token: temporary justicelibre token (recommended)
client_id: your PISTE Client ID (alternative)
client_secret: your PISTE Client Secret (alternative)

| Name | Required | Description | Default |
|---|---|---|---|
| client_id | No | ||
| decision_id | Yes | ||
| client_secret | No | ||
| session_token | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions authentication requirements (session_token or client_id/client_secret) which is useful, but doesn't describe other behavioral traits like rate limits, error handling, response format, or whether it's a read-only operation. The description is minimal and leaves significant behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with a clear purpose statement followed by parameter explanations. The structure is front-loaded with the main purpose. However, the parameter explanations could be more efficiently integrated, and there's some redundancy in describing authentication alternatives.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), 4 parameters with 0% schema coverage, and no annotations, the description provides basic purpose and parameter semantics but lacks usage guidance and comprehensive behavioral context. It's minimally adequate but has clear gaps for a tool with authentication requirements and multiple parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It provides brief explanations for all 4 parameters (decision_id, session_token, client_id, client_secret), adding meaning beyond the schema's titles. However, the explanations are minimal (e.g., 'identifiant de la décision Judilibre' doesn't explain format or source), and it doesn't clarify the relationship between authentication parameters or why some are alternatives.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Récupère' (retrieves) and the resource 'texte intégral d'une décision judiciaire' (full text of a judicial decision). It specifies what the tool does but doesn't explicitly differentiate from sibling tools like get_decision_judiciaire_libre or get_decision_text, which likely have similar purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like get_decision_judiciaire_libre or get_decision_text. It mentions authentication alternatives (session_token vs client_id/client_secret) but doesn't explain the context for choosing between them or when to use this tool over other search tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_decision_judiciaire_libre (A)
Retrieves the full text of a judicial decision WITHOUT authentication.
The identifier is the one returned by search_judiciaire_libre (field `id`,
e.g. "JURITEXT000042579700").
Args:
decision_id: JURITEXT identifier of the decision

| Name | Required | Description | Default |
|---|---|---|---|
| decision_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
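The description pins the identifier to the JURITEXT format returned by search_judiciaire_libre. A small client-side sanity check is easy to add; note that the 12-digit length is inferred from the single example above, so treat it as an assumption rather than a documented constraint:

```python
import re

# Inferred from the example "JURITEXT000042579700": the literal prefix
# JURITEXT followed by 12 digits. The exact digit count is an assumption.
JURITEXT_RE = re.compile(r"^JURITEXT\d{12}$")

def looks_like_juritext_id(decision_id: str) -> bool:
    """Cheap pre-flight check before calling get_decision_judiciaire_libre."""
    return bool(JURITEXT_RE.match(decision_id))
```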
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses key behavioral traits: no authentication required ('SANS authentification'), the tool retrieves full text ('texte intégral'), and it expects a specific ID format (JURITEXT). However, it doesn't mention rate limits, error conditions, or response format details, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly front-loaded with the core purpose in the first sentence. The second sentence clarifies the ID source and format. The Args section is brief and directly relevant. Every sentence earns its place with zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter retrieval tool with an output schema (which handles return values), the description is nearly complete. It covers purpose, usage, parameter semantics, and key behavioral aspects. The main gap is lack of error handling or performance characteristics, but given the output schema exists, this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It clearly explains the single parameter 'decision_id': identifies it as a JURITEXT ID, provides an example format ('JURITEXT000042579700'), and specifies it comes from search_judiciaire_libre. This adds substantial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Récupère le texte intégral' - retrieves full text) and resource ('d'une décision judiciaire' - of a judicial decision), distinguishing it from siblings like search_judiciaire_libre (which finds decisions) and get_decision_text (which might retrieve partial text). The mention of 'SANS authentification' further clarifies the access method.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided: use this tool to get full text of a decision identified by an ID from search_judiciaire_libre. The description names the sibling tool (search_judiciaire_libre) as the source of the ID, creating a clear workflow dependency and distinguishing when to use this versus other search or retrieval tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_decision_text (A)
Retrieves the full text of a decision from its identifier.
The identifier is the one returned by the search tools in the `id` field
(e.g. "DCE_503506_20260409" for the Conseil d'État, "DTA_2503332_20260331"
for a TA). Decisions include the pleas (moyens), citations (visas),
recitals (considérants), and operative part (dispositif).
Args:
decision_id: identifier of the decision (with or without the .xml suffix)
Returns:
Dict with the complete metadata, `text_segments` (list of paragraphs),
and `full_text` (joined full text), or None if the decision does not exist.

| Name | Required | Description | Default |
|---|---|---|---|
| decision_id | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does this well by explaining what the tool returns (metadata, text segments, full text), the None return case for non-existent decisions, and the structure of decision IDs with examples. It also clarifies that decisions include specific legal components (moyens, visas, considérants, dispositif).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with a clear purpose statement, parameter explanation, and return value description. Every sentence adds value, and the information is front-loaded with the most important details first. The bilingual nature (French with English terms) is handled efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema, the description appropriately focuses on explaining what the tool does rather than detailing return values. It provides complete context about the tool's purpose, parameter usage, behavioral characteristics, and relationship to sibling tools, making it fully adequate for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing rich parameter semantics. It explains what decision_id represents, shows example formats for different jurisdictions, clarifies it can be used with or without .xml suffix, and references where to obtain these IDs from sibling search tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Récupère le texte intégral' - retrieves full text) and resource ('d'une décision' - of a decision) using a precise verb. It distinguishes this tool from sibling search tools by focusing on retrieving full decision text rather than searching for decisions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (to get full text of a decision using its ID) and references sibling search tools as the source for obtaining decision IDs. However, it doesn't explicitly state when NOT to use this tool or provide alternatives for similar operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_juridictions (A)
Lists all jurisdiction codes accepted by search_juridiction.
Returns the 51 covered jurisdictions: Conseil d'État, 9 CAA, 40 TA
(including 9 overseas), with their canonical names.
Use this list to find the code to pass to `search_juridiction`.

| Name | Required | Description | Default |
|---|---|---|---|
No parameters
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it lists all accepted jurisdiction codes, specifies the return count (51 jurisdictions), details the composition (Conseil d'État, 9 CAA, 40 TA including 9 overseas), and mentions the canonical name format. However, it lacks information on potential rate limits or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by specific details and usage guidance in two additional sentences. Every sentence adds value: the first states what it does, the second details the output, and the third explains when to use it, with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, output schema exists), the description is complete: it explains the purpose, output details (51 jurisdictions with types and names), and usage context relative to sibling tools. The existence of an output schema means return values need not be explained, and the description covers all necessary aspects effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline score of 4/5 applies. The description appropriately adds no parameter-specific information, as none is needed, maintaining clarity without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Liste tous les codes de juridiction') and resource ('juridictions acceptés par `search_juridiction`'), distinguishing it from sibling tools like search_juridiction by focusing on listing codes rather than searching. It explicitly mentions the scope (51 jurisdictions including Conseil d'État, CAA, TA).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Utilise cette liste pour connaître le code à passer à `search_juridiction`') and distinguishes it from the sibling tool search_juridiction by explaining its role in providing codes for that search tool, with clear alternatives implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_all_cours_appel (A)
Searches a query across all administrative courts of appeal.
Fans out to the 9 CAA in parallel and merges the results, sorted by date.
Args:
query: search keywords
limit_per_court: results per court (default 5)

| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | ||
| limit_per_court | No |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
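The "merge results sorted by date" behavior the tool describes can be illustrated with a small sketch; the field name `date` and the ISO date format are assumptions about the per-court result shape:

```python
def merge_by_date(per_court_results: list[list[dict]]) -> list[dict]:
    """Merge per-court result lists into one list, newest first.
    Assumes each result dict carries an ISO-formatted `date` field,
    so lexicographic order matches chronological order."""
    merged = [d for court in per_court_results for d in court]
    return sorted(merged, key=lambda d: d["date"], reverse=True)
```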
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it searches across 9 CAA courts in parallel, merges results, and sorts by date. However, it doesn't mention rate limits, authentication needs, error handling, or what the merged output looks like (though an output schema exists). The description adds useful context but leaves gaps for a read-only search tool that fans out to nine services.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the purpose, followed by behavioral details and parameter explanations. Every sentence earns its place with no wasted words, making it efficient and easy to parse for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 parameters, no annotations, and an output schema (which handles return values), the description is fairly complete. It covers purpose, key behavior (parallel search, merging, sorting), and parameter semantics. However, for a search tool with potential complexity across 9 courts, it could benefit from mentioning error cases or result format hints, though the output schema mitigates this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains both parameters: 'query' as search keywords and 'limit_per_court' as results per court with a default of 5. This adds clear meaning beyond the schema's basic types and titles, though it doesn't detail query syntax or limit constraints. For 2 parameters with no schema descriptions, this is good compensation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for a query across all administrative appeal courts (CAA), specifying the resource ('toutes les cours administratives d'appel') and action ('Recherche'). It distinguishes from siblings like 'search_juridiction' or 'search_conseil_etat' by targeting all CAA courts specifically, though it doesn't explicitly contrast with 'search_all_tribunaux_admin' which might search broader administrative tribunals.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching across all CAA courts in parallel, but doesn't explicitly state when to use this tool versus alternatives like 'search_conseil_etat' (Council of State) or 'search_juridiction' (specific jurisdiction). It mentions the parallel distribution and result merging, which provides some context, but lacks clear guidance on exclusions or specific scenarios where this tool is preferred over others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_all_tribunaux_admin (A)
Searches a query across ALL administrative tribunals in parallel.
Broadcasts the same request to the 40 TA and merges the results, sorted
by reading date in descending order. Useful for quickly spotting whether
a question has been decided differently across tribunals.
Args:
query: search keywords
limit_per_court: number of results per tribunal (default 5,
so up to 200 results in total)
Returns:
Dict with `per_court_totals` (number of hits per TA), `decisions`
(merged list sorted by date), and any `errors`.

| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | ||
| limit_per_court | No |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
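The fan-out behavior described above (parallel queries, merged results, collected errors) can be sketched as follows. Here `search_one_court` is a hypothetical stand-in for whatever per-tribunal call the server actually makes, and the `date` field on decisions is an assumed shape:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(courts, search_one_court, query, limit_per_court=5):
    """Query every tribunal in parallel and mirror the documented return
    shape: per_court_totals, decisions (newest first), and errors."""
    per_court_totals, decisions, errors = {}, [], {}
    with ThreadPoolExecutor(max_workers=max(1, len(courts))) as pool:
        futures = {c: pool.submit(search_one_court, c, query, limit_per_court)
                   for c in courts}
        for court, fut in futures.items():
            try:
                hits = fut.result()
                per_court_totals[court] = len(hits)
                decisions.extend(hits)
            except Exception as exc:  # a failing tribunal is reported, not fatal
                errors[court] = str(exc)
    decisions.sort(key=lambda d: d["date"], reverse=True)
    return {"per_court_totals": per_court_totals,
            "decisions": decisions, "errors": errors}
```

Collecting per-court failures into `errors` instead of raising matches the tool's promise to return partial results even when some of the 40 tribunals are unreachable.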
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well: it explains the parallel execution across 40 courts, result merging, sorting by date descending, and error handling. It doesn't mention rate limits or authentication needs, but covers key behavioral aspects like execution pattern and output structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly structured with purpose first, then behavioral details, then parameter explanations, then return format. Every sentence earns its place with zero waste. The French formatting with clear sections enhances readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 parameters with 0% schema coverage and no annotations, the description provides complete context: purpose, usage guidelines, execution behavior, parameter semantics, and return structure. The output schema exists, so description appropriately focuses on semantic explanation rather than repeating return format details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so description must compensate fully. It provides excellent parameter semantics: explains 'query' as search keywords, 'limit_per_court' as results per court with default 5 and total potential results (200). This adds crucial meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches across all administrative tribunals in parallel, broadcasting the same query to 40 courts and merging results. It distinguishes from siblings by emphasizing 'TOUS les tribunaux administratifs en parallèle' and specifying it's for identifying different rulings across courts, unlike jurisdiction-specific search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Utile pour repérer rapidement si une question a été tranchée différemment selon les tribunaux.' This provides clear context for choosing this tool over jurisdiction-specific search alternatives like search_juridiction or search_conseil_etat.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_conseil_etat (A)
Full-text search in Conseil d'État decisions via ArianeWeb.
The richest source for CE case law: ~270,000 decisions of jurisprudential
interest, with highlighted excerpts and Sinequa relevance scores.
Args:
query: search keywords (e.g. "référé liberté", "QPC 145")
limit: maximum number of results (default 20)

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| query | Yes |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
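Since results come back with Sinequa relevance scores, a client might re-rank or threshold on score before showing them to a user. The field name `score` is an assumption about the result shape, not documented by the tool:

```python
def top_results(results: list[dict], min_score: float = 0.0) -> list[dict]:
    """Keep results at or above min_score, highest relevance first.
    Results lacking a `score` field are treated as score 0.0."""
    kept = [r for r in results if r.get("score", 0.0) >= min_score]
    return sorted(kept, key=lambda r: r.get("score", 0.0), reverse=True)
```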
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it returns highlighted extracts and relevance scores from Sinequa, and mentions the default limit of 20 results. However, it doesn't cover important aspects like authentication requirements, rate limits, pagination, or error conditions that would be crucial for an AI agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose in the first sentence. The second sentence adds valuable context about the data source. The parameter explanations are clear and efficient. Minor improvement could be made by integrating the parameter details more seamlessly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values), no annotations, and good parameter coverage in the description, this is fairly complete. The description covers purpose, source context, and parameter semantics. The main gap is lack of behavioral details like authentication or rate limits that would be important for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by explaining both parameters. It provides the meaning of 'query' with concrete examples ('référé liberté', 'QPC 145') and explains 'limit' as maximum number of results with its default value. This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Recherche plein texte' - full-text search), the target resource ('décisions du Conseil d'État' - decisions of the Council of State), and the platform ('via ArianeWeb'). It distinguishes from siblings by specifying this searches the richest source for CE jurisprudence with ~270,000 decisions, unlike the more general search tools listed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool: for full-text search in CE decisions via ArianeWeb, which is described as the richest source for CE jurisprudence. It doesn't explicitly state when not to use it or name specific alternatives, but the context implies this is specialized for CE decisions rather than other jurisdictions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_judiciaire (A)
Searches judicial case law (Cour de cassation, courts of appeal, judicial
tribunals, commercial courts).
Two authentication methods (choose one):
1. session_token: temporary token obtained at justicelibre.org/tutoriel-piste.html (recommended, does not expose your credentials)
2. client_id + client_secret: direct PISTE credentials (discouraged in chat)
Args:
query: search keywords
session_token: temporary justicelibre token (obtained via the form on the site)
client_id: your PISTE Client ID (alternative to session_token)
client_secret: your PISTE Client Secret (alternative to session_token)
juridiction: optional filter: "cc" (Cour de cassation), "ca" (courts of
appeal), "tj" (judicial tribunals), "tcom" (commercial courts).
Empty = all jurisdictions.
limit: maximum number of results (default 20, max 50)

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| query | Yes | ||
| client_id | No | ||
| juridiction | No | ||
| client_secret | No | ||
| session_token | No |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
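The two authentication paths (temporary token vs. PISTE credential pair) suggest a small client-side helper that resolves which arguments to send. This is a sketch of one reasonable policy based on the tool's stated preference, not server behavior:

```python
def auth_arguments(session_token=None, client_id=None, client_secret=None) -> dict:
    """Prefer the temporary token (recommended by the tool description);
    otherwise require the complete PISTE credential pair."""
    if session_token:
        return {"session_token": session_token}
    if client_id and client_secret:
        return {"client_id": client_id, "client_secret": client_secret}
    raise ValueError("provide session_token, or both client_id and client_secret")
```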
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden and does well: it discloses authentication requirements (two methods, with a security recommendation), result limits (max 50), and default behavior (limit defaults to 20). It also mentions the scope of courts covered. However, it doesn't describe the output format or pagination behavior, which would be helpful given the search nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose first, followed by authentication details and parameter explanations. Every sentence adds value, though the authentication section is somewhat lengthy. The structure is logical but could be slightly more streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, authentication requirements, search functionality) with no annotations but an output schema, the description is quite complete. It covers purpose, authentication, all parameters, and behavioral constraints. The output schema existence means return values don't need explanation, making this description adequate for the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by explaining all 6 parameters in the Args section, adding meaning beyond the bare schema. It clarifies authentication alternatives, jurisdiction filter options with codes, and limit defaults/maximums. The only gap is that 'query' parameter semantics could be more detailed (e.g., search syntax).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches judicial jurisprudence from specific courts (Cour de cassation, cours d'appel, etc.), providing a specific verb ('Recherche') and resource. However, it doesn't explicitly distinguish this tool from sibling tools like 'search_judiciaire_libre' or 'search_conseil_etat', which appear to be related search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (searching judicial jurisprudence) and offers explicit authentication guidance with two methods, including recommendations. However, it doesn't specify when to choose this tool over sibling tools like 'search_judiciaire_libre' or 'search_conseil_etat', which limits full alternative guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_judiciaire_libre
Recherche dans la jurisprudence judiciaire SANS authentification.
Utilise l'index local des archives publiques DILA (Cour de cassation +
cours d'appel). Environ 217 000 décisions. Aucun compte requis.
Args:
query: mots-clés de recherche (ex: "licenciement abusif", "garde enfant")
juridiction: filtre optionnel : "cassation" (Cour de cassation) ou
"appel" (cours d'appel). Vide = toutes.
limit: nombre maximum de résultats (défaut 20)

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| juridiction | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
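The review notes that the choice between `search_judiciaire` and `search_judiciaire_libre` hinges entirely on authentication. One way an orchestrating client might encode that rule is sketched below; the helper name and selection logic are my own illustration, not part of the server:

```python
def pick_search_tool(session_token=None, client_id=None, client_secret=None):
    """Choose the judicial search tool based on available credentials.

    search_judiciaire_libre needs no account (local DILA index,
    ~217,000 decisions); search_judiciaire requires either a
    temporary session_token or PISTE client credentials.
    """
    if session_token or (client_id and client_secret):
        return "search_judiciaire"
    return "search_judiciaire_libre"
```

Encoding the rule explicitly compensates for the overlap the review flags: without it, an agent may pick the authenticated sibling and fail for lack of credentials.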
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates key behavioral traits: no authentication required, uses local DILA archives (Cour de cassation + cours d'appel), covers approximately 217,000 decisions, and has default result limiting. It doesn't mention rate limits, error conditions, or response format details, but provides substantial operational context for an unauthenticated search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured and front-loaded. The first sentence establishes the core purpose and key differentiator (no authentication). Subsequent sentences provide essential context about the data source and scope. The parameter explanations are clear and economical with helpful examples. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no annotations, but has output schema), the description is reasonably complete. It covers authentication status, data source, scope, and all parameters. With an output schema present, it doesn't need to explain return values. The main gap is lack of information about search capabilities (e.g., boolean operators, field-specific searches) or result format hints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by explaining all three parameters. It defines 'query' as search keywords with examples, 'juridiction' as an optional filter with valid values ('cassation' or 'appel'), and 'limit' as maximum results with default value. This adds meaningful context beyond the bare schema, though it doesn't specify parameter constraints like length limits or format requirements.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Recherche dans la jurisprudence judiciaire SANS authentification' (Search in judicial jurisprudence WITHOUT authentication). It specifies the resource (judicial jurisprudence from DILA archives), distinguishes from authenticated alternatives by emphasizing 'SANS authentification', and differentiates from siblings like 'search_judiciaire' (which likely requires authentication) by highlighting the no-account feature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for searching judicial jurisprudence without authentication, using the local DILA archive index with ~217,000 decisions. It implies this is the tool for unauthenticated searches, but doesn't explicitly state when NOT to use it or name specific alternatives (though siblings like 'search_judiciaire' exist). The guidance is helpful but not exhaustive about exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_juridiction
Recherche plein texte dans une juridiction administrative précise.
La base couvre le Conseil d'État, les 9 cours administratives d'appel et
les 40 tribunaux administratifs français (dont 9 outre-mer).
Args:
query: mots-clés de recherche
juridiction: code de la juridiction. Exemples :
- "CE" — Conseil d'État
- "CE-CAA" — Conseil d'État + cours administratives d'appel
- "TA69" — Tribunal administratif de Lyon
- "TA75" — Tribunal administratif de Paris
- "CAA69" — Cour administrative d'appel de Lyon
Appelle `list_juridictions` pour la liste complète.
limit: nombre maximum de résultats (défaut 20)
Returns:
Dict avec `juridiction_name`, `total` (hits), `returned`, et
`decisions` (liste d'objets avec id, ecli, formation, numero_dossier,
date_lecture, etc.). L'`id` peut être passé à `get_decision_text`.

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| juridiction | No | | CE |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
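The documented return shape makes `search_juridiction` easy to chain into `get_decision_text`. A sketch of that handoff, using fabricated sample data in the shape the Returns block describes (the decision ids here are illustrative, not real):

```python
# Sketch: chain search_juridiction results into get_decision_text calls.
# The dict mirrors the tool's documented return shape; the sample ids
# are fabricated for illustration.
search_result = {
    "juridiction_name": "Conseil d'État",
    "total": 2,
    "returned": 2,
    "decisions": [
        {"id": "DECISION-ID-0001"},
        {"id": "DECISION-ID-0002"},
    ],
}

# Each returned id becomes the argument of a follow-up tool call.
follow_up_calls = [
    {"name": "get_decision_text", "arguments": {"id": d["id"]}}
    for d in search_result["decisions"]
]
```

This cross-reference ("L'`id` peut être passé à `get_decision_text`") is exactly the kind of sibling-tool guidance the review praises.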
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well: it discloses the database coverage scope (Conseil d'État, 9 appeals courts, 40 administrative courts), explains the return format in detail, mentions the id can be passed to get_decision_text, and specifies default values. It doesn't mention rate limits or authentication needs, but provides substantial behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded: it starts with the core purpose, then provides scope context, followed by parameter explanations with examples, and finally return format details. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters, no annotations, but has an output schema, the description provides excellent completeness: it explains the search scope, all parameters with examples, return format details, and integration with sibling tools. The output schema handles return values, so the description appropriately focuses on usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate fully. It does: it explains all 3 parameters with clear semantics, provides jurisdiction code examples with explanations, mentions the default limit of 20, and explains the required query parameter is for keywords. This adds significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a full-text search ('Recherche plein texte') within a specific administrative jurisdiction. It distinguishes from siblings by specifying it searches 'une juridiction administrative précise' rather than all courts or specific court types like search_all_cours_appel or search_conseil_etat.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use alternatives: it directs users to call list_juridictions for the complete jurisdiction list, mentions the default jurisdiction is 'CE' (Conseil d'État), and provides examples of jurisdiction codes. This helps users understand when to use this tool versus list_juridictions or other search tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
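Before waiting on Glama's crawler, you can sanity-check the manifest yourself. A minimal sketch using only the standard library; the validation rule (at least one maintainer with an email) is an assumption drawn from the structure shown above, and the domain argument is a placeholder:

```python
import json
from urllib.request import urlopen

def valid_manifest(raw: str) -> bool:
    """Check that a glama.json document lists at least one maintainer email."""
    manifest = json.loads(raw)
    maintainers = manifest.get("maintainers", [])
    return any(m.get("email") for m in maintainers)

def check_domain(domain: str) -> bool:
    """Fetch /.well-known/glama.json from a domain and validate it."""
    with urlopen(f"https://{domain}/.well-known/glama.json") as resp:
        return valid_manifest(resp.read().decode("utf-8"))
```

Serving the file with a JSON content type and no authentication is the safest bet for automated detection.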
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.