Glama
Ownership verified

Server Details

Search 18M+ legal documents across 110+ countries. Case law, legislation, and doctrine with semantic + keyword hybrid search. Supports tool discovery, multi-jurisdictional queries, citation resolution, and full document retrieval. Requested missing datasets can be fully indexed within 48h.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 6 of 7 tools scored. Lowest: 3.5/5.

Server Coherence: A

Disambiguation: 5/5

Each tool has a distinct, non-overlapping purpose: discovery (countries/sources/filters), retrieval by ID (get_document), citation resolution (resolve_reference), general search (search), and utility (report_source_issue). The distinction between search (semantic/keyword) and resolve_reference (exact citation matching) is explicitly clarified.

Naming Consistency: 5/5

All tools follow consistent snake_case with clear verb_noun patterns: discover_, get_, report_, resolve_, and search. Verbs are semantically appropriate (discover for exploration, get for retrieval, resolve for citations) and uniformly applied across the set.

Tool Count: 5/5

Seven tools is an ideal scope for this domain: three discovery tools for navigation, three retrieval tools covering different access patterns (ID lookup, search, citation resolution), and one utility for quality feedback. No redundant or filler tools.

Completeness: 4/5

Covers the full legal research workflow: discovery (countries → sources → filters) → retrieval (search or resolve) → fetch (get_document). Minor gap: no browse/list function to paginate documents by source without a search query, though search with broad filters may suffice. Reporting mechanism is present but lacks status checking.
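The discovery → retrieval → fetch workflow described above can be sketched as a sequence of MCP `tools/call` requests. This is a minimal illustration using MCP's JSON-RPC 2.0 framing; the argument values, the `search` tool's `query` parameter name, and the `doc-123` identifier are all hypothetical, not taken from the server.

```python
import json

def tools_call(request_id, name, arguments):
    """Build an MCP tools/call request (JSON-RPC 2.0 framing)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Discovery: valid country codes -> sources for one country -> filter values.
# Retrieval: search, then fetch the full document by its identifiers.
steps = [
    tools_call(1, "discover_countries", {}),
    tools_call(2, "discover_sources", {"country_code": "FR"}),
    tools_call(3, "get_filters", {"source": "FR/Judilibre"}),
    tools_call(4, "search", {"query": "responsabilite contractuelle"}),  # parameter name assumed
    tools_call(5, "get_document", {"source": "FR/Judilibre", "source_id": "doc-123"}),
]

for step in steps:
    print(json.dumps(step, ensure_ascii=False))
```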

Available Tools

7 tools
discover_countries: A

List all available countries with their document counts and source counts.

Returns country codes (ISO 2-letter), number of case law / legislation / doctrine sources, and total document counts. Use this to find valid country codes before searching.

Parameters (JSON Schema): no parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It effectively discloses return structure by enumerating specific fields (ISO codes, case law/legislation/doctrine source counts, total document counts), compensating for missing output schema. Does not mention side effects or rate limits, but sufficiently describes the data contract.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences total with zero waste. Front-loaded with purpose statement, followed by return value specification and usage guideline. Every clause contributes necessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter discovery tool with no output schema, the description fully specifies the resource domain (countries), the payload structure (ISO codes and counts), and the operational context (pre-search validation). No gaps remain for the agent to select or invoke this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool takes zero parameters (an empty properties object), which under the scoring rubric establishes a baseline of 4. The description correctly requires no parameter explanation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('List') + resource ('countries') + scope ('with their document counts and source counts'). Explicitly states return format includes 'ISO 2-letter' codes, distinguishing it as a metadata/discovery tool rather than a search tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains explicit workflow guidance: 'Use this to find valid country codes before searching.' This establishes the temporal relationship to the sibling 'search' tool. Lacks explicit 'when not to use' exclusions, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_sources: A

List all data sources available for a specific country.

Returns source IDs, data types, court names, tiers, document counts,
and date ranges. Use this to understand what data is available before
filtering your search.

Args:
    country_code: ISO 2-letter country code, e.g. "FR", "DE", "EU".
Parameters (JSON Schema):
- country_code (required)
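Since the JSON schema only types `country_code` as a bare required string, a client can pre-validate it against the ISO 2-letter format the description specifies before issuing the call. A small sketch; the helper name is ours, not part of the server:

```python
import re

def build_discover_sources_args(country_code: str) -> dict:
    """Validate the ISO 2-letter code before building the arguments payload.

    Accepts uppercase two-letter codes, matching the tool's own examples
    ("FR", "DE", "EU" - "EU" is not a country but appears in the examples).
    """
    if not re.fullmatch(r"[A-Z]{2}", country_code):
        raise ValueError(
            f"expected an ISO 2-letter code like 'FR' or 'DE', got {country_code!r}"
        )
    return {"country_code": country_code}

print(build_discover_sources_args("FR"))  # {'country_code': 'FR'}
```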
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and successfully discloses the return structure (source IDs, data types, court names, tiers, document counts, date ranges). However, it omits mutation safety (implied read-only by 'List' but not stated), error behaviors (e.g., invalid country codes), or operational traits like caching/rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with a clear information hierarchy: purpose → return values → usage guidance → parameter details. Every sentence earns its place. The Args section is necessary given the zero schema coverage, which prevents this from being a 5 (ideally the schema would self-document).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter discovery tool without output schema, the description adequately covers inputs (via Args) and outputs (via return value description). Minor gap: no mention of error handling for invalid country codes or whether the tool requires specific permissions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Outstanding compensation for 0% schema coverage. The Args section provides the critical ISO 2-letter format specification and concrete examples ('FR', 'DE', 'EU') that the JSON schema completely lacks. This adds essential semantic meaning beyond the bare 'string' type in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent purpose statement with specific verb 'List', clear resource 'data sources', and scope 'for a specific country'. The second sentence distinguishes it by detailing the specific return fields (source IDs, court names, tiers, etc.), clearly positioning it as a discovery/directory tool rather than a search tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit temporal guidance: 'Use this to understand what data is available before filtering your search.' This clearly indicates it should be used as a precursor to searching. Would be perfect if it explicitly named the sibling tool (e.g., 'search') to use afterward, but the guidance is clear enough to infer the workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_document: A

Retrieve a full legal document by its source and source_id.

Returns the complete text, metadata, and all available fields.
Use source and source_id values from search results.

Args:
    source: Source identifier, e.g. "FR/Judilibre", "DE/BVerfG".
    source_id: Document identifier within the source.
Parameters (JSON Schema):
- source (required)
- source_id (required)
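The dependency on search results can be made explicit in client code: pull `source` and `source_id` out of a hit and feed them to `get_document`. The hit shape below is an assumption, since the server publishes no output schema:

```python
def document_request(search_hit: dict) -> dict:
    """Map a search hit to get_document arguments.

    Assumes hits carry `source` and `source_id`, as the tool description
    instructs ("Use source and source_id values from search results").
    """
    return {
        "source": search_hit["source"],
        "source_id": search_hit["source_id"],
    }

# Hypothetical hit as it might appear in a search response.
hit = {"source": "DE/BVerfG", "source_id": "1-bvr-123-20", "title": "Example decision"}
print(document_request(hit))  # {'source': 'DE/BVerfG', 'source_id': '1-bvr-123-20'}
```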
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context by describing the return payload ('complete text, metadata, and all available fields') since no output schema exists. However, it omits other critical behavioral traits: error handling (e.g., 404 if source_id invalid), rate limits, caching behavior, or whether the operation is idempotent/safe.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description follows a logical, front-loaded structure: action (retrieve), output (returns...), workflow (use values from...), and parameter specifications (Args). There is minor redundancy between 'Retrieve a full legal document' and 'Returns the complete text,' but overall it is appropriately sized with no extraneous filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the zero schema coverage and lack of output schema, the description is adequately complete. It documents both parameters, describes the return structure, and provides workflow context (dependency on search results). It could be improved by noting error conditions or authentication requirements, but it covers the essential information needed for successful invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% (neither parameter has a description in the JSON schema), so the description fully compensates. It provides clear semantics for both: 'source' includes concrete examples ('FR/Judilibre', 'DE/BVerfG') showing the expected format, and 'source_id' is defined as 'Document identifier within the source.' This sufficiently documents the two parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Retrieve') and resource ('full legal document'), clearly stating the tool's function. It implicitly distinguishes from the sibling 'search' tool by instructing users to 'Use source and source_id values from search results,' establishing a clear retrieval-vs-discovery relationship without explicitly naming the alternative.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the prerequisite workflow: 'Use source and source_id values from search results.' This guides the agent to use the 'search' tool first and then feed those specific identifiers into this tool. While it doesn't explicitly state negative constraints (when NOT to use), the dependency on search results provides sufficient context for correct invocation sequencing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_filters: A

Get available filter values for a specific data source.

Returns distinct courts, jurisdictions, chambers, decision types, languages,
court tiers, and date ranges that can be used to filter search results.

Args:
    source: Source identifier, e.g. "FR/Judilibre", "AT/RIS".
Parameters (JSON Schema):
- source (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It effectively describes what gets returned (distinct filter values across multiple categories), acting as a proxy for the missing output schema. However, it lacks details on idempotency, caching behavior, rate limits, or error conditions (e.g., invalid source).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately compact with a clear front-loaded purpose statement. The structure separates the return value description from the parameter documentation. The 'Args:' formatting is slightly informal but efficient. No sentences appear redundant or wasteful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool, coverage is adequate but incomplete. It provides examples for the parameter and lists return categories, but fails to mention that valid source identifiers can be discovered via 'discover_sources'—a critical missing link for the user workflow. Without an output schema, the return structure could also be better characterized.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage (only a title 'Source'). The description compensates by providing the Args section with critical semantic context: 'Source identifier' and concrete examples ('FR/Judilibre', 'AT/RIS'), clarifying the expected format of the identifier.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool retrieves 'available filter values for a specific data source' and lists the specific categories returned (courts, jurisdictions, etc.). However, it does not explicitly differentiate itself from the sibling 'discover_sources', which could blur the distinction between listing sources and getting filters for a source.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating the filters 'can be used to filter search results', suggesting a relationship with the 'search' sibling. However, it lacks explicit workflow guidance such as 'call this before search to obtain valid filter values' or contraindications for when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

report_source_issue: A

Report an issue with a data source.

Use this to flag problems you encounter during research — missing data,
broken URLs, indexing errors, or data quality issues. Reports are reviewed
by the platform maintainer.

Args:
    source: Source identifier, e.g. "FR/Judilibre", "AT/RIS".
    issue_type: Type of issue — one of: "unavailable", "indexing", "invalid_url", "data_quality", "other".
    description: Free-text description of the problem.
Parameters (JSON Schema):
- source (required)
- issue_type (required)
- description (optional)
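Because `issue_type` is an enumerated string, a client can enforce the allowed values before submitting a report. A sketch assuming only what the Args section states; the helper name and example values are ours:

```python
from typing import Optional

ISSUE_TYPES = {"unavailable", "indexing", "invalid_url", "data_quality", "other"}

def report_args(source: str, issue_type: str, description: Optional[str] = None) -> dict:
    """Build report_source_issue arguments, enforcing the enumerated issue_type."""
    if issue_type not in ISSUE_TYPES:
        raise ValueError(f"issue_type must be one of {sorted(ISSUE_TYPES)}")
    args = {"source": source, "issue_type": issue_type}
    if description is not None:  # description is optional per the schema
        args["description"] = description
    return args

print(report_args("AT/RIS", "invalid_url", "Link to a decision PDF returns 404"))
```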
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Lacking annotations, the description carries the full burden. It adds value by stating 'Reports are reviewed by the platform maintainer,' explaining the post-submission workflow. However, it omits mutation details, return values, or side effects that annotations would typically cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose first, then usage guidelines, behavioral note, and parameter details. Each sentence earns its place. The 'Args:' section is slightly verbose but clearly organized and appropriate for the zero-coverage schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 3 simple parameters, no output schema, and no annotations, the description covers essential operational context. Could improve by noting expected return value (e.g., confirmation ID) given the absence of an output schema, but sufficient for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by documenting all 3 parameters: 'source' includes format examples ('FR/Judilibre'), 'issue_type' enumerates allowed values, and 'description' explains free-text purpose. Essential semantic information missing from schema is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description states specific action ('Report') and resource ('issue with a data source'), clearly distinguishing from retrieval siblings like 'get_document' and 'search'. The opening sentence is precise and immediately actionable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lists specific trigger scenarios ('missing data, broken URLs, indexing errors, or data quality issues') and context ('during research'). This implies when to use the tool rather than normal data retrieval, though it does not explicitly name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_reference: A

Resolve a loose legal reference to the exact document(s).

Given an informal citation like "art. 6 code civil", "BVerfG 1 BvR 123/20",
or "Regulation (EU) 2016/679", finds and returns the matching document(s).
This is NOT a search — it resolves a specific citation to the exact record.

Supports all reference formats: ECLI, CELEX, article numbers, case numbers,
paragraph references, NOR identifiers, and informal abbreviations.

Args:
    reference: Legal reference string (e.g., "art. 6 code civil", "ECLI:FR:CCASS:2006:CO00559").
    hint_country: Optional ISO country code to narrow resolution (e.g., "FR").
    hint_type: Optional type hint: "legislation" or "case_law".
Parameters (JSON Schema):
- reference (required)
- hint_country (optional)
- hint_type (optional)
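Both hints are optional, so a client should omit them from the arguments object rather than send nulls. A sketch assuming the Args section's allowed values; the helper name is ours:

```python
from typing import Optional

def resolve_args(
    reference: str,
    hint_country: Optional[str] = None,
    hint_type: Optional[str] = None,
) -> dict:
    """Build resolve_reference arguments, omitting optional hints when unset."""
    if hint_type is not None and hint_type not in ("legislation", "case_law"):
        raise ValueError("hint_type must be 'legislation' or 'case_law'")
    args = {"reference": reference}
    if hint_country is not None:
        args["hint_country"] = hint_country
    if hint_type is not None:
        args["hint_type"] = hint_type
    return args

print(resolve_args("art. 6 code civil", hint_country="FR", hint_type="legislation"))
print(resolve_args("ECLI:FR:CCASS:2006:CO00559"))  # hints omitted entirely
```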
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses that it returns exact record(s) and supports multiple identifier formats. Missing disclosure on error behavior (no match found?), ambiguity handling (multiple matches?), destructiveness, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose first, examples second, sibling distinction third, supported formats fourth, and Args section last. Every sentence earns its place; no boilerplate. Front-loaded with the critical 'NOT a search' distinction early.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Mentions it 'finds and returns the matching document(s)' but lacks output schema and provides no details on return structure (IDs vs full documents?), pagination for multiple matches, or error response format. Given 0% schema coverage and no annotations, more behavioral context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Fully compensates for 0% schema description coverage. Defines all three parameters in the Args block with types ('Legal reference string', 'Optional ISO country code'), constraints, examples ('FR', 'ECLI:FR:CCASS:2006:CO00559'), and allowed values ('legislation' or 'case_law').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb+resource ('Resolve a loose legal reference to the exact document(s)') and immediately distinguishes from the sibling 'search' tool by stating 'This is NOT a search — it resolves a specific citation to the exact record.' Lists concrete examples (art. 6 code civil, BVerfG, CELEX) that clarify scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Clearly contrasts its purpose against searching via 'This is NOT a search' given the sibling 'search' tool exists. Provides multiple input examples showing citation formats. Does not explicitly name the sibling 'search' as the alternative for general queries, though the distinction is clear from context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
