Glama

Server Details

Search 18M+ legal documents worldwide — case law, legislation, and doctrine across 110+ countries.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
worldwidelaw/legal-sources
GitHub Stars
104

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 7 of 7 tools scored. Lowest: 3.3/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap. discover_countries lists countries, discover_sources lists sources per country, get_document retrieves a specific document, get_filters provides filter values, report_source_issue reports problems, resolve_reference resolves citations, and search performs document searches. The descriptions make the boundaries unambiguous.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with clear, descriptive verbs (discover, get, report, resolve, search). The naming is uniform throughout, using snake_case exclusively without any deviations or mixed conventions.

Tool Count: 5/5

With 7 tools, this server is well-scoped for legal data exploration. Each tool serves a specific function in the workflow (discovery, retrieval, filtering, reporting, resolution, search), and none feel redundant or missing given the domain of legal document access and research.

Completeness: 5/5

The toolset provides complete coverage for legal data research: discovery (countries and sources), retrieval (documents and filters), problem reporting, citation resolution, and advanced search. There are no obvious gaps; agents can navigate from discovery to detailed analysis without dead ends.

Available Tools

7 tools
discover_countries (Grade A)

List all available countries with their document counts and source counts.

Returns country codes (ISO 2-letter), number of case law / legislation / doctrine sources, and total document counts. Use this to find valid country codes before searching.

Parameters (JSON Schema)
No parameters
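As a sketch of how an agent would invoke this tool: under the MCP specification, clients call tools through a `tools/call` JSON-RPC request. For a zero-parameter tool like this one, the arguments object is simply empty. The `id` value and framing below are illustrative; the exact client API depends on your SDK.

```python
import json

# Illustrative MCP tools/call request for discover_countries.
# The tool takes no arguments, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,  # request id is arbitrary; the client manages it
    "method": "tools/call",
    "params": {
        "name": "discover_countries",
        "arguments": {},
    },
}

print(json.dumps(request, indent=2))
```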

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses key behavioral traits: it returns country codes (ISO 2-letter format), document/source counts, and serves as a lookup tool. However, it doesn't mention potential limitations like rate limits, pagination, or freshness of data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core purpose and followed by usage guidance. Every sentence adds value: the first defines the tool's function, and the second provides critical context for when to invoke it.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool with no output schema, the description is largely complete—it explains the purpose, output format, and usage context. A minor gap is the lack of explicit mention of error cases or data freshness, but overall it provides sufficient guidance for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately focuses on output semantics, explaining what data is returned (country codes, counts) without redundant parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List all available countries') and the resources involved ('countries with their document counts and source counts'). It distinguishes this tool from siblings like 'discover_sources' by focusing on country-level metadata rather than source-level details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided: 'Use this to find valid country codes before searching.' This tells the agent when to use this tool (as a prerequisite for country-specific operations) and implies alternatives (e.g., use 'search' after obtaining codes).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_sources (Grade A)

List all data sources available for a specific country.

Returns source IDs, data types, court names, tiers, document counts,
and date ranges. Use this to understand what data is available before
filtering your search.

Args:
    country_code: ISO 2-letter country code, e.g. "FR", "DE", "EU".
Parameters (JSON Schema)
country_code (required)
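Since the schema itself carries no parameter descriptions, a client may want to sanity-check the code format before calling. The helper below is a hypothetical sketch: it accepts any two uppercase letters, which covers ISO 3166-1 alpha-2 codes as well as the "EU" value the tool description lists.

```python
import re

def build_discover_sources_args(country_code: str) -> dict:
    """Build the arguments object for a discover_sources call.

    Accepts two uppercase letters, covering ISO 3166-1 alpha-2 codes
    and the "EU" value given as an example in the tool description.
    """
    if not re.fullmatch(r"[A-Z]{2}", country_code):
        raise ValueError(f"expected a 2-letter uppercase code, got {country_code!r}")
    return {"country_code": country_code}
```

In practice the codes should come from a prior `discover_countries` call rather than be guessed.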
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the return format (source IDs, data types, etc.) and the purpose (discovery before filtering), which adds useful context. However, it doesn't disclose behavioral traits like rate limits, authentication needs, pagination, or error handling, leaving gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the purpose, the second details the return values, the third provides usage guidelines, and the fourth explains the parameter. Every sentence earns its place with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is mostly complete: it covers purpose, usage, return values, and parameter semantics. However, it lacks details on behavioral aspects like rate limits or error handling, which would be beneficial even for a simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains that 'country_code' is an 'ISO 2-letter country code' with examples ('FR', 'DE', 'EU'), clarifying the parameter's format and semantics that the schema alone does not provide.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('all data sources available for a specific country'), and distinguishes from sibling tools by focusing on discovery rather than searching, filtering, or document retrieval. It specifies the exact information returned (source IDs, data types, court names, tiers, document counts, and date ranges).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool: 'Use this to understand what data is available before filtering your search.' This provides clear guidance that this is a preliminary discovery step, distinguishing it from search/filtering tools like 'search' or 'get_filters' among the siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_document (Grade B)

Retrieve a full legal document by its source and source_id.

Returns the complete text, metadata, and all available fields.
Use source and source_id values from search results.

Args:
    source: Source identifier, e.g. "FR/Judilibre", "DE/BVerfG".
    source_id: Document identifier within the source.
Parameters (JSON Schema)
source (required)
source_id (required)
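The intended workflow per the description is search first, retrieve second. A minimal sketch of the chaining step, assuming a search hit exposes `source` and `source_id` fields under those names (an assumption for illustration; the actual result shape is not documented here, and the hit values below are invented):

```python
# Hypothetical search hit; field names and values are illustrative.
hit = {"source": "FR/Judilibre", "source_id": "ABC123"}

# get_document requires both identifiers, taken verbatim from the hit
# rather than constructed by hand.
arguments = {"source": hit["source"], "source_id": hit["source_id"]}
```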
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states what the tool returns ('complete text, metadata, and all available fields'), which is helpful. However, it doesn't mention important behavioral aspects like whether this is a read-only operation, potential rate limits, authentication requirements, error conditions, or response format details. The description adds some value but leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. The first sentence states the core purpose, followed by return value information, usage guidance, and parameter explanations. Each sentence adds value without redundancy. The 'Args:' section could be integrated more smoothly, but overall it's efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 parameters, no annotations, and no output schema, the description provides adequate basic information about purpose, parameters, and returns. However, it lacks details about the response structure (beyond mentioning 'text, metadata, and fields'), error handling, and how it differs from sibling tools. Given the complexity of legal document retrieval, more behavioral context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for both parameters beyond the schema's minimal coverage (0%). It explains that 'source' is a source identifier with examples ('FR/Judilibre', 'DE/BVerfG'), and 'source_id' is a document identifier within that source. It also clarifies that these values should come from search results. This significantly compensates for the lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Retrieve a full legal document by its source and source_id.' It specifies the verb (retrieve), resource (legal document), and key identifiers. However, it doesn't explicitly differentiate from sibling tools like 'search' or 'resolve_reference', which might also retrieve documents but with different approaches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some guidance: 'Use source and source_id values from search results.' This implies a workflow where users first search to obtain identifiers, then use this tool for full retrieval. However, it doesn't explicitly state when to use this tool versus alternatives like 'search' (which might return summaries) or 'resolve_reference', nor does it mention any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_filters (Grade A)

Get available filter values for a specific data source.

Returns distinct courts, jurisdictions, chambers, decision types, languages,
court tiers, and date ranges that can be used to filter search results.

Args:
    source: Source identifier, e.g. "FR/Judilibre", "AT/RIS".
Parameters (JSON Schema)
source (required)
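The call shape mirrors the `source` parameter of `get_document`. A small sketch, assuming the identifier strings follow the "COUNTRY/Provider" pattern seen in the examples ("FR/Judilibre", "AT/RIS") — an observed pattern, not a documented guarantee:

```python
# Source identifiers in the examples look like "COUNTRY/Provider";
# splitting them is illustrative only, not a documented contract.
source = "AT/RIS"
country, provider = source.split("/", 1)

# get_filters takes the identifier whole.
arguments = {"source": source}
```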
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool returns distinct values (e.g., courts, jurisdictions) and specifies the return format, but it doesn't mention behavioral traits like error handling, rate limits, or authentication needs. It adds some context but lacks comprehensive behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the purpose, followed by details on returns and parameters. Every sentence adds value without waste, making it easy to scan and understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (1 parameter, no output schema, no annotations), the description is mostly complete: it explains the purpose, returns, and parameter semantics. However, it lacks output details (e.g., format of returned values) and behavioral context, leaving minor gaps for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 1 parameter with 0% description coverage, so the description must compensate. It adds meaning by explaining the 'source' parameter as a source identifier with examples ('FR/Judilibre', 'AT/RIS'), which clarifies usage beyond the bare schema. However, it doesn't detail all possible values or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('available filter values for a specific data source'), and it distinguishes from siblings by focusing on filter metadata rather than document retrieval (get_document), search (search), or source discovery (discover_sources).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing filter values for a data source, but it doesn't explicitly state when to use this tool versus alternatives like search (which might include filtering) or discover_sources (which lists sources). No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

report_source_issue (Grade A)

Report an issue with a data source.

Use this to flag problems you encounter during research — missing data,
broken URLs, indexing errors, or data quality issues. Reports are reviewed
by the platform maintainer. Requires authentication (free — does not
count against your usage quota).

Args:
    source: Source identifier, e.g. "FR/Judilibre", "AT/RIS".
    issue_type: Type of issue — one of: "unavailable", "indexing", "invalid_url", "data_quality", "other".
    description: Free-text description of the problem.
Parameters (JSON Schema)
source (required)
issue_type (required)
description (optional)
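Because `issue_type` is a closed enumeration and `description` is optional, a client can validate locally before submitting. A hypothetical helper (the enum values are taken from the tool description; the function itself is a sketch, not part of the server):

```python
# Allowed issue_type values, per the tool description.
VALID_ISSUE_TYPES = {"unavailable", "indexing", "invalid_url", "data_quality", "other"}

def build_report_args(source: str, issue_type: str, description: str = "") -> dict:
    """Assemble arguments for report_source_issue.

    issue_type must be one of the enumerated values; description is
    optional and omitted from the payload when empty.
    """
    if issue_type not in VALID_ISSUE_TYPES:
        raise ValueError(f"issue_type must be one of {sorted(VALID_ISSUE_TYPES)}")
    args = {"source": source, "issue_type": issue_type}
    if description:
        args["description"] = description
    return args
```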
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool requires authentication (though it's free and doesn't count against usage quotas), and it explains the outcome (reports are reviewed by the platform maintainer). However, it doesn't mention potential response times, confirmation mechanisms, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by usage guidelines, behavioral details, and parameter explanations. Every sentence adds value without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is largely complete. It covers purpose, usage, authentication, and parameter semantics. However, it lacks details on the return value or confirmation of submission, which would be helpful for a reporting tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate fully. It successfully adds meaning beyond the bare schema by explaining each parameter: 'source' is a source identifier with examples ('FR/Judilibre', 'AT/RIS'), 'issue_type' has specific allowed values ('unavailable', 'indexing', etc.), and 'description' is a free-text field for problem details. This provides essential context missing from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Report an issue') and resource ('with a data source'), distinguishing it from sibling tools like discover_sources or search. It provides concrete examples of issues to report (missing data, broken URLs, etc.), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Use this to flag problems you encounter during research') and provides specific examples of applicable scenarios (missing data, broken URLs, indexing errors, data quality issues). It also mentions that reports are reviewed by the platform maintainer, clarifying the workflow context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_reference (Grade A)

Resolve a loose legal reference to the exact document(s).

Given an informal citation like "art. 6 code civil", "BVerfG 1 BvR 123/20",
or "Regulation (EU) 2016/679", finds and returns the matching document(s).
This is NOT a search — it resolves a specific citation to the exact record.

Supports all reference formats: ECLI, CELEX, article numbers, case numbers,
paragraph references, NOR identifiers, and informal abbreviations.

Args:
    reference: Legal reference string (e.g., "art. 6 code civil", "ECLI:FR:CCASS:2006:CO00559").
    hint_country: Optional ISO country code to narrow resolution (e.g., "FR").
    hint_type: Optional type hint: "legislation" or "case_law".
Parameters (JSON Schema)
reference (required)
hint_country (optional)
hint_type (optional)
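The two hints are optional and independent, so a caller should include them only when known. A hypothetical helper that enforces the `hint_type` values documented above:

```python
from typing import Optional

def build_resolve_args(
    reference: str,
    hint_country: Optional[str] = None,
    hint_type: Optional[str] = None,
) -> dict:
    """Assemble arguments for resolve_reference.

    Only `reference` is required; hints narrow resolution and are
    included only when provided. hint_type, if given, must be
    "legislation" or "case_law" per the tool description.
    """
    if hint_type is not None and hint_type not in ("legislation", "case_law"):
        raise ValueError('hint_type must be "legislation" or "case_law"')
    args = {"reference": reference}
    if hint_country is not None:
        args["hint_country"] = hint_country
    if hint_type is not None:
        args["hint_type"] = hint_type
    return args
```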
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the core functionality well but lacks details on permissions, rate limits, error handling, or output format. The statement about supporting 'all reference formats' provides useful context, but more behavioral traits would be helpful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose in the first sentence, key distinction in the second, and parameter details in a structured Args section. Every sentence earns its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, no annotations, and no output schema, the description is mostly complete but could benefit from more details about the return format or error conditions. It covers purpose, usage, and parameters well, making it adequate for an agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage and 3 parameters, the description compensates well by explaining all parameters in the Args section with examples and constraints. It adds meaning beyond the schema by clarifying what 'reference' accepts, what 'hint_country' and 'hint_type' do, and providing format examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('resolve', 'finds and returns') and resources ('loose legal reference', 'exact document(s)'), distinguishing it from sibling tools like 'search' by explicitly stating 'This is NOT a search — it resolves a specific citation to the exact record.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance by stating when to use this tool ('resolves a specific citation to the exact record') and when not to use it ('NOT a search'), with clear alternatives implied through sibling tool names like 'search' and 'get_document'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
