
German Legal MCP Server

by metaneutrons

Server Quality Checklist

Profile completion: 67%

A complete profile improves this server's visibility in search results.
  • Disambiguation 5/5

    Each tool has a clearly distinct purpose, with no ambiguity between them. The tools are organized by data source (arxiv, dip, eul, icu, legis, rii) and action (get, search, states, toc), making it easy for an agent to select the correct tool for retrieving or searching specific legal documents, legislation, or court decisions.

    Naming Consistency 4/5

    The naming is mostly consistent with a source:action pattern (e.g., arxiv:get, dip:search), but there are minor deviations like eul:get_document and rii:get_decision instead of eul:get and rii:get. These deviations are minor and do not significantly hinder readability or predictability.

    Tool Count 5/5

    With 16 tools, the server is well-scoped for its purpose of accessing German and EU legal resources. Each tool serves a specific function across multiple domains (e.g., legislation, court decisions, parliamentary documents), and the count is appropriate for comprehensive coverage without being overwhelming.

    Completeness 5/5

    The tool set provides complete coverage for the domain, including retrieval and search capabilities for various legal sources (arXiv, Bundestag documents, EU legislation, CJEU decisions, German legislation, court decisions). There are no obvious gaps; agents can perform full CRUD-like operations (retrieve and search) across all relevant legal data types.
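The naming deviation flagged above (verbose `get_*` actions where a bare `get` fits the `source:action` pattern) can be checked mechanically. A minimal sketch; the tool roster below is reconstructed from this page and may not match the server's actual list exactly:

```python
import re

# Tool names as implied by this page; reconstructed, possibly incomplete.
TOOLS = [
    "arxiv:get", "arxiv:search",
    "dip:get", "dip:search", "dip:search_vorgang", "dip:search_plenarprotokoll",
    "eul:get_document", "eul:search",
    "icu:get_document", "icu:search",
    "legis:get", "legis:search", "legis:states", "legis:toc",
    "rii:get_decision", "rii:search",
]

SOURCE_ACTION = re.compile(r"^[a-z]+:[a-z_]+$")

def verbose_getters(tools: list[str]) -> list[str]:
    """Flag names whose action is a qualified 'get_*' where a bare 'get'
    would satisfy the source:action convention (e.g. eul:get_document)."""
    assert all(SOURCE_ACTION.match(t) for t in tools)
    return [t for t in tools if t.split(":", 1)[1].startswith("get_")]
```

Note that this heuristic also flags `icu:get_document`, which the review text mentions but does not list as a deviation; treat that hit as a guess.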

  • Average 4.2/5 across 16 of 16 tools scored. Lowest: 3.6/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v1.2.2

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 16 tools.
  • No known security issues or vulnerabilities reported.



  • Add related servers to improve discoverability.

Tool Scores

  • dip:search_vorgang

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full disclosure burden. It adds valuable behavioral context by specifying the return format ('Gesetzgebungsvorgänge with status and linked Drucksachen-Nummern'), which compensates for the missing output schema. However, it omits safety characteristics (read-only status), rate limits, or pagination behavior that would be necessary for a complete behavioral profile.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three efficiently structured sentences: purpose declaration, return value specification, and use case guidance. No redundancy or filler content; every sentence delivers distinct information necessary for tool selection.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of an output schema, the description appropriately compensates by describing what the tool returns. It handles the domain complexity (German legislative procedures) adequately, though it assumes familiarity with the DIP acronym and could explicitly state the read-only nature of the operation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the structured documentation already fully explains all 6 parameters (query, vorgangstyp, wahlperiode, dates, limit). The description adds no additional parameter semantics, examples, or usage notes beyond what the schema provides, warranting the baseline score.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool searches 'legislative processes (Vorgänge) in DIP' with specific German parliamentary terminology, and mentions it returns 'Gesetzgebungsvorgänge'. It distinguishes from dip:search_plenarprotokoll by focusing on legislative processes rather than parliamentary protocols, but does not explicitly differentiate from the generic dip:search sibling.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides use cases ('Useful for tracking a law through the legislative process or finding all related documents') which imply when to use the tool. However, it lacks explicit guidance on when NOT to use it or when to prefer siblings like dip:search or dip:get over this specialized search.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
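A hedged sketch of how the missing guidance could look: a rewritten description (hypothetical wording, not the server's actual text) that adds the explicit "use X instead of Y when Z" clause, plus a crude check for its presence:

```python
# Hypothetical rewrite of the dip:search_vorgang description; the last
# sentence adds the alternative-tool guidance this review asks for.
DESCRIPTION = (
    "Search legislative processes (Vorgänge) in DIP. Returns "
    "Gesetzgebungsvorgänge with status and linked Drucksachen-Nummern. "
    "Use this instead of dip:search when you need a law's status in the "
    "legislative process; use dip:get when you already have a Dokumentnummer."
)

def has_explicit_guidance(description: str) -> bool:
    """Crude heuristic: explicit guidance names at least one sibling tool."""
    siblings = ("dip:search", "dip:get")
    return "instead of" in description and any(s in description for s in siblings)
```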

  • eul:search

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It adds valuable behavioral context by disclosing the data source (EUR-Lex SPARQL endpoint) and return format (CELEX numbers, titles, dates). However, it omits other behavioral traits like rate limiting, caching behavior, or error handling that would be useful given the lack of annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of two efficient sentences with zero waste. The first sentence establishes the action and mechanism; the second documents the return values. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of an output schema, the description appropriately compensates by explicitly stating the return fields (CELEX numbers, titles, dates). It successfully covers the tool's purpose for a 4-parameter search operation, though mentioning the relationship to 'eul:get_document' (which likely consumes CELEX numbers) would further complete the workflow context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, establishing a baseline of 3. The description mentions 'directives, regulations, treaties' which aligns with the resource_type enum, but does not add significant semantic detail beyond what the schema already provides (e.g., no examples of query syntax or CELEX number format).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool searches 'EU legislation (directives, regulations, treaties)' using a specific verb and resource. It implicitly distinguishes from sibling 'eul:get_document' by specifying that it returns metadata (CELEX numbers, titles, dates) rather than full documents, though explicit differentiation would strengthen this further.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage context by documenting the return format (CELEX numbers, titles, dates), suggesting this is for discovery/metadata retrieval rather than full-text retrieval. However, it lacks explicit when-to-use guidance or comparison with alternatives like 'eul:get_document'.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • dip:search_plenarprotokoll

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Since no annotations exist, the description carries the full burden. It adds valuable matching logic ('search term appears in the debate text') and return-value disclosure ('Returns protocols'). However, it omits safety confirmation (read-only status), rate limits, or pagination behavior that annotations would typically provide.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste: first defines action and resource, second defines return behavior. Front-loaded with the most critical information (what is being searched) and appropriate length for the tool complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    With 100% schema coverage and no output schema, the description adequately covers the tool's purpose and return type. Minor deduction for not explicitly stating read-only/safety characteristics given the absence of annotations, though this is implied by 'Search' and 'Returns'.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing baseline 3. Description adds 'full text search' context for the query parameter but does not elaborate on specific parameter semantics (e.g., that 'herausgeber' filters by chamber) beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Excellent clarity: specific verb 'Search' + resource 'Plenarprotokolle (parliamentary debate transcripts)' + method 'full text search'. Distinguishes from sibling tools like dip:search and dip:search_vorgang by explicitly specifying the parliamentary debate transcript domain.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides implied usage context (use when searching debate text), but lacks explicit guidance on when to choose this over dip:search (general) or dip:search_vorgang (legislative procedures). No exclusions or prerequisites mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • icu:search

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses the return structure (list with case numbers, ECLI, dates, document IDs), clarifying this returns metadata rather than full documents. It does not explicitly state read-only/safe status or error behaviors, though these are reasonably implied by the search verb.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste: the first establishes scope and domain, the second specifies return values. Every word earns its place; no redundancy or filler present.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the straightforward 3-parameter search operation with primitive types and no output schema, the description adequately covers purpose and return values. It could be improved by explicitly mentioning the companion retrieval tool (icu:get_document) to complete the usage context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, providing detailed descriptions for query, language, and limit. The tool description does not add parameter-specific semantics beyond this (e.g., example search strategies, valid language code constraints), meriting the baseline score.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action (search), the exact resource (decisions and opinions of the CJEU), and the source system (InfoCuria). It distinguishes from siblings like arxiv (academic), dip (German parliamentary), and legis (legislation) by specifying the Court of Justice context.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The phrase 'document IDs for retrieval' implies this tool finds documents rather than returning full text, suggesting a separate retrieval step. However, it does not explicitly name the sibling tool (icu:get_document) or provide explicit when-to-use guidance versus other legal search tools like eul:search or rii:search.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • dip:get

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It discloses the return content ('extracted text including Gesetzesbegründung') and a critical behavioral branch (save_path saves to file 'instead of returning content'). It omits the safety profile, rate limits, and error conditions, but 'Retrieve' implies a read-only operation.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences, zero waste. Front-loaded with core action (retrieve by Dokumentnummer), followed by output description, then optional parameter guidance. Every clause provides distinct information not redundant with schema or name.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriate for a 3-parameter retrieval tool with simple schema. Covers document specificity, output format, and optional behaviors. No output schema exists, but description adequately explains return values. Minor gap: doesn't mention error handling for invalid dokumentnummer formats.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. Description adds value by explaining the functional purpose of optional parameters: section enables 'partial content' extraction and save_path triggers file persistence rather than return. Elevates above baseline by clarifying when to use these options.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb 'Retrieve' with resource 'Bundestagsdrucksache' and identifier method 'Dokumentnummer'. The term 'Bundestagsdrucksache' clearly distinguishes this from siblings handling arxiv papers, EU law, or court decisions. Includes concrete example format '19/27426'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear guidance on optional parameters: 'Use `section` for partial content' and save_path 'to save to file.' Lacks explicit contrast with dip:search (when to search vs. get by ID), though this is somewhat implied by the 'get' vs 'search' naming convention.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • legis:get

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses jurisdiction-specific ID format behaviors (BUND vs Länder patterns) and implies that save_path determines the output destination. Missing: the return content format, error behavior for invalid IDs, and rate limits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences, zero waste. Front-loaded with purpose ('Retrieve...'), followed immediately by jurisdiction-specific implementation details. Every word earns its place; German legal abbreviations (BUND, Länder) are used correctly without redundant explanation.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 100% schema coverage and no output schema, the description adequately covers the complexity of German legal citation formats (mapping BGB/GG/StGB to ID patterns). Minor gap: does not mention legis:toc sibling for table-of-contents retrieval or describe the actual document format returned.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with good descriptions. Description adds valuable context beyond schema: the 'stgb/§ 242' example (absent from schema) and explicit note that Länder formats 'vary by state', helping users understand the ID construction complexity.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear specific verb ('Retrieve') + resource ('law/norm from German federal or state legislation'). Effectively distinguishes from siblings like arxiv:get (academic papers), eul:get_document (EU law), and dip:get (parliamentary documents) by specifying German legislation domain.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides implicit workflow guidance by stating Länder IDs must come from 'legis:search results', suggesting users should search first for state laws but can directly construct IDs for federal (BUND) laws. Lacks explicit 'when not to use' or direct reference to legis:search as the alternative.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • legis:search

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Without annotations, the description carries the full burden. It discloses the return behavior ('Returns results with IDs') and scope constraints ('BUND does not support search'), but lacks an explicit read-only classification, authentication requirements, rate limits, and error-handling details.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four tightly constructed sentences with zero waste. Front-loaded with purpose, followed by sibling relationship, scope coverage, and limitation. Every sentence delivers distinct value.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a 3-parameter search tool, the description adequately covers purpose, sibling workflow, geographic scope, and federal limitations. Minor gap in not detailing the exact result structure beyond 'IDs', though this is acceptable without a formal output schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage with examples. The description reinforces the keyword search nature and BUND exclusion, but does not add semantic meaning beyond what the schema already provides (baseline 3 for high coverage).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action (search by keyword), resource (German state legislation/Landesrecht), and scope (all 16 Bundesländer). Effectively distinguishes from sibling 'legis:get' by noting this returns IDs for retrieval via that tool.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit when-not-to-use guidance ('BUND does not support search') and names the correct alternative ('use legis:get directly'). Also clarifies the retrieval workflow ('Returns results with IDs for retrieval via legis:get').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
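The workflow this entry describes (search Länder law first to obtain an ID; skip search for BUND, where it is unsupported) can be sketched as below. The `call_tool` client and the argument names (`query`, `id`, `jurisdiction`) are illustrative stand-ins, not the server's actual schema or a real MCP SDK API:

```python
def find_and_fetch(call_tool, keyword: str, jurisdiction: str):
    """Two-step retrieval: Länder laws need a legis:search first to obtain
    an ID; BUND laws are fetched directly by abbreviation (e.g. 'bgb')."""
    if jurisdiction == "BUND":
        # BUND does not support search; construct the ID directly.
        return call_tool("legis:get", {"id": keyword, "jurisdiction": "BUND"})
    hits = call_tool("legis:search", {"query": keyword, "jurisdiction": jurisdiction})
    if not hits:
        raise LookupError(f"no {jurisdiction} law matched {keyword!r}")
    return call_tool("legis:get", {"id": hits[0]["id"], "jurisdiction": jurisdiction})
```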

  • legis:states

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully sets expectations by specifying the exact count (17) and revealing that 'backends' are included in the response, which helps the agent understand this returns metadata rather than just names.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, front-loaded sentence with zero waste. Every phrase earns its place: the action ('List'), scope ('all 17'), domain ('German jurisdictions'), clarification ('BUND + 16 Bundesländer'), and return value detail ('with their backends').

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of an output schema, the description adequately compensates by indicating the return value includes both jurisdictions and their backends. For a simple enumeration tool, this is sufficient context, though explicitly mentioning the return format (e.g., array of objects) would strengthen it further.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema contains zero parameters, which per the scoring guidelines establishes a baseline of 4. The description appropriately indicates no filtering is possible ('List all'), confirming the parameter-less nature of the call.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('List') with a clear resource ('German jurisdictions') and precise scope ('17... BUND + 16 Bundesländer'). It effectively distinguishes this discovery tool from content-retrieval siblings like legis:get and legis:search by focusing on jurisdiction enumeration rather than document access.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the description implies this is a discovery tool for available jurisdictions, it lacks explicit guidance on when to use it (e.g., 'Call this first to obtain valid jurisdiction codes before using legis:get') or when not to use it. The relationship to sibling tools is clear from naming but not stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • legis:toc

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description must carry the full behavioral burden. It adds valuable context about performance ('Much lighter') and output structure ('compact list'), but omits safety characteristics (read-only status), rate limits, caching behavior, or error conditions that would be necessary for a complete behavioral picture.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is tightly structured with zero waste: purpose/output (sentence 1), sibling comparison (sentence 2), and parameter semantics (sentences 3-4). Every clause delivers actionable information without redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the domain complexity (17 jurisdictions, 5 parameters, no output schema), the description adequately covers the critical jurisdiction distinction and explains the lightweight return format. It could be improved by briefly describing the return structure or pagination, but successfully addresses the primary complexity (BUND vs Länder ID sourcing).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While the schema has 100% coverage (baseline 3), the description adds significant semantic value by explaining the jurisdiction-specific sourcing logic for 'id'—providing concrete examples for BUND ('bgb', 'stgb') and cross-referencing 'legis:search' for Länder. This domain context is essential for correct invocation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb-resource pair ('Get table of contents for a law') and clarifies the output format ('compact list of section numbers and headings'). It explicitly distinguishes from sibling 'legis:get' by noting this is 'Much lighter... for navigating large laws', clearly scoping its purpose.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides explicit comparative guidance by contrasting with 'legis:get' (lighter weight for navigation vs presumably heavier full retrieval). It also gives jurisdiction-specific instructions for the 'id' parameter (BUND abbreviations vs Länder IDs from legis:search), effectively guiding when to use which sourcing pattern. However, it lacks explicit 'when not to use' negative constraints.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses critical behavioral traits: title-only matching (not full text), and specific return values (Dokumentnummer, title, type, date, PDF URL). Does not mention rate limits or pagination mechanics, but covers essential operational constraints.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely efficient three-sentence structure. Front-loads core functionality, immediately clarifies limitation (title vs full text), specifies return format, and concludes with sibling reference. Zero redundancy; every clause provides distinct value.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    No output schema exists, but description compensates by enumerating exact metadata fields returned. Given 7 well-documented parameters and clear sibling relationships, description is complete for tool selection. Minor gap: does not clarify if results are paginated or how limit interacts with total results.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has 100% description coverage, establishing baseline 3. Description reinforces the 'query' parameter's scope (title matching) but does not add syntax details, format examples, or semantic elaboration beyond what the schema already provides for the 7 parameters.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action (Search) + resource (Bundestagsdrucksachen/parliamentary documents) + API context (DIP). Explicitly distinguishes scope from siblings by clarifying it matches 'document title — not full text' and contrasts with dip:get for full text retrieval.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow guidance: 'Use dip:get to retrieve full text... of a specific Drucksache.' Clearly defines when to use this tool (searching/metadata) versus the alternative (retrieving full content), effectively establishing the tool's position in a two-step workflow.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, description carries full burden and succeeds well: discloses Markdown output format, specific Randnummern syntax ([Rn. 5]{.rn}), and the dual-mode behavior (return content vs. file save via save_path). Minor gap: no mention of error behavior when case_id not found.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four sentences, zero waste. Front-loaded with core purpose, followed by output format specifics, then parameter usage guidance. Every sentence delivers unique information not redundant with schema or title.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a legal document retrieval tool with 4 parameters and no output schema, description adequately compensates by explaining return format and content structure. Absence of error handling documentation prevents a 5, but coverage is sufficient for correct invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Though schema coverage is 100% (baseline 3), description adds crucial usage examples for section ('Rn 5-12', heading text, 'lines:100-200') and clarifies save_path's side effect ('instead of returning content'). These behavioral details exceed raw schema definitions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Excellent specificity: 'Retrieve a CJEU decision or opinion from InfoCuria' provides exact verb, resource type, and source system. Distinguishes clearly from siblings like eul:get_document (EU legislation) and arxiv:get (academic papers) by specifying CJEU legal documents.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Clear input requirements (case number or CELEX number) implicitly signal this is for known documents versus searching. However, it doesn't explicitly reference sibling icu:search for discovery workflows or state prerequisites like 'use search first if case_id unknown'.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses output format (Markdown) and specific metadata fields returned (court, date, file number, ECLI). Missing error handling (e.g., invalid ID behavior) but covers core return structure well.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences, zero waste. First sentence establishes core function and output; second provides critical source-specific guidance. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriate for moderate complexity (5 params, source-conditional behavior). Compensates for lack of output schema by describing return format. Could explicitly note that 'part' and 'section' are source-dependent, though schema covers this.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% (baseline 3). Description adds valuable domain context linking 'BY' source to 'gesetze-bayern.de' and reinforces the format pattern, which helps the agent understand the parameter intent beyond the schema's technical enum values.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb ('Retrieve') + resource ('court decision') + key constraint ('by doc ID'). Clearly distinguishes from sibling 'rii:search' (search vs retrieval) and other domain tools (arxiv, legis, etc.).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides specific guidance for the 'BY' source (gesetze-bayern.de) and ID format. Implicitly suggests workflow (doc ID retrieval) but does not explicitly reference 'rii:search' as the prerequisite step or state when NOT to use the tool.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full behavioral disclosure burden. It effectively communicates the conditional output formats (metadata vs HTML full text vs PDF link), the date-based availability limitation for HTML content, and the fallback behavior for older papers. Lacks mention of rate limits or error conditions, but covers the critical behavioral variations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficiently structured sentences with zero redundancy. Front-loaded with core functionality (ID retrieval), followed by conditional behavior and constraints. Every clause conveys necessary information about output variations or availability limits.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a 3-parameter tool with simple flat schema and no output schema, the description is comprehensive. It compensates for the missing output schema by explicitly documenting the three possible return variations (metadata/abstract, HTML full text, or PDF link) and the conditions that trigger each.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Despite 100% schema coverage (baseline 3), the description adds value by embedding the ID example in context, reinforcing the trigger relationship between optional parameters and full-text mode, and—most importantly—providing the temporal constraint (~2024+) that affects parameter utility. This contextual limitation is essential for correct parameter usage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action (Retrieve) and resource (arXiv paper by ID), with an example ID format. It implicitly distinguishes from sibling arxiv:search by emphasizing ID-based retrieval versus search queries, and differentiates from unrelated domain siblings (dip, legis, etc.) by explicitly naming the arXiv domain.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear conditional guidance: default returns metadata+abstract, while section/save_path parameters trigger full text fetch. Explains availability constraints (~2024+ papers only for HTML) and fallback behavior for older papers. Does not explicitly name arxiv:search as the alternative for finding IDs, though the distinction is clear from context.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It compensates by disclosing return structure ('Returns metadata: arXiv ID, title, authors...') which is crucial given the lack of output schema. However, it omits explicit safety confirmation (read-only nature), rate limits, or error behaviors.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two tightly constructed sentences with zero waste. First sentence covers purpose and return values; second provides sibling differentiation. Information is front-loaded and appropriately sized for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a 4-parameter search tool with 100% schema coverage but no output schema, the description is complete. It documents the return payload in lieu of an output schema and clarifies the relationship to sibling 'arxiv:get', leaving no critical gaps for agent operation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing baseline 3. Description mentions searching 'by keywords, author, or category' which aligns with query field prefixes documented in schema, but adds no additional syntax details, examples, or semantic constraints beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description states specific verb ('Search') and resource ('arXiv preprints') with clear scope ('by keywords, author, or category'). It effectively distinguishes from sibling tool 'arxiv:get' by stating the latter is for retrieving full text, preventing confusion between search and retrieval operations.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly provides alternative tool guidance: 'Use arxiv:get with the arXiv ID to retrieve full text.' This clearly signals when to use the sibling instead, implying this tool is for discovery/metadata only, not full-text retrieval.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description adequately discloses key behaviors: it specifies the return format ('full text in Markdown'), explains the mutually exclusive output modes (return content vs 'save to file'), and identifies the external data source (EUR-Lex). Lacks error handling or rate limit disclosure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three tightly constructed sentences with zero waste. Front-loaded with core purpose, followed by optional parameter guidance. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of an output schema, the description compensates by stating the return format ('Markdown') and content scope ('full text'). All four parameters are addressed implicitly or explicitly, and the domain-specific CELEX identifier is thoroughly contextualized.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While schema coverage is 100%, the description adds significant value by providing concrete syntax examples for the 'section' parameter ('Art. 5', 'lines:100-200') and clarifying the behavioral implication of 'save_path' ('instead of returning content').

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Retrieve') + resource ('EU legislation from EUR-Lex') + identifier ('CELEX number'), clearly distinguishing it from siblings like 'arxiv:get' or 'legis:get' through the EUR-Lex/CELEX specificity.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides concrete usage context through real-world examples (GDPR, InfoSoc) and explains when to use specific parameters ('Use `section` for partial content'). However, it does not explicitly differentiate from the sibling 'eul:search' tool for cases when the user lacks a CELEX number.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the default source behavior, enumerates the specific court abbreviations covered by each source (BVerfG, BGH, etc.), and clarifies the return format (list with metadata and doc IDs). It omits rate limits or auth requirements, but the core behavioral contract is clear.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences efficiently cover: (1) purpose, (2) default source with federal court details, and (3) alternative source with Bavarian courts and return value. Every sentence adds unique information; no redundancy or filler.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of an output schema, the description appropriately discloses the return structure ('list of decisions with metadata and doc IDs'). Combined with the detailed court abbreviations for the German legal domain, the description provides complete context for a 3-parameter search tool.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While the input schema has 100% description coverage (baseline 3), the description adds significant domain-specific value by mapping the abstract 'federal' and 'Bavarian' source values to their constituent court types (BVerfG, BGH, AG, LG, etc.), which helps the agent understand the legal domain scope beyond the schema's technical enum values.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Search') and resource ('court decisions'), immediately clarifying scope. It distinguishes from sibling 'rii:get_decision' by noting the return value includes 'doc IDs for retrieval,' implying this tool finds documents while the sibling retrieves them.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear context for the 'source' parameter (federal vs. Bavarian courts) and implies when to use this tool versus retrieval by noting it returns IDs rather than full documents. However, it does not explicitly name 'rii:get_decision' as the alternative for full-text retrieval.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

german-legal-mcp MCP server

Copy to your README.md:

Score Badge

german-legal-mcp MCP server

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
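The weighting scheme above can be sketched in a few lines of Python. This is an illustrative reconstruction of the published formulas only: the dimension keys and function names are mine, and any rounding or normalization the platform applies internally is not documented here.

```python
# Illustrative sketch of the Glama quality-score formulas described above.
# Dimension weights sum to 1.0 and mirror the published percentages.
DIM_WEIGHTS = {
    "purpose": 0.25,       # Purpose Clarity
    "usage": 0.20,         # Usage Guidelines
    "behavior": 0.20,      # Behavioral Transparency
    "parameters": 0.15,    # Parameter Semantics
    "conciseness": 0.10,   # Conciseness & Structure
    "completeness": 0.10,  # Contextual Completeness
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 score for a single tool across the six dimensions."""
    return sum(DIM_WEIGHTS[d] * scores[d] for d in DIM_WEIGHTS)

def server_definition_quality(tool_scores: list) -> float:
    """60% mean TDQS + 40% minimum TDQS; one weak tool drags this down."""
    return 0.6 * (sum(tool_scores) / len(tool_scores)) + 0.4 * min(tool_scores)

def overall_score(definition_quality: float, coherence: float) -> float:
    """70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score onto the letter tiers listed above."""
    for threshold, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= threshold:
            return grade
    return "F"
```

For example, a tool scored 5/5/4/3/5/4 on purpose, usage, behavior, parameters, conciseness, and completeness works out to a TDQS of 0.25·5 + 0.20·5 + 0.20·4 + 0.15·3 + 0.10·5 + 0.10·4 = 4.4.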

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/metaneutrons/german-legal-mcp'
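The same endpoint can be called programmatically. A minimal Python sketch, assuming only what the curl example shows (the URL pattern; the response's JSON fields are not documented here, and the helper names are mine):

```python
import json
import urllib.request

BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, repo: str) -> str:
    """Build the directory-API URL for a server, mirroring the curl example."""
    return f"{BASE}/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    """GET the server record; the API responds with JSON."""
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)

# Example (performs a live request):
# data = fetch_server("metaneutrons", "german-legal-mcp")
# print(json.dumps(data, indent=2))
```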

If you have feedback or need assistance with the MCP directory API, please join our Discord server.