
German Legal MCP Server

by metaneutrons

rii:search

Search German federal and state court decisions using case numbers or keywords to find relevant rulings and retrieve detailed case information.

Instructions

Search for court decisions. Default source "bund": federal courts (BVerfG, BGH, BVerwG, BFH, BAG, BSG, BPatG). Source "bayern": Bavarian state courts (AG, LG, OLG, VG, VGH, FG, ArbG, LAG, BayVerfGH). Returns list of decisions with metadata and doc IDs for retrieval.

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| query | Yes | Search query. For file numbers (Aktenzeichen): use ONLY the file number without keywords (e.g., "I ZR 115/16"). For topics: keywords (e.g., "Metall auf Metall", "BGB § 823"). | |
| limit | No | Maximum number of results | 10 |
| source | No | Source: "BUND" (federal, default) or "BY" (Bavarian state courts via gesetze-bayern.de) | BUND |

Implementation Reference

  • The 'handleSearch' private method in RiiProvider executes the primary logic for 'rii:search' (BUND source).

    ```typescript
    private async handleSearch(args: Record<string, unknown>): Promise<ToolResult> {
      const { query, limit = 10 } = args as { query: string; limit?: number };

      const url = `${BASE_URL}/js_peid/Suchportlet2/media-type/html`;
      logger.info('Searching', { query });

      const response = await axios.get(url, {
        params: {
          formhaschangedvalue: 'yes',
          eventSubmit_doSearch: 'suchen',
          action: 'portlets.jw.MainAction',
          form: 'jurisExpertSearch',
          desc: 'text',
          query,
        },
        headers: { 'User-Agent': 'Mozilla/5.0 (compatible; German-Legal-MCP/1.0)' },
      });

      const $ = cheerio.load(response.data);
      const results: Array<{ title: string; docId: string; snippet: string }> = [];

      $('a.TrefferlisteHervorheben[id^="tlid"]').each((_, el) => {
        const href = $(el).attr('href') || '';
        const docIdMatch = href.match(/doc\.id=([^&]+)/);
        const title = $(el).attr('title') || $(el).text().trim();
        if (docIdMatch && !$(el).attr('id')?.includes('.')) {
          const snippet = $(el).closest('tr').find('.docPreview').text().trim();
          results.push({ title, docId: docIdMatch[1], snippet });
        }
      });

      const limitedResults = results.slice(0, limit);
      const markdown = limitedResults
        .map((r, i) => `${i + 1}. **${r.title}**\n   - Doc ID: \`${r.docId}\`${r.snippet ? `\n   - ${r.snippet}` : ''}`)
        .join('\n\n');

      return { content: [{ type: 'text', text: `Found ${results.length} results (showing ${limitedResults.length}):\n\n${markdown}` }] };
    }
    ```
  • Tool definition for 'rii:search' including input schema and description.

    ```typescript
    {
      name: 'rii:search',
      description:
        'Search for court decisions. Default source "bund": federal courts (BVerfG, BGH, BVerwG, BFH, BAG, BSG, BPatG). ' +
        'Source "bayern": Bavarian state courts (AG, LG, OLG, VG, VGH, FG, ArbG, LAG, BayVerfGH). ' +
        'Returns list of decisions with metadata and doc IDs for retrieval.',
      inputSchema: z.object({
        query: z.string().describe('Search query. For file numbers (Aktenzeichen): use ONLY the file number without keywords (e.g., "I ZR 115/16"). For topics: keywords (e.g., "Metall auf Metall", "BGB § 823").'),
        limit: z.number().optional().default(10).describe('Maximum number of results (default: 10)'),
        source: z.enum(['BUND', 'BY']).optional().default('BUND').describe('Source: "BUND" (federal, default) or "BY" (Bavarian state courts via gesetze-bayern.de)'),
      }),
    },
    ```
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the default source behavior, enumerates the specific court abbreviations covered by each source (BVerfG, BGH, etc.), and clarifies the return format (list with metadata and doc IDs). It omits rate limits or auth requirements, but the core behavioral contract is clear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently cover: (1) purpose, (2) default source with federal court details, and (3) alternative source with Bavarian courts and return value. Every sentence adds unique information; no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description appropriately discloses the return structure ('list of decisions with metadata and doc IDs'). Combined with the detailed court abbreviations for the German legal domain, the description provides complete context for a 3-parameter search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the input schema has 100% description coverage (baseline 3), the description adds significant domain-specific value by mapping the abstract 'federal' and 'Bavarian' source values to their constituent court types (BVerfG, BGH, AG, LG, etc.), which helps the agent understand the legal domain scope beyond the schema's technical enum values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Search') and resource ('court decisions'), immediately clarifying scope. It distinguishes from sibling 'rii:get_decision' by noting the return value includes 'doc IDs for retrieval,' implying this tool finds documents while the sibling retrieves them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for the 'source' parameter (federal vs. Bavarian courts) and implies when to use this tool versus retrieval by noting it returns IDs rather than full documents. However, it does not explicitly name 'rii:get_decision' as the alternative for full-text retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/metaneutrons/german-legal-mcp'
```
