Lawbster
Server Details
MCP server for German & EU law. Verified, citable legal context for any LLM. Daily updates from official sources, hosted in Germany.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.5/5 across 10 of 10 tools scored.
Each tool has a clearly distinct purpose: search, lookup, batch lookup, context, TOC, listing laws, citing decisions, statistics, and resource management. No functional overlap exists.
All legal tools follow a consistent 'legal_verb_noun' snake_case pattern. Two standard MCP tools (list_resources, read_resource) lack the prefix, but this is a minor deviation.
Ten tools is a well-scoped surface for a legal information retrieval server. It covers search, lookup, context, TOC, listing, statistics, and citing decisions without being too sparse or overwhelming.
The tool surface covers the core workflow (search→lookup→context→citing decisions) and auxiliary features (TOC, listing, statistics). A minor gap is the lack of a tool to retrieve the full text of an entire law, but this is not essential.
Available Tools
10 tools

legal_find_citing_decisions
Find German federal court decisions (BGH, BVerwG, BFH, BAG, BSG, BPatG, BVerfG) citing a specific legal provision. Matching is at paragraph level: '§ 280 Abs. 1 BGB' finds all decisions citing § 280 BGB. Returns decisions with Leitsatz (key legal principle summary) and ECLI identifier. Does not cover state court decisions.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of decisions to return (default 10). | |
| cited_norm | Yes | The legal provision, e.g. '§ 823 BGB', 'Art. 6 DSGVO'. | |
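
To make the calling convention concrete, here is a hypothetical `tools/call` request in the standard MCP JSON-RPC shape; the `cited_norm` and `limit` values are illustrative, taken from the examples above:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "legal_find_citing_decisions",
    "arguments": {
      "cited_norm": "§ 280 Abs. 1 BGB",
      "limit": 5
    }
  }
}
```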
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the burden of behavioral disclosure. It explains paragraph-level matching, return fields (Leitsatz, ECLI), and the scope of the search. This is sufficient for a read-only search tool, though it does not mention permissions, rate limits, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is exceptionally concise, consisting of three sentences that cover purpose, matching logic, return fields, and a limitation. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has two parameters and an output schema (implied by context signals), the description covers all essential aspects: what it finds, how it matches, what it returns, and what it excludes. No gaps are apparent for an agent to invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds significant value for the `cited_norm` parameter with a concrete example showing how subsection granularity is handled, which is not obvious from the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: finding German federal court decisions citing a specific legal provision. It lists the covered courts (BGH, BVerwG, etc.) and distinguishes itself from siblings like legal_search by specifying the exact use case and scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool (for federal decisions citing a provision) and explicitly states a limitation (does not cover state court decisions). However, it does not directly compare to alternatives like legal_search, leaving the agent to infer the differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
legal_get_context
Get adjacent norms (paragraphs/articles) before and after a target provision in document order. Use when a legal question may span consecutive provisions or when surrounding context is needed to understand a norm's scope.
Requires a norm_id from a prior legal_search or legal_lookup result. Returns the target norm plus up to 10 neighbors in each direction.
| Name | Required | Description | Default |
|---|---|---|---|
| after | No | Number of norms after the target (max 10). | |
| before | No | Number of norms before the target (max 10). | |
| norm_id | Yes | | |
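
A sketch of the `arguments` payload for a `tools/call` request, assuming a norm_id obtained from a prior legal_search or legal_lookup result; the id shown is a placeholder, not a real identifier:

```json
{
  "norm_id": "<norm_id from a prior legal_search result>",
  "before": 2,
  "after": 2
}
```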
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description bears full burden. It discloses return behavior: 'Returns the target norm plus up to 10 neighbors in each direction.' However, it does not explicitly mention that the operation is read-only (likely safe) or any potential side effects. Still, it provides reasonable transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with two clear sentences plus a usage hint. It front-loads the action and purpose, with no wasted words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and an output schema (indicated), the description provides a high-level summary of return values. It covers parameter constraints (up to 10 neighbors) and prerequisite (norm_id from prior tool). Completeness is strong, though a note on read-only nature would slightly improve it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 67% (2 of 3 parameters have descriptions). The description compensates for the missing `norm_id` description by stating it must come from a prior legal_search or legal_lookup result. This adds meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the verb 'get' and the resource 'adjacent norms' (paragraphs/articles) in document order. It distinguishes the tool from siblings like 'legal_search' and 'legal_lookup' by focusing on retrieving surrounding context rather than conducting searches or lookups.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'when a legal question may span consecutive provisions or when surrounding context is needed to understand a norm's scope.' Also provides a prerequisite: 'Requires a norm_id from a prior legal_search or legal_lookup result.' While it doesn't explicitly state when not to use, the context is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
legal_get_stats
Get database and search index statistics including counts of laws, norms, and indexed vectors. Use for health checks or to understand the scope of the legal corpus.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
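
Because the tool takes no parameters, a hypothetical `tools/call` params object is just the tool name with an empty arguments object:

```json
{
  "name": "legal_get_stats",
  "arguments": {}
}
```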
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states a read-like operation (get) but does not disclose any potential side effects, performance costs, or authentication requirements. For a simple stat tool with no parameters, this is adequate but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, with no unnecessary words. The first sentence states the function, and the second provides usage guidance. It is perfectly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, output schema exists), the description sufficiently covers its purpose and use cases. The output schema can document return values, so the description does not need to elaborate further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description does not need to explain any. Baseline is 4 for no parameters, and the description adds context about what the tool returns, which is beneficial.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves database and search index statistics, specifying the types of counts (laws, norms, indexed vectors). This distinct purpose sets it apart from sibling tools like legal_search or legal_lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly recommends usage for health checks or understanding corpus scope, providing clear context. However, it does not mention when not to use it or name alternative tools for related tasks.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
legal_get_toc
Get the table of contents of a specific law or court decision. Returns norm keys (e.g. '§ 1', 'Art. 3'), titles, and chapter headings in document order.
Use when you need an overview of a law's structure before drilling into specific provisions. Pass a returned norm_id to legal_get_context to read the full text. Paginated for large laws.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max entries per page (default 100). | |
| offset | No | Start position for pagination. | |
| source_type | No | 'gii', 'eurlex', 'eurlex_caselaw', 'rechtsprechung', 'sachsen_gesetze', or 'bayern_gesetze'. Leave empty to auto-detect. | |
| law_abbreviation | Yes | Law abbreviation, e.g. 'bgb', 'dsgvo', 'stgb'. | |
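
An illustrative `arguments` payload using the example abbreviation from the schema; source_type is omitted so the server auto-detects it, and limit/offset show the documented pagination defaults:

```json
{
  "law_abbreviation": "bgb",
  "limit": 100,
  "offset": 0
}
```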
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must fully disclose behavior. It mentions pagination ('Paginated for large laws') and auto-detection of source_type, but does not explicitly state that the operation is read-only, nor describe error handling or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each purposeful. First sentence states purpose and output. Second gives usage guidance. Third mentions pagination. No redundant or irrelevant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists, the description adequately describes return elements (norm keys, titles, chapter headings). It covers pagination, auto-detection, and usage flow for drilling into provisions. Completeness is high for a tool with 4 parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema descriptions cover all 4 parameters (100%), so baseline is 3. The description adds value by providing concrete examples for law_abbreviation ('bgb', 'dsgvo', 'stgb') and listing the source_type enum values plus auto-detect guidance. The pagination context clarifies limit/offset usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's primary function: retrieving the table of contents of a specific law or court decision. It specifies return elements (norm keys, titles, chapter headings) and distinguishes it from sibling tool legal_get_context by advising to pass norm_id for full text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says to use when needing an overview before drilling into provisions, and mentions passing norm_id to legal_get_context for full text. However, it does not explicitly compare with other siblings like legal_search or legal_lookup, nor provide a when-not-to-use scenario.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
legal_list_laws
List available laws, regulations, and court decisions in the database. Returns abbreviation, title, source type, jurisdiction, document kind, and version date for each entry.
Always pass a search term or source_type filter — the unfiltered list contains thousands of entries and is too large for context. Useful for discovering valid law abbreviations to use as filters in legal_search.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results per page (default 50). | |
| offset | No | Start position for pagination. | |
| search | No | Case-insensitive search on abbreviation and title, e.g. 'bgb', 'datenschutz'. | |
| source_type | No | Filter: 'gii', 'eurlex', 'eurlex_caselaw', 'rechtsprechung', 'sachsen_gesetze', or 'bayern_gesetze'. | |
| jurisdiction | No | Filter by jurisdiction: 'de', 'eu', 'de_by', 'de_sn'. Leave empty for all jurisdictions. | |
| document_kind | No | Filter by document type: 'statute', 'regulation', 'directive', 'decision'. | |
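
An illustrative `arguments` payload that follows the description's advice to always pass a filter; the search term is taken from the schema example:

```json
{
  "search": "datenschutz",
  "limit": 50
}
```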
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return fields and warns about data volume but doesn't mention read-only nature or side effects. Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise paragraphs: first for purpose and output, second for usage guidance. No fluff, front-loaded, every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With output schema present, description explains return fields. All 6 parameters documented in schema, usage warnings given. Complete for a list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% parameter descriptions. The description adds context ('search or source_type filter') but does not significantly enhance beyond schema. Baseline score 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists laws, regulations, and court decisions with specific return fields (abbreviation, title, etc.). It distinguishes from siblings by noting its utility for discovering law abbreviations for legal_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises to always pass a search or source_type filter due to large unfiltered list, and indicates usefulness for finding valid abbreviations for legal_search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
legal_lookup
Get the full text of a specific legal provision by exact citation (e.g. '§ 823 BGB', 'Art. 6 DSGVO', '§ 280 Abs. 1 BGB'). Citation order is flexible — '§ 9 DSGVO', 'DSGVO Art. 9', 'Artikel 9 DSGVO' all resolve correctly.
IMPORTANT: Only call this tool AFTER legal_search has confirmed the correct provision. Do not guess citations from training data — always search first, then look up.
| Name | Required | Description | Default |
|---|---|---|---|
| citation | Yes | Full citation string, e.g. '§ 823 BGB'. | |
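
A minimal illustrative `arguments` payload, using the citation example from the schema (per the description, sent only after legal_search has confirmed the provision):

```json
{
  "citation": "§ 823 BGB"
}
```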
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses flexible citation order (fuzzy matching) and implies a read operation. However, it does not explicitly state safety or idempotency, though the context makes it clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, zero wasted words. The caveat is appropriately placed and brief.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lookup tool with a single parameter and an output schema, the description is sufficient. It covers when to use it and the input format. It could mention error handling for invalid citations, but the presence of an output schema mitigates the need to describe return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value by explaining citation format flexibility (order independence, prefix variations). This goes beyond the schema's 'Full citation string' description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving the full text of a legal provision by citation. It uses specific verbs like 'Get' and 'resolve', and distinguishes the tool from siblings like legal_search (search vs lookup) through explicit workflow instructions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states that the tool should only be called after legal_search, and warns against guessing citations from training data. This provides clear when-to-use and when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
legal_lookup_batch
Look up full text of multiple legal provisions in a single call (exact match). Accepts 1-20 citations (e.g. ['§ 823 BGB', 'Art. 6 DSGVO']). Use this instead of multiple legal_lookup calls.
IMPORTANT: Only call AFTER legal_search has confirmed the provisions. Returns exact matches only — provisions not found appear as found=false. For fuzzy matching of hard-to-find provisions, use individual legal_lookup.
| Name | Required | Description | Default |
|---|---|---|---|
| citations | Yes | List of citation strings, e.g. ['§ 823 BGB', 'Art. 6 DSGVO']. Max 20. | |
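
An illustrative `arguments` payload using the schema's example citations; per the description, any provision not found would come back with found=false:

```json
{
  "citations": ["§ 823 BGB", "Art. 6 DSGVO"]
}
```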
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since annotations are absent, the description fully discloses behavioral traits: exact match behavior, return of found=false for unfound provisions, citation limit of 20, and the prerequisite of prior legal_search confirmation. This covers key aspects beyond the basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise with four sentences, each providing essential information. It is front-loaded: first sentence states core purpose, followed by details, usage constraints, and alternatives. No filler or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter with complete schema coverage and the presence of an output schema, the description provides sufficient context. It covers batch size, exact match, return behavior, and prerequisites. For a tool of this complexity, no further details are needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already describes the 'citations' parameter with format and limits. The description adds crucial semantics: exact match confirmation, output behavior (found=false), and the recommended usage context (after legal_search). This enriches understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'Look up full text of multiple legal provisions in a single call (exact match).' It specifies the verb (look up), resource (legal provisions), and mode (batch, exact match). It distinguishes itself from sibling tools like legal_lookup (individual) and legal_search (fuzzy search).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Use this instead of multiple legal_lookup calls.' and 'Only call AFTER legal_search has confirmed the provisions.' It also provides alternatives: 'For fuzzy matching of hard-to-find provisions, use individual legal_lookup.'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
legal_search
Search legal documents across jurisdictions (DE federal, EU, Bavaria, Saxony) using hybrid semantic + keyword search. Returns ranked results with content snippets, not full text.
Always use this tool first before legal_lookup. Do not rely on training data to identify the correct provision — similar rules often exist across multiple laws. Rephrase colloquial language into legal terminology for best results. Phrase search queries as natural sentences, not keyword lists (e.g. 'Wann verjährt ein Schadensersatzanspruch?' rather than 'Verjährung Schadensersatz Frist BGB'). The system searches statutory texts, so use the language of the statute, not doctrinal terms (e.g. 'Auslegung mehrdeutiger Klauseln' rather than 'contra proferentem'). Leave all filters empty for thematic questions; only set law_abbreviation when the user explicitly names a specific law. When results span multiple laws or versions, check the legal://rechtsrahmen resource to pick the correct jurisdiction (e.g. EU vs national, substantive vs procedural).
COMMON PITFALLS — choose the correct law:
Procedural law by jurisdiction: ZPO (civil), StPO (criminal), VwGO (administrative), ArbGG (labor), SGG (social), BVerfGG (constitutional), FamFG (family)
AO, EStG, KStG, UStG, FGO: AO=tax procedure, EStG/KStG/UStG=substantive tax law, FGO=tax court procedure
VVG, BGB: Insurance rescission → BGB §§ 812 ff., not VVG
VersAusglG, FamFG: VersAusglG=substantive, FamFG=procedure
ApoG, AMG: ApoG=pharmacy operation, AMG=drug approval
WPflG, SG: WPflG=conscription, SG=soldiers' legal status
AGG, BetrVG: AGG=anti-discrimination, BetrVG=works council
| Name | Required | Description | Default |
|---|---|---|---|
| court | No | Filter by court: 'BGH', 'BVerfG', 'BVerwG', 'BFH', 'BAG', 'BSG', 'BPatG'. | |
| query | Yes | Search query as a natural sentence using statutory language (not keywords or doctrinal terms). | |
| top_k | No | Number of results to return (default 5). | |
| chapter | No | Filter by chapter/section within a law. Rarely needed — can reduce recall. Only useful with law_abbreviation. | |
| date_to | No | Include results until this date (YYYY-MM-DD). | |
| date_from | No | Include results from this date onwards (YYYY-MM-DD). | |
| source_type | No | Filter by source: 'gii' (German federal laws like BGB, StGB), 'eurlex' (EU regulations like DSGVO, DSA), 'eurlex_caselaw' (EU court decisions, EuGH/EuG), 'rechtsprechung' (federal court decisions), 'sachsen_gesetze' (Saxony), 'bayern_gesetze' (Bavaria). Leave empty for cross-source search. | |
| jurisdiction | No | Filter by jurisdiction: 'de' (German federal), 'eu' (EU), 'de_by' (Bavaria), 'de_sn' (Saxony). Leave empty for cross-jurisdiction search. | |
| decision_type | No | Filter by decision type: 'Urteil' or 'Beschluss'. | |
| document_kind | No | Filter by document type: 'statute', 'regulation', 'directive', 'decision'. Leave empty for all types. | |
| law_abbreviation | No | Filter by law abbreviation, e.g. 'bgb', 'dsgvo'. Leave empty for thematic questions. | |
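
An illustrative `arguments` payload for a thematic question, using the example query from the description; all filters are left empty, as the description recommends:

```json
{
  "query": "Wann verjährt ein Schadensersatzanspruch?",
  "top_k": 5
}
```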
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so description carries full burden. It explains the search method (hybrid semantic+keyword), that results are ranked with snippets, and mentions source types and jurisdictions. It does not explicitly state read-only or side effects, but the context of search implies no destructive behavior. Lacks explicit safety statement but sufficient for agent understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is lengthy but well-structured: first paragraph gives purpose, followed by usage instructions in bullet style, then common pitfalls. Front-loaded with key action (use before legal_lookup). Some redundancy in the pitfalls section could be trimmed, but overall efficient given the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 11 parameters (many optional), no annotations, and presence of output schema, the description is remarkably complete. It covers when to use, query construction, filter strategy, common pitfalls, and even points to other resources (legal://rechtsrahmen). The only minor gap is lack of output format detail, but output schema likely handles that.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline 3. Description adds value beyond schema by explaining when to use filters (e.g., 'leave all filters empty for thematic questions'), provides concrete query examples, and the pitfalls section indirectly maps law abbreviations to contexts. It goes beyond mere parameter listing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches legal documents across specified jurisdictions (DE federal, EU, Bavaria, Saxony) using hybrid semantic+keyword search, returning ranked snippets. It distinguishes itself from sibling 'legal_lookup' by stating 'Always use this tool first before legal_lookup' and notes it returns snippets not full text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to use this tool before legal_lookup, provides query rephrasing guidance, language preferences (German), and filter usage rules (leave empty for thematic questions). Includes a detailed 'COMMON PITFALLS' section to choose correct law, covering many real-world confusion cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_resources (read-only)
List all available resources and resource templates.
Returns JSON with resource metadata. Static resources have a 'uri' field, while templates have a 'uri_template' field with placeholders like {name}.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, so the tool is safe. The description adds transparency by describing the return format (JSON with metadata) and differentiating static resources from templates, which is beyond the annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words. The main action is front-loaded, and additional details are provided efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (zero parameters, output schema exists), the description is complete. It explains the output structure and the difference between resource types, which is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With no parameters and 100% schema coverage, the description does not need to add parameter details. According to guidelines, empty parameter list yields baseline score of 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all available resources and resource templates, distinguishing between static resources and templates with placeholders. This is specific and distinct from sibling tools like read_resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (to list all resources) but does not explicitly mention when not to use or compare with siblings like read_resource. No exclusions or alternatives are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
read_resource (read-only)
Read a resource by its URI.
For static resources, provide the exact URI. For templated resources, provide the URI with template parameters filled in.
Returns the resource content as a string. Binary content is base64-encoded.
| Name | Required | Description | Default |
|---|---|---|---|
| uri | Yes | The URI of the resource to read | |
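
An illustrative `arguments` payload, using the legal://rechtsrahmen resource URI mentioned in the legal_search description (assuming it is a static resource on this server):

```json
{
  "uri": "legal://rechtsrahmen"
}
```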
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark the tool as read-only. The description adds value by explaining return format (string, base64 for binary), which is beyond annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences are concise and front-loaded with the main action. Could be slightly more streamlined, but no unnecessary fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Output schema exists, so return value details are not required. Description covers both resource types and encoding. Minor omission: no mention of error handling or missing resources, but overall complete for a simple read tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With only one parameter and schema coverage 100%, the description adds critical semantics: it explains how to use the URI parameter for static vs templated resources, which the schema description ('The URI of the resource to read') does not convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Read a resource by its URI', which is a specific verb+resource pair. It distinguishes from sibling 'list_resources' by focusing on reading a single resource by URI, and adds nuance about static vs templated URIs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance for static and templated URIs, telling the agent when to use exact URI vs filled template. Does not explicitly state when not to use or alternatives, but context from sibling names makes it clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!