Fedlex Connector
Server Details
Search Swiss federal legislation (laws, articles, and amendments) via the Fedlex SPARQL endpoint.
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | JayTheSkier/fedlex-connector |
| GitHub Stars | 3 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 4 of 4 tools scored. Lowest: 3.3/5.
Each tool has a clearly distinct purpose: get_article retrieves a specific article by number, get_law_text fetches full acts or sections, list_amendments shows version dates, and search_by_title finds laws by name. The descriptions explicitly differentiate them and guide usage, eliminating overlap.
All tool names follow a consistent verb_noun pattern (get_article, get_law_text, list_amendments, search_by_title). The verbs (get, list, search) are appropriate and uniform, with no mixing of naming conventions.
With 4 tools, this server is well-scoped for its domain of Swiss federal law access. Each tool serves a specific, necessary function without redundancy, making the count appropriate and manageable.
The tool set covers the key operations for legal research: retrieving specific articles, fetching full texts, listing amendment history, and searching titles. The one minor gap is the lack of a tool for searching article content directly, but agents can work around it by fetching text with get_law_text, as the descriptions suggest.
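That workflow can be made concrete with a hypothetical two-step agent session, expressed as MCP `tools/call` requests (the standard JSON-RPC envelope for MCP tool invocation). The query string and the RS number 220 are illustrative values drawn from the tool schemas below, not captured traffic:

```json
[
  {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "search_by_title",
      "arguments": { "query": "code des obligations", "language": "fr" }
    }
  },
  {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
      "name": "get_law_text",
      "arguments": { "rs_number": "220", "language": "fr" }
    }
  }
]
```

The first call resolves the act's RS number from its name; the second fetches the consolidated text so the agent can locate the relevant provisions.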
Available Tools
4 tools

get_article (Grade: A)
Retrieve a single article when you already know the EXACT article number (e.g. from a cross-reference). Do NOT call this tool repeatedly to search for provisions — use get_law_text instead to fetch the full act or a section and locate relevant articles in the text.
| Name | Required | Description | Default |
|---|---|---|---|
| rs_number | Yes | RS/SR number (e.g. '210' for CC, '220' for CO, '311.0' for CP) | |
| article | Yes | Article number (e.g. '3', '28a', '41') | |
| date | No | Consolidation date in YYYY-MM-DD format | latest available version |
| language | No | Language | de |
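For concreteness, a minimal sketch of a `tools/call` request for this tool. The values '220' and '41' are taken from the schema examples above; the date is an arbitrary illustration of the YYYY-MM-DD format, and 'fr' assumes the endpoint serves French alongside the default German:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_article",
    "arguments": {
      "rs_number": "220",
      "article": "41",
      "language": "fr",
      "date": "2024-01-01"
    }
  }
}
```

Omitting date returns the latest consolidated version, per the schema.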
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It effectively discloses key behavioral traits: it's a retrieval operation (implied read-only), requires exact article numbers, and warns against misuse for searching. However, it doesn't mention potential errors, rate limits, or authentication needs, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded, with two sentences that directly address purpose and usage guidelines. Every sentence earns its place by providing critical context without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, no annotations, no output schema), the description is largely complete. It covers purpose, usage guidelines, and behavioral constraints well. However, without annotations or output schema, it could benefit from mentioning response format or error handling, leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description adds minimal value beyond the schema by implying 'article' and 'rs_number' are required for exact lookup, but doesn't provide additional syntax or format details. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Retrieve a single article when you already know the EXACT article number.' It specifies the verb ('retrieve'), resource ('article'), and distinguishes it from sibling tools by explicitly contrasting with get_law_text for searching provisions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('when you already know the EXACT article number') and when not to ('Do NOT call this tool repeatedly to search for provisions'), naming the alternative tool ('use get_law_text instead'). This clearly differentiates it from sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_law_text (Grade: A)
Retrieve the official consolidated text of a Swiss federal act (or a specific title/chapter) directly from Fedlex (fedlex.admin.ch). This is the PRIMARY tool for answering Swiss law questions — always start here. Fetch the full act or a specific section, then locate relevant provisions in the returned text. Prefer this over get_article unless you already know the exact article number.
| Name | Required | Description | Default |
|---|---|---|---|
| rs_number | Yes | RS/SR number (e.g. '210' for CC, '220' for CO) | |
| section | No | Limit to a specific title, chapter, or part (e.g. 'Titre huitième', 'Zweiter Teil'). If omitted, returns the full act. | |
| page | No | Page number for paginated results. Large acts are split across multiple pages. | 1 |
| language | No | Language | de |
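A hedged sketch of a sectioned fetch, pulling only one part of an act rather than the full text. The section string 'Zweiter Teil' is the schema's own example; whether it matches an actual heading in RS 210 is not verified here:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_law_text",
    "arguments": {
      "rs_number": "210",
      "section": "Zweiter Teil",
      "page": 1
    }
  }
}
```

If the act is large, increment page to walk through the remaining pages of the returned text.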
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the source ('Fedlex') and that large acts are paginated, which adds useful context. However, it doesn't describe authentication needs, rate limits, error handling, or the return format (though no output schema exists), leaving some behavioral aspects unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the purpose and primary role, the second provides usage guidance and differentiation from siblings. Every sentence adds critical information with zero waste, making it front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 parameters, no annotations, no output schema), the description does well by clarifying the primary role, source, and sibling differentiation. However, it lacks details on return values (e.g., text format, pagination handling) and error cases, which could be important for a retrieval tool without output schema, leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds marginal value by implying the 'rs_number' is for specific acts (e.g., '210' for CC) and that 'section' can limit to titles/chapters, but doesn't provide syntax or format details beyond what the schema offers. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('retrieve'), resource ('official consolidated text of a Swiss federal act'), and scope ('directly from Fedlex'). It explicitly distinguishes this tool from sibling 'get_article' by positioning it as the PRIMARY tool for Swiss law questions, making the purpose specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('always start here' for Swiss law questions) and when to prefer alternatives ('prefer this over get_article unless you already know the exact article number'). It also mentions 'fetch the full act or a specific section' to clarify scope, offering comprehensive usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_amendments (Grade: B)
List consolidation version dates for a Swiss federal act. Returns the dates each consolidated version took effect.
| Name | Required | Description | Default |
|---|---|---|---|
| rs_number | Yes | RS/SR number | |
| since | No | Start date in YYYY-MM-DD format | 1 year ago |
| language | No | Language for amendment titles | de |
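A sketch of a typical invocation; the RS number 311.0 is borrowed from the get_article examples, and the since date is arbitrary:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "list_amendments",
    "arguments": {
      "rs_number": "311.0",
      "since": "2020-01-01",
      "language": "de"
    }
  }
}
```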
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns dates but does not cover important aspects like response format (e.g., list structure, pagination), error handling, rate limits, or authentication needs. This leaves significant gaps for a tool that likely involves data retrieval.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and return value with zero wasted words. It is appropriately sized for a straightforward listing tool, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks details on behavior, usage context, and output specifics, which are needed for full agent understanding. It meets the minimum viable threshold but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds nothing beyond the schema; it could, for instance, explain the significance of 'rs_number' or how 'since' affects the results. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List consolidation version dates') and resource ('for a Swiss federal act'), and specifies what is returned ('dates each consolidated version took effect'). It distinguishes from siblings like 'get_article' or 'get_law_text' by focusing on amendment timelines rather than content retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'search_by_title' or 'get_law_text'. It lacks context about prerequisites, such as needing the RS number, and does not mention any exclusions or typical use cases for amendment tracking.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_by_title (Grade: A)
Search Swiss federal legislation titles in the Classified Compilation (RS/SR) on Fedlex. Use to find the RS number of a law when you know its name but not its number. Searches titles only, not article content. Returns only acts currently in force.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Keywords to match against act titles (e.g. 'code civil', 'protection des données') | |
| language | No | Language for results | de |
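A minimal example request; the query string is the schema's own example, and per the description the results are limited to titles of acts currently in force:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "search_by_title",
    "arguments": {
      "query": "protection des données",
      "language": "fr"
    }
  }
}
```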
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by specifying the search scope limitation and that it returns only acts currently in force. However, it doesn't mention response format, pagination, rate limits, authentication requirements, or error conditions, leaving gaps for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with three tightly focused sentences that each earn their place: first establishes purpose, second provides usage context, third sets behavioral constraints. No wasted words, front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with no annotations and no output schema, the description does well by covering purpose, usage context, and key behavioral constraints. However, it doesn't describe the return format or structure, which would be important for the agent to understand what to expect from results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema descriptions. It mentions searching by title but doesn't provide additional syntax or format guidance for the query parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search Swiss federal legislation titles'), resource ('Classified Compilation (RS/SR) on Fedlex'), and scope ('titles only, not article content'). It explicitly distinguishes from content-based searches and specifies it's for finding RS numbers when names are known, providing excellent differentiation from sibling tools like get_article or get_law_text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('when you know its name but not its number') and when not to use it ('Searches titles only, not article content'). It also implicitly suggests alternatives by mentioning what it doesn't search, helping the agent choose between this and sibling tools like get_article for content searches.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:

- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:

- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.