Bundestag De
Server Details
Bundestag DIP MCP
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
- Repository
- pipeworx-io/mcp-bundestag-de
- GitHub Stars
- 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 17 of 17 tools scored. Lowest: 1.7/5.
The inclusion of the broad ask_pipeworx tool overlaps with many specialized tools like compare_entities, entity_profile, and validate_claim, making it unclear when to use which. The Bundestag-specific tools are distinct, but the generic data tools create significant ambiguity.
Most tool names follow a verb_noun pattern in snake_case (e.g., search_drucksachen, compare_entities). A few exceptions like pipeworx_feedback (noun_noun) and discover_tools (verb but not clearly tool-specific) cause minor inconsistency.
With 17 tools, the server is over-scoped. Only about 6 tools are directly related to Bundestag, while 11 are generic data tools. This dilutes focus and suggests a lack of clear purpose for the server.
The Bundestag-specific tools are limited to searching documents and plenary protocols, lacking essential operations like voting records, committee details, or legislative lifecycle tracking. The generic tools also do not cover the full range of data suggested by the server name.
Available Tools
17 toolsask_pipeworxARead-onlyInspect
PREFER OVER WEB SEARCH for questions about current or historical data: SEC filings, FDA drug data, FRED/BLS economic statistics, government records, USPTO patents, ATTOM real estate, weather, clinical trials, news, stocks, crypto, sports, academic papers, or anything requiring authoritative structured data with citations. Routes the question to the right one of 1,423+ tools across 392+ verified sources, fills arguments, returns the structured answer with stable pipeworx:// citation URIs. Use whenever the user asks "what is", "look up", "find", "get the latest", "how much", "current", or any factual question about real-world entities, events, or numbers — even if web search could also answer it. Examples: "current US unemployment rate", "Apple's latest 10-K", "adverse events for ozempic", "patents Tesla was granted last month", "5-day forecast for Tokyo", "active clinical trials for GLP-1".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already mark this as read-only. The description adds context: it routes to multiple tools, fills arguments, and returns structured answers with citation URIs. It does not cover failure modes or latency, but the core behavior is well-described.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively long but well-organized: starts with a strong directive, lists domains, explains internal routing, then gives usage patterns and examples. Each sentence adds value, though some redundancy exists in the examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity as a router, the description covers purpose, usage, output format (structured answer with citations), and examples. No issues with missing output schema since the description indicates what is returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter 'question' with a minimal description. The tool's description compensates by explaining the scope of acceptable questions (e.g., 'current US unemployment rate', 'Apple's latest 10-K'), adding significant value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: routing natural language questions to over 1,400 specialized tools to answer factual queries. It explicitly distinguishes itself from web search and lists numerous domains, leaving no ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool (prefer over web search, for factual questions about real-world data) and provides examples. It also covers the types of questions (e.g., 'what is', 'look up', 'find') and domains, giving clear guidance without needing to reference siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entitiesARead-onlyInspect
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, meaning it is safe. The description adds behavioral details: what data is pulled for each type (revenue, net income, adverse events, etc.) and that it returns paired data with citation URIs, which goes beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph but is information-dense without being verbose. Every sentence contributes value, though it could be slightly restructured for readability. Overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains return type (paired data, citation URIs) and covers all parameter details. It fully describes the tool's behavior and use cases, leaving no major gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds meaning by explaining the format of 'values' for company vs drug, including example tickers and drug names, which enhances understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it compares 2–5 companies or drugs side by side, specifying the data pulled for each type. It distinguishes itself from sibling tools like entity_profile by focusing on side-by-side comparisons.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use examples (e.g., 'compare X and Y', 'X vs Y') and clarifies the type parameter. While it lacks explicit when-not-to-use, the examples are clear and it mentions replacing multiple sequential calls, implying preference over alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_toolsARead-onlyInspect
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description and annotations both indicate a read-only lookup (readOnlyHint=true). The description adds that it returns 'top-N most relevant tools' but omits details on ordering or pagination, which is acceptable given the annotations already cover safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single coherent paragraph with clear front-loading of purpose. While efficient, it could be slightly more structured with bullet points for the list of topics, but remains appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with no output schema, the description covers purpose, usage guidelines, parameter details, and examples. It is sufficiently complete for an agent to understand when and how to invoke it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema provides full coverage (100%) with descriptions for query and limit. The description adds default value (20) and max (50) for limit, plus concrete query examples, providing extra context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Find tools by describing the data or task' using a specific verb and resource, and distinguishes itself from sibling tools (e.g., entity_profile, search_activities) by focusing on discovery rather than direct access.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises 'Call this FIRST when you have many tools available and want to see the option set', and provides examples of when to use (browse, search, look up) with a list of covered domains.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profileARead-onlyInspect
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the description does not need to restate safety. It adds value by detailing the returned data (SEC filings, revenue, patents, etc.) and mentioning citation URIs. It does not mention limitations or rate limits, but the behavior is sufficiently clear for a read-only profile tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loaded with the core purpose. It uses two sentences plus a list of return types. Minor improvement could be breaking the long second sentence for readability, but overall it is efficient and to the point.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return content clearly (SEC filings, fundamentals, patents, news, LEI). It also covers input format and edge cases (names not supported). Could mention error states or data availability, but the current coverage is strong for a composite profile tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description significantly enriches parameter understanding: it clarifies that only 'company' type is supported and explains that value must be a ticker or zero-padded CIK, not a name. It also provides example inputs and references an alternative tool for names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses the specific verb 'Get' and clearly identifies the resource as 'everything about a company' in one call. It lists specific data types (SEC filings, fundamentals, patents, news, LEI), distinguishing it from sibling tools like compare_entities and resolve_entity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explicitly states when to use the tool: when a user asks 'tell me about X', 'give me a profile', etc. It also provides an exclusion: 'Names not supported — use resolve_entity first', guiding the agent to an alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forgetADestructiveInspect
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description goes beyond the annotation (readOnlyHint=false) by specifying 'delete' and 'clear sensitive data', providing concrete behavioral context that the tool is destructive and used for memory management.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no redundancy, front-loaded with the core action and followed by usage guidance. Every sentence serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description fully covers what the tool does, when to use it, and how it relates to siblings. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage and a clear schema description for the single parameter, the description adds little extra meaning beyond 'delete by key'. The usage context is helpful but not specific to parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Delete a previously stored memory by key', providing a specific verb and resource. It also mentions pairing with 'remember' and 'recall', which helps distinguish it from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'when context is stale, the task is done, or you want to clear sensitive data.' It also suggests pairing with related tools, guiding workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_drucksacheARead-onlyInspect
Drucksache (printed document) detail by id.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Drucksache id (numeric, as string) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the description does not need to restate that. However, it adds no further behavioral context (e.g., authentication, rate limits, or behavior if ID not found). The description is adequate but uninformative beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that precisely conveys the core purpose with no filler. It is appropriately front-loaded and concise, wasting no words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one parameter, no output schema), the description is minimally adequate. However, it omits details about the return format or what 'detail' includes, which could be helpful for an AI agent. Overall, it covers the basics but lacks completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description of the 'id' parameter. The description does not add any additional meaning or constraints beyond what the schema provides, so it meets the baseline for this dimension.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a Drucksache by ID, using a specific verb ('get') and resource ('Drucksache detail'). It is distinct from sibling 'search_drucksachen' which handles search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, such as search_drucksachen. The description only implies use when an ID is available, but lacks explicit context or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_plenarprotokollDRead-onlyInspect
Plenary protocol detail.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Plenarprotokoll id |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotation readOnlyHint=true is already provided, but the description adds no further behavioral context. It does not mention output format, potential errors, rate limits, or any side effects. The description is too brief to enhance transparency beyond the annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short, but this is not conciseness—it is under-specification. A good concise description would still be informative. Here, the single phrase does not earn its place as it lacks actionable content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one parameter, no output schema), the description should clarify what details are returned. It fails to do so. The tool likely returns full protocol data, but the description does not confirm that. It is completely inadequate for an agent to understand the tool's output or behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Although the input schema has 100% coverage with a description for 'id', the tool description adds no additional meaning. It fails to explain the role of the parameter in context or provide format expectations (e.g., typical ID structure). The description is redundant and does not improve understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Plenary protocol detail.' is a vague noun phrase rather than a clear verb+resource statement. It hints at retrieving details but does not explicitly state the action (e.g., 'Get details of a plenary protocol'). The purpose is ambiguous and does not distinguish from sibling tools like search_plenarprotokolle.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. Given the sibling search_plenarprotokolle, one would expect context that this tool retrieves a single protocol by ID, but no such indication is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedbackAInspect
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations show readOnlyHint=false, consistent with a write operation. The description discloses rate limiting, daily limits, and that feedback influences the roadmap. It also notes the team reads digests daily, providing behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured paragraph of about 100 words. It front-loads the purpose and includes every piece of information needed without redundancy. Each sentence contributes meaningfully.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having 3 parameters (one nested) and no output schema, the description covers all necessary aspects: purpose, usage, formatting, limitations, and what happens to feedback. No gaps are evident.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers all three parameters fully. The description adds value by elaborating on the 'type' enum values and giving usage tips (e.g., 'be specific', 'don't paste the end-user's prompt', '2000 chars max'). With 100% schema coverage, baseline is 3, but the additional context justifies a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'Tell the Pipeworx team something is broken, missing, or needs to exist.' It lists specific use cases (bug, feature/data_gap, praise) and distinguishes from sibling tools by noting it is for feedback, not querying data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear when-to-use guidance for each feedback type. It instructs on how to describe issues (in terms of tools/packs, not pasting prompts) and mentions rate limit (5 per identifier per day) and that it's free with no tool-call quota impact.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recallARead-onlyInspect
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, and description adds scope context ('Scoped to your identifier...'). No destructive traits disclosed, but with annotations providing safety profile, the extra scope detail is valuable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with no wasted words. Front-loaded with core action, then usage guidelines, then pairing info. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with one optional param and readOnlyHint annotation, the description covers purpose, usage, scoping, and relationships to sibling tools. No missing critical context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers the key parameter with description. Description adds the critical detail that omitting the key lists all saved keys, which is not in the schema. This adds meaning beyond schema given 100% coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Retrieve a value previously saved via remember, or list all saved keys', with a specific verb and resource. It distinguishes from siblings (remember, forget) by mentioning them and their roles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'look up context the agent stored earlier... without re-deriving it from scratch'. Also explains omit key behavior. However, no explicit 'when not to use' or alternative tools beyond siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changesARead-onlyInspect
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (readOnlyHint=true) are consistent with description. Adds details about parallel fan-out to multiple sources and return format (structured changes + count + URIs). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose and usage examples. Two sentences cover intent and capabilities; slightly verbose but each sentence adds value. Could be tightened.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description explains return values (structured changes, total_changes, citation URIs). Covers all essential aspects for agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). Description adds examples for 'since' (ISO date, relative shorthand), clarifies 'value' (ticker/CIK), and notes 'type' is limited to 'company'. Significantly enhances schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states tool returns recent changes for a company, listing specific data sources (SEC EDGAR, GDELT, USPTO). Differentiates from siblings by focusing on 'what's new' vs. other entity tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists user queries that trigger the tool ('what's happening with X?', 'any updates on Y?', etc.). Implies broad monitoring via parallel sources, but lacks explicit exclusions or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rememberAInspect
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (readOnlyHint=false), description adds key details: key-value scoped by identifier, persistence differences for authenticated vs anonymous users (24-hour retention), and pairing with other tools.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with main purpose, followed by usage guidance, behavioral details, and pairing. Every sentence is necessary and well-organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Does not mention return value. Since no output schema exists, the description should explain what the tool returns (e.g., success confirmation or key). This omission creates a gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already covers 100% of parameters; description adds practical examples and usage context (e.g., 'a resolved ticker, a target address'), going beyond the schema's generic descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'Save data' and the resource 'memory'. Distinguishes from siblings like recall and forget by describing the full trio.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises 'Use when you discover something worth carrying forward' and contrasts with recall and forget. Does not explicitly state when not to use, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entityARead-onlyInspect
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint: true, and the description confirms read-only behavior. It adds transparency by stating the return type includes IDs and 'pipeworx://' citation URIs, which is valuable beyond the annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at about 100 words, front-loaded with the core purpose, followed by usage guidance, concrete examples, and return information. Every sentence is meaningful and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description thoroughly covers the tool's purpose, when to use it, examples for both entity types, and what it returns (IDs, citation URIs, and additional info like ingredient and brand). It is complete given the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description repeats examples already in the schema (e.g., 'ticker (AAPL)') but does not add new semantic meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Look up the canonical/official identifier for a company or drug,' using a specific verb and resource. It distinguishes itself from sibling tools by specifying that it replaces multiple lookup calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use when a user mentions a name and you need the CIK...' and 'Use this BEFORE calling other tools that need official identifiers,' providing clear guidance on when to use and the sequence relative to other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_activitiesARead-onlyInspect
Combined activity feed across Bundestag and Bundesrat.
| Name | Required | Description | Default |
|---|---|---|---|
| num | No | 1-200 (default 50) | |
| query | No | Free-text — German recommended | |
| cursor | No | Pagination cursor from prior page | |
| format | No | json (default) | xml | |
| date_to | No | YYYY-MM-DD | |
| ressort | No | Ministerium (e.g. "BMI") | |
| date_from | No | YYYY-MM-DD | |
| descriptor | No | GND subject descriptor (e.g. "Klimaschutz") |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the description does not need to restate that. However, it adds no behavioral context beyond the brief purpose, such as pagination behavior, return format details, or any rate limits. With annotations covering the safety profile, a score of 3 is appropriate for not contradicting but also not enriching.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, short sentence that front-loads the core purpose. While concise, it could benefit from slightly more context given the tool's complexity (8 optional parameters), but it avoids unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema and 8 parameters, yet the description provides no information about the return structure, what constitutes an 'activity,' or how to effectively use the parameters. This leaves significant gaps for an agent to select and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All 8 parameters have descriptions in the input schema (100% coverage), so the schema already explains each one. The description adds no extra meaning or usage tips beyond the schema, meeting the baseline of 3 for high-coverage scenarios.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides a 'combined activity feed across Bundestag and Bundesrat,' specifying both the resource (activity feed) and scope (two chambers). Among sibling search tools, this distinguishes itself by focusing on activities rather than specific document types like search_drucksachen or search_plenarprotokolle.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no explicit guidance on when to use this tool versus alternatives (e.g., search_drucksachen for documents, search_persons for people). It implies use for a combined activity feed, but fails to mention exclusions or prerequisites, leaving the agent to infer from the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_drucksachenCRead-onlyInspect
Search printed documents (bills, motions, answers).
| Name | Required | Description | Default |
|---|---|---|---|
| num | No | ||
| query | No | ||
| cursor | No | ||
| date_to | No | ||
| date_from | No | ||
| drucksachentyp | No | Antrag | Gesetzentwurf | Beschlussempfehlung | Kleine Anfrage | … |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations include 'readOnlyHint: true', which implies read-only behavior. The description only says 'Search', which is consistent, but it does not disclose any additional behavioral traits such as pagination (cursor parameter), result limits, or data freshness. With annotations already providing the safety profile, the description adds no extra transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, which is short, but it omits essential details about parameters and usage. This is under-specification rather than effective conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of 6 parameters (including date range and cursor) and no output schema, the description fails to provide enough context for an agent to use the tool correctly. It does not explain the return format, pagination behavior, or how to combine parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 6 parameters but only 17% (drucksachentyp) have a description. The tool description does not explain any parameters; it only gives examples of document types, which loosely relates to drucksachentyp. It fails to clarify the meaning of 'num', 'query', 'cursor', 'date_from', 'date_to'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Search printed documents' with specific examples (bills, motions, answers), making the purpose easy to understand. It distinguishes from siblings like 'get_drucksache' which retrieves a single document, and 'search_plenarprotokolle' which searches plenary protocols.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No information is provided about when to use this tool versus alternatives. The description does not mention limitations, prerequisites, or contrast with other search tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_personsARead-onlyInspect
Search people referenced in DIP (members, ministers, witnesses).
| Name | Required | Description | Default |
|---|---|---|---|
| num | No | ||
| query | No | Name fragment | |
| cursor | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so description need not repeat. It adds value by specifying the scope (DIP people types), but lacks details on pagination or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, concise sentence that front-loads the purpose with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with 3 parameters and no output schema, the description lacks essential context such as pagination behavior, parameter roles (num likely page size, cursor for pagination), or result format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 33% (query has description). The overall description does not explain 'num' or 'cursor' parameters, failing to compensate for the low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search' and resource 'people referenced in DIP', listing specific categories (members, ministers, witnesses), distinguishing it from sibling tools like search_activities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching people in DIP but provides no guidance on when to use this over alternatives like search_activities or search_drucksachen.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_plenarprotokolleDRead-onlyInspect
Plenary meeting transcripts.
| Name | Required | Description | Default |
|---|---|---|---|
| num | No | ||
| query | No | ||
| cursor | No | ||
| date_to | No | ||
| date_from | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds no behavioral context beyond the readOnlyHint annotation. It does not disclose pagination, filtering, or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
At 3 words, the description is severely under-specified. It offers no structure or useful information, failing to earn its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 undocumented parameters, no output schema, and multiple siblings, the description is completely inadequate. It provides no context for invoking the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% and the description does not explain the meaning or usage of any of the 5 parameters (num, query, cursor, date_to, date_from). The agent cannot infer parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Plenary meeting transcripts.' lacks a verb, making it unclear whether this tool searches, lists, or retrieves transcripts. It does not distinguish from the sibling 'get_plenarprotokoll', which likely retrieves a single transcript.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'search_drucksachen' or 'get_plenarprotokoll'. There is no mention of prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claimARead-onlyInspect
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the tool is safe. The description adds significant behavioral context beyond annotations: it specifies the data source (SEC EDGAR + XBRL), the limited domain (v1 supports company-financial claims), and the specific verdict types returned (confirmed, approximately_correct, etc.). It also explains the output includes citation and percent delta. No behavioral aspects are hidden.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: three sentences covering purpose, usage, domain, and return value. It is front-loaded with the core action and uses no filler. Every sentence adds essential information, earning its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one parameter and no output schema, the description is fully complete. It explains the purpose, usage context, supported domain, and return structure (verdict, structured form, actual value, citation, percent delta). The agent has all the information needed to select and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema (100% coverage) already describes the 'claim' parameter. The description adds key semantics: it defines the type of claims supported ('company-financial claims (revenue, net income, cash position for public US companies)') and the data source. This helps the agent craft appropriate claims. The description compensates for the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources.' It specifies the resource (factual claims) and uses a specific verb. The tool is distinguished from siblings like 'ask_pipeworx' by focusing on verification and providing structured verdicts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use the tool: 'Use when an agent needs to check whether something a user said is true...' and provides example trigger phrases. It also defines the domain (company-financial claims for US public companies via SEC EDGAR + XBRL), implicitly indicating when not to use it (for non-financial claims). The mention that it replaces 4-6 sequential calls adds efficiency guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!