EPO OPS
Server Details
EPO Open Patent Services MCP
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-epo-ops
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 16 of 16 tools scored. Lowest: 2.8/5.
Most tools have distinct purposes, but 'ask_pipeworx' overlaps with many specific tools by providing a catch-all natural language interface. While descriptions clarify the intended use (ask_pipeworx for automatic routing, specific tools for precision), an agent might still be uncertain which to choose for a given query.
Tool names consistently use snake_case and are generally descriptive, with a mix of imperative verbs ('remember', 'validate_claim') and 'get_' prefix for patent tools. Minor inconsistency exists (e.g., 'recent_changes' is a noun phrase, 'pipeworx_feedback' includes the product name), but overall the pattern is predictable.
With 16 tools, the server covers a broad domain (company research, patents, memory, feedback, tool discovery) without being overwhelming. Each tool serves a clear role, and the count feels well-scoped for the intended use case.
The tool set provides comprehensive coverage for exploring companies, patents, and economic data, along with memory and feedback utilities. Key operations (search, retrieve, compare, validate, persist) are present, and there are no obvious missing capabilities for data retrieval and analysis.
Available Tools
16 tools
ask_pipeworx (grade A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
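As a minimal illustration (not taken from the server's own documentation), an MCP client would invoke this tool with a standard JSON-RPC `tools/call` request; the question below reuses one of the examples from the description:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```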
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It explains the routing behavior (picks tool, fills arguments, returns result) but lacks details on error handling, limitations, or what happens if the question cannot be answered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficient and front-loaded with the core purpose, including examples and a list of sources. Minor verbosity but overall concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter, no output schema, and no annotations, the description adequately covers the tool's purpose, usage, and behavior with examples. It does not explain return format but that is acceptable for this type of tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter, 'question', has a schema description that simply restates the tool description. Since schema coverage is 100%, the description adds no extra semantic value beyond what the schema provides, earning a baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it answers natural-language questions by automatically routing to the right data source, listing many specific sources. It distinguishes itself from sibling tools like search_patents and compare_entities by being a general query tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use when a user asks "What is X?"...' and instructs the agent to avoid figuring out which tool to call. It does not explicitly state when not to use it, but the examples and context imply that for specific tasks, other siblings are more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (grade A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
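A sketch of the `params` block for a company comparison, reusing the example tickers from the parameter table (the surrounding `tools/call` envelope is the same as in the ask_pipeworx example above):

```json
{
  "name": "compare_entities",
  "arguments": {
    "type": "company",
    "values": ["AAPL", "MSFT"]
  }
}
```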
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Given no annotations, description adds significant behavioral detail: data sources (SEC EDGAR/XBRL for company; FAERS, FDA, trials for drug), specific metrics pulled, and return format (paired data + citation URIs). Missing error handling or response structure details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single paragraph of ~150 words, front-loaded with main purpose. Every sentence provides unique value. No redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, trigger phrases, data sources, and return format adequately for an agent to invoke correctly. Lacks explicit output structure details, but complexity is moderate and description is sufficient without output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already covers enum and array with descriptions. Description enriches by explaining what each type retrieves (e.g., 'revenue, net income, cash, long-term debt' for company) and clarifies value formats (tickers/CIKs, drug names).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Compare 2–5 companies (or drugs) side by side in one call' with specific verb and resource. It differentiates from siblings by noting it replaces 8–15 sequential calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit trigger phrases ('compare X and Y', 'X vs Y', 'how do X, Y, Z stack up', 'which is bigger') and use cases (tables/rankings). Implicitly advises against using for single entities.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (grade A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
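An illustrative `params` block; the query string is taken from the schema example, and the limit of 10 is an arbitrary value within the documented maximum of 50:

```json
{
  "name": "discover_tools",
  "arguments": {
    "query": "analyze housing market trends",
    "limit": 10
  }
}
```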
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Although no annotations exist, the description clearly indicates it is a read-only search tool that returns tool names and descriptions. It does not mention any side effects or destructive actions, which is appropriate for its function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and includes examples, but the list of domains (SEC filings, financials, etc.) is somewhat lengthy. However, every sentence adds useful context, and there is no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains returns ('top-N most relevant tools with names + descriptions'). It could mention ordering or pagination, but for a simple discovery tool, the current detail is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description adds value by clarifying that 'query' is a natural language description and 'limit' has default (20) and max (50) constraints, which the schema already specifies but the description reinforces.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Find tools') and resource ('by describing the data or task'), and distinguishes from sibling tools by positioning itself as a discovery/browsing tool, not for direct task execution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Call this FIRST when you have many tools available and want to see the option set', and provides concrete examples of when to use it (SEC filings, FDA drugs, etc.), which contrasts with sibling tools like search_patents or resolve_entity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (grade A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
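A sketch of the `params` block using the zero-padded CIK from the description; per the parameter table, a ticker such as "AAPL" would work equally well, but a bare company name would not:

```json
{
  "name": "entity_profile",
  "arguments": {
    "type": "company",
    "value": "0000320193"
  }
}
```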
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description details what data is returned (SEC filings, fundamentals, patents, news, LEI) and mentions citation URIs. Implies read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences front-loaded with purpose and usage. No unnecessary words. Every sentence provides value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking output schema, the description comprehensively covers return data, parameter usage, and context. No gaps for intended use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds contextual meaning: explains type enum constraint, value formats (ticker or zero-padded CIK), and clarifies that names are not supported with example values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get everything about a company in one call' and distinguishes from siblings by mentioning alternative approaches like calling 10+ pack tools. It also provides specific user query examples.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use (when user asks for company profile) and implies limitations (names not supported, use resolve_entity first). Could be more explicit about when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (grade A)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must cover behavior. It states deletion but does not mention irreversibility or error handling (e.g., key not found). Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, then usage context. Every word is necessary; no filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple (one param, no output schema). Description covers purpose, usage context, and sibling relationship. Missing explicit error behavior but sufficient for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter description 'Memory key to delete'. The description repeats 'by key' without adding new semantics beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Delete a previously stored memory by key' with specific verb and resource. It distinguishes from siblings by mentioning 'pair with remember and recall'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use: 'Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier.' Implies alternatives via sibling mention.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_abstract (grade C)
Abstract text for a patent.
| Name | Required | Description | Default |
|---|---|---|---|
| number | Yes | Patent number, epodoc format |
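The same single-parameter call shape applies to all four patent-retrieval tools (get_abstract, get_biblio, get_claims, get_family); this sketch of the `params` block uses the epodoc-format example number given for get_biblio below:

```json
{
  "name": "get_abstract",
  "arguments": {
    "number": "EP1234567"
  }
}
```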
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description does not disclose behavioral details such as return format, limitations, or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise, but lacks completeness. Could be improved without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, yet description does not mention what is returned (e.g., raw text). Incomplete for a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema completely describes the single parameter (number, epodoc format). Description adds no further meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb+resource: 'Abstract text for a patent.' Indicates the tool retrieves the abstract. However, does not differentiate from siblings like get_claims or get_biblio beyond the name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives. No context on prerequisites or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_biblio (grade A)
Bibliographic data for a patent — title, inventors, applicants, dates, classifications.
| Name | Required | Description | Default |
|---|---|---|---|
| number | Yes | Patent number in epodoc format, e.g. "EP1234567" |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must disclose behavioral traits. It only lists what data is returned, but does not mention read-only nature, authorization needs, or any side effects. Limited transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that front-loads the purpose and lists key fields. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with one parameter and no output schema, the description explains what data is returned (title, inventors, etc.). It could mention that it returns multiple fields or that it is a read operation, but it is largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage, so baseline is 3. Description does not add meaning beyond the schema; the parameter 'number' is sufficiently described in the schema. No extra context given.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'bibliographic data for a patent' and lists specific fields (title, inventors, applicants, dates, classifications). This distinguishes it from siblings like get_abstract, get_claims, and get_family.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. Usage is implied by the description, but no when-not-to or alternative suggestions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_claims (grade C)
Claims text for a patent.
| Name | Required | Description | Default |
|---|---|---|---|
| number | Yes | Patent number, epodoc format |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden but only states 'Claims text for a patent.' It does not disclose whether the tool requires specific authentication, whether it handles different patent number formats beyond epodoc, or what the response looks like (e.g., plain text, JSON).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, which is efficient. However, it is so terse that it sacrifices critical context; a slightly longer description clarifying the output format would be more helpful without being verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description should indicate what the tool returns (e.g., a string, a list of claims). It only says 'Claims text', leaving the agent uncertain about the structure or whether it includes dependent claims, formatting, etc.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% since the single parameter 'number' is documented as 'Patent number, epodoc format'. Baseline is 3, and the description adds no additional meaning beyond the schema, so it remains at 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Claims text for a patent.' clearly indicates the tool retrieves the claims of a patent. However, it does not specify the format (e.g., raw text, structured claims) or whether it returns all claims or a summary, which would further clarify its purpose versus siblings like get_abstract.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. For example, it does not suggest using this when claims text is needed, as opposed to get_abstract for abstract or get_biblio for bibliographic data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_family (grade C)
INPADOC family — related patent applications worldwide for the same underlying invention.
| Name | Required | Description | Default |
|---|---|---|---|
| number | Yes | Patent number, epodoc format |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the burden. It implies a read operation but lacks details on what fields are returned, pagination, or limitations (e.g., number of results). It conveys only the basic concept.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is concise and front-loads the core purpose. No redundancy, but slightly too terse for full value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With one parameter and no output schema, description adequately conveys the tool's function but lacks information about output format or behavior. Could be more complete with examples or return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Parameter 'number' is fully described in the schema. Description adds no extra meaning beyond schema. Baseline 3 applies due to 100% schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it retrieves INPADOC family, which is a well-known concept of related patent applications. The verb 'get' and resource 'family' are specific, and it distinguishes from siblings like get_abstract by focusing on family relationships.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs siblings like search_patents or get_biblio. No prerequisites or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (grade A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
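A hypothetical `params` block; the type value comes from the documented enum, while the message text is invented purely for illustration:

```json
{
  "name": "pipeworx_feedback",
  "arguments": {
    "type": "feature",
    "message": "A tool for retrieving full patent description text (beyond abstract and claims) would be useful."
  }
}
```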
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description thoroughly discloses behavioral traits: rate-limited to 5 per identifier per day, free usage, and that feedback is reviewed daily and influences the roadmap. It does not contradict any annotations (none provided).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured. It front-loads the purpose, then provides detailed usage guidelines and behavioral notes. Every sentence adds value without unnecessary filler, making it efficient for an agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (3 parameters with nested objects and enums) and no output schema, the description covers all necessary information: parameter meanings, usage contexts, rate limits, and behavioral effects. An agent can fully understand how to use the tool without needing additional context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so baseline is 3. The description adds further meaning by explaining the enum values, the optional context fields, and the message format (be specific, 1-2 sentences, 2000 chars max), providing clarity beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to tell the Pipeworx team about bugs, feature requests, data gaps, praise, or other issues. It uses a specific verb ('tell') and distinguishes itself from sibling tools like search and retrieval tools by focusing on feedback.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly specifies when to use the tool: for bug reports, feature/data gap requests, or praise. It provides clear usage instructions (describe issues in terms of Pipeworx tools/packs, avoid pasting end-user prompts) and mentions rate limits and quota-free usage, making it easy for an agent to decide when to invoke it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (grade A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behaviors. It states scoping (by identifier) and implies a read-only, idempotent operation. It lacks details on missing key behavior or rate limits, but the tool is simple.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences with no redundancy. Action is front-loaded, and every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter retrieval tool, the description covers purpose, usage, scoping, and alternative actions (remember/forget). No output schema needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description. The tool description adds value by providing usage examples and scoping context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves a saved value by key or lists all keys, with concrete examples (ticker, address, notes). It distinguishes itself from siblings 'remember' (save) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes when to use (look up stored context without re-deriving) and how to list keys by omitting the argument. It pairs with remember/forget, but does not explicitly state when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (grade A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
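A sketch of the `params` block for routine monitoring; the "30d" window and ticker format follow the recommendations in the parameter table:

```json
{
  "name": "recent_changes",
  "arguments": {
    "type": "company",
    "value": "AAPL",
    "since": "30d"
  }
}
```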
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses parallel fan-out to three sources, return structure (structured changes, count, URIs), and since parameter formats. It lacks details on limitations, auth, or error conditions, but is fairly transparent for a simple tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured paragraph. It front-loads the purpose and includes concise examples, with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters and no output schema, the description adequately covers functionality, return values, and usage. It does not mention error handling or limitations, but overall it is complete for its complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value by explaining type is limited to 'company', since accepts ISO or relative formats with examples, and value is ticker or CIK. This complements the schema well without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'What's new with a company in the last N days/months?' It specifies the verb 'fans out' and the resources (SEC EDGAR, GDELT, USPTO), making it distinct from siblings like entity_profile or search_patents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage examples ('when a user asks...') and tips for the since parameter. However, it does not explicitly state when not to use this tool or mention alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (grade A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
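An illustrative `params` block; the key is one of the schema's example keys, and the stored value is a made-up ticker. The same key would later be passed to recall to retrieve the value, or to forget to delete it:

```json
{
  "name": "remember",
  "arguments": {
    "key": "target_ticker",
    "value": "AAPL"
  }
}
```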
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses scoping by identifier, persistence differences (authenticated vs anonymous, 24-hour retention), and pairing with recall/forget. This is sufficient for a simple save operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each serving a clear purpose: main action, when to use, scoping/retention, and pairing. No fluff, front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 required params, no output schema), the description covers purpose, usage context, scoping, retention, and relationship to siblings. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, but description adds value with usage examples for key (e.g., 'subject_property', 'target_ticker') and explains value as 'any text — findings, addresses, preferences, notes'. This goes beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'Save', resource 'data', and context 'across this conversation or across sessions'. It distinguishes from sibling tools 'recall' and 'forget' by explicitly naming them as partners.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit context: 'Use when you discover something worth carrying forward... so you don't have to look it up again.' Also references paired tools, though no explicit when-not-to-use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (grade A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
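A sketch of the `params` block for a drug lookup, using the example drug name from the parameter table:

```json
{
  "name": "resolve_entity",
  "arguments": {
    "type": "drug",
    "value": "ozempic"
  }
}
```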
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full behavioral disclosure burden. It mentions that the tool returns IDs plus citation URIs, but does not cover error behaviors, authentication requirements, rate limits, or what happens if the entity is not found. This is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and includes examples and usage guidance. It is efficient but could be slightly tightened; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately explains return values (IDs plus citation URIs). It covers both company and drug cases, but could mention handling of ambiguous or missing inputs for higher completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds significant value by explaining the 'type' and 'value' parameters with concrete examples (e.g., 'Apple' → AAPL, 'Ozempic' → RxCUI), clarifying acceptable formats beyond the schema's enum and string descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: looking up canonical identifiers for companies or drugs. It lists specific identifier types (CIK, ticker, RxCUI, LEI) and provides concrete examples, distinguishing it from siblings by noting it replaces 2–3 lookup calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use the tool ('when a user mentions a name and you need the CIK...') and advises using it before other tools that need official identifiers. It does not explicitly state when not to use it, but the guidance is clear and actionable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_patents (grade A)
Search published patents using CQL (EPO Common Query Language). Returns a paginated list of publication numbers matching the query. Example queries: "ta=hydrogen", "in=Tesla", "pa=apple", "txt=neural network AND pd>=2020".
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | EPO CQL — fields: ta (title-abstract), ti (title), ab (abstract), txt (any text), in (inventor), pa (applicant), cl (classification), pn (publication number), ap (application number), pr (priority), pd (publication date), ad (application date). | |
| range | No | Result range "start-end" (1-indexed). Max 100 per page. Default "1-25". |
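An illustrative `params` block combining one of the example CQL queries with the default result range:

```json
{
  "name": "search_patents",
  "arguments": {
    "query": "txt=neural network AND pd>=2020",
    "range": "1-25"
  }
}
```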
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It only describes the basic operation as a search returning a list, but fails to disclose any behavioral traits like read-only nature, authentication needs, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences plus examples, front-loaded with purpose, and every sentence is informative without any wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only 2 parameters and no output schema, the description adequately covers what the tool does and how to use it, though it could mention prerequisites or the response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by providing example CQL queries and clarifying the range parameter format (1-indexed, max 100 per page), which aids understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search', the resource 'published patents', and the method 'CQL'. It effectively distinguishes from sibling tools like get_abstract or get_claims which retrieve specific documents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for searching patents with CQL and returns a paginated list, but does not explicitly state when not to use this tool or directly mention alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (grade A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
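A sketch of the `params` block, reusing the example claim from the schema description:

```json
{
  "name": "validate_claim",
  "arguments": {
    "claim": "Apple's FY2024 revenue was $400 billion"
  }
}
```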
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the return types (verdict values: confirmed, approximately_correct, refuted, inconclusive, unsupported), extracted structured form, actual value with citation, and percent delta. It also notes that the tool replaces 4-6 sequential calls, indicating efficiency benefits. No destructive or mutating behavior is implied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized at about 4-5 sentences, front-loading the main purpose and usage guidance. Each sentence adds value: purpose, usage triggers, supported domain, and output summary. There is no fluff, and the structure is logical.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description adequately explains the return value (verdict, extracted data, citation, delta). It covers the domain (company-financial claims) and mentions the capability of replacing multiple sequential calls. However, it does not address potential error conditions or rate limits, which would be helpful for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'claim' has 100% schema description coverage, but the description adds value by providing illustrative examples like 'Apple's FY2024 revenue was $400 billion' and clarifying that it should be a natural-language factual claim. This helps the agent understand the expected format beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: fact-checking, verifying, validating, or confirming/refuting a natural-language factual claim against authoritative sources. It provides specific examples of trigger phrases like 'Is it true that…?' and distinguishes the tool from siblings such as 'ask_pipeworx' or 'compare_entities' by emphasizing its unique role in validation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool: when an agent needs to check the truth of a user's statement and provides trigger phrases. It also clarifies the supported domain (company-financial claims via SEC EDGAR + XBRL) and notes the version scope, implying limitations for non-financial claims. However, it does not explicitly name alternative tools or when not to use it beyond the domain scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming your connector lets you:
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
A connector's status is marked unhealthy when Glama is unable to connect to the server successfully. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!