OpenReview
Server Details
OpenReview MCP — ML conference submissions and reviews (API v2)
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: pipeworx-io/mcp-openreview
- GitHub Stars: 0
- Server Listing: mcp-openreview
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 17 of 17 tools scored. Lowest: 2.7/5.
Each tool has a clearly distinct purpose: routing (ask_pipeworx), comparison (compare_entities), discovery (discover_tools), profiling (entity_profile), recent changes (recent_changes), entity resolution (resolve_entity), claim validation (validate_claim), memory operations (remember/recall/forget), and OpenReview queries (get_note, get_paper, etc.). There is no functional overlap; even the broad ask_pipeworx is a router rather than a direct competitor.
Most tools follow a verb_noun pattern (e.g., compare_entities, resolve_entity, get_paper), but several deviate: entity_profile and recent_changes are noun phrases, and pipeworx_feedback is a noun_noun compound. The memory verbs (remember, recall, forget) are imperative, which is consistent among themselves but differs from the rest.
With 17 tools, the number is slightly above the ideal range but still manageable. The server covers two distinct domains (OpenReview and Pipeworx data aggregation) and each tool serves a specific function. The count could be reduced by merging a few (e.g., entity_profile and recent_changes), but it is not excessive.
For the OpenReview domain, the tool set provides read access (get, list, search) but lacks create/update/delete operations, which may be intentional. For the Pipeworx domain, it covers a wide range: routing, comparison, entity resolution, claim validation, recent changes, and feedback. Minor gaps include direct access to raw SEC filings or FRED series, but the routing tool can handle those.
Available Tools
17 tools
ask_pipeworx (grade A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
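Over MCP, a client invokes this tool through the standard tools/call request carrying the tool name and an arguments object. A minimal sketch, reusing one of the example questions from the description above:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```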
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully convey behavioral traits. It explains the routing behavior and supported sources but does not disclose error handling, permission requirements, or whether the operation is read-only. This is a moderate gap for a tool that could access sensitive data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured: first sentence states the core purpose, second gives usage cues, third lists sources, and fourth provides examples. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter and no output schema, the description thoroughly explains the tool's role as a routing layer. It could briefly mention the output format (e.g., raw answer or structured data), but it already says 'returns the result', which is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one parameter 'question' exists with schema coverage 100%. The description adds significant context by listing example questions and types of queries, enhancing understanding beyond the schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it answers natural-language questions by routing to the appropriate data source. It lists multiple domains (SEC EDGAR, FRED, etc.) and provides examples, distinguishing it from sibling tools that are specific operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use ('Use when a user asks...') and when not to ('you don't want to figure out which Pipeworx pack/tool to call'). It gives concrete examples of appropriate queries, providing clear guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (grade A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
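A sketch of the arguments object for a two-company comparison, using the tickers from the parameter table; the remaining examples below follow the same pattern and omit the JSON-RPC envelope:

```json
{
  "type": "company",
  "values": ["AAPL", "MSFT"]
}
```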
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return data (paired data + citation URIs) and source (SEC EDGAR/XBRL, FAERS). Does not mention rate limits, auth needs, or side effects, but as a read operation, it's adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficient single paragraph with logical flow: core purpose, usage triggers, per-type details, and value proposition. No redundant sentences; every sentence contributes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains return values (paired data + citation URIs) and data sources. With only two simple parameters, this fully equips an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with basic descriptions. The description adds meaningful detail: for type 'company' it lists financial metrics and sources; for 'drug' it lists adverse-event counts, approvals, trials. This goes beyond the enum labels.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it compares 2-5 companies or drugs side by side, using specific verbs like 'compare' and highlighting use cases such as tables/rankings. It distinguishes from alternatives by noting it replaces 8-15 sequential agent calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly provides when-to-use triggers like 'compare X and Y' or 'X vs Y', and specifies what each type pulls. Lacks explicit when-not-to-use or alternative tool names, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (grade A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
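A plausible arguments payload, reusing one of the example queries from the schema; the limit of 10 is illustrative and must stay within the documented 1-50 range:

```json
{
  "query": "look up FDA drug approvals",
  "limit": 10
}
```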
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. The description discloses the output format (names + descriptions), default limit (20), and max limit (50). It explains the input parameters adequately. Missing details about edge cases or rate limits, but for a discovery tool, the behavior is sufficiently transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph but is well front-loaded with the main purpose. Every sentence adds value, including a list of domains and a strong usage hint. It is concise enough but could be slightly more structured (e.g., bullet points).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers the return value (names+descriptions) and explains input parameters and usage. It addresses the main use case but lacks discussion of edge cases or error handling. Overall sufficient for the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description adds value by specifying the default and max for 'limit' and emphasizing that 'query' is a natural language description. This enhances understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Find tools by describing the data or task.' It lists specific domains (SEC filings, financials, etc.) and emphasizes that it returns the top-N most relevant tools. It distinguishes from sibling tools by recommending calling this FIRST to see the option set.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use when you need to browse, search, look up, or discover what tools exist for...' and 'Call this FIRST...' providing clear context. However, it does not explicitly state when not to use it or list alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (grade A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
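A sketch of a profile request for Apple by ticker, per the parameter table (only "company" is supported for type today):

```json
{
  "type": "company",
  "value": "AAPL"
}
```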
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It details the return data (SEC filings, fundamentals, patents, news, LEI) and citation format. While it could mention call limits or side effects, the core behavioral traits are well explained for a read-only information retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences: the first states a compelling purpose, and the second provides usage context, return contents, and format. Every sentence serves a purpose, and the most critical information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool that aggregates data from multiple sources, the description lists all key data types returned. Without an output schema, this is sufficient for the AI to understand what to expect. Missing details like pagination or error handling are minor given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by specifying that 'type' is currently limited to 'company' and that 'value' accepts ticker or CIK but not names, with a clear redirection to 'resolve_entity'. This goes beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies a clear verb+resource ('Get everything about a company in one call') and distinguishes itself from siblings by explicitly noting that it eliminates the need for multiple pack tools and that name resolution requires the sibling 'resolve_entity'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists use cases ('tell me about X', 'give me a profile', etc.) and provides a direct exclusion: 'Names not supported — use resolve_entity first'. This clearly guides the AI on when and how to invoke the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (grade A)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
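An illustrative delete call; the key name is borrowed from the remember tool's schema examples and is purely hypothetical:

```json
{
  "key": "target_ticker"
}
```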
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description clarifies it's a delete operation. Could mention whether missing keys cause errors, but overall sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise, front-loaded sentences with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input (one required string) and no output, the description fully equips the agent to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers all parameters (100% coverage) with description 'Memory key to delete'. The tool description adds no further semantic value beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Explicitly states the tool deletes a stored memory by key. Distinguishes from siblings (remember, recall) by specifying deletion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear use cases: when context is stale, task done, or clearing sensitive data. Mentions pairing with remember and recall for context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_note (grade B)
Single note — paper, review, comment, decision, etc.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | OpenReview note id (e.g. "abc123XYZ") | |
| details | No | Comma-sep extras: replies, original, revisions, edges |
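A sketch using the schema's example note id and two of the documented extras; note that details is a comma-separated string, not an array:

```json
{
  "id": "abc123XYZ",
  "details": "replies,revisions"
}
```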
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full burden. It discloses no behavioral traits such as read-only nature, authentication requirements, rate limits, or error responses. The description is too sparse to inform safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single concise sentence, front-loaded with 'Single note' for quick understanding. No wasted words, though slightly under-specified for full usability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Minimal description for a simple fetch tool with 2 parameters and no output schema. Lacks details on return format, error handling, or examples. Adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 100% of parameter descriptions, and the tool description adds little beyond indicating that a note can be of various types (paper, review, comment, decision). Baselined at 3 since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a single note, listing possible types (paper, review, comment, decision). It effectively distinguishes from sibling tools like get_paper (specific paper retrieval) and search_notes (listing notes).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The description does not mention when not to use it or provide context for selection, leaving the agent to infer from the tool name and siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_paper (grade A)
Paper + all child notes (reviews, rebuttal, decision, metareview). Pass the forum id (= paper note id).
| Name | Required | Description | Default |
|---|---|---|---|
| forum_id | Yes | Forum (paper) note id |
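An illustrative call; the forum id shown is a placeholder in the same format as the note-id example above:

```json
{
  "forum_id": "abc123XYZ"
}
```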
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses that the tool retrieves a paper plus its child notes, a read operation. No mention of auth or rate limits, but for a simple retrieval the behavior is clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences convey essential information without any fluff, front-loading the return structure and the required parameter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks details about return format, error cases (e.g., paper not found), or how it differs from get_note. For a single-parameter tool, more context would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage of forum_id is 100% and description only restates the schema's hint that it is the paper note id, adding no extra semantic value beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns a paper and all child notes (reviews, rebuttal, decision, metareview), distinguishing it from sibling get_note which likely returns a single note.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Tells user to pass the forum id but does not specify when to use this tool versus alternatives like get_note or search_notes. No explicit when-not or use case guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_venue (grade B)
Venue (group) metadata by group id.
| Name | Required | Description | Default |
|---|---|---|---|
| group_id | Yes | Group id (e.g. "ICLR.cc/2024/Conference") |
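A minimal call using the group id example from the schema:

```json
{
  "group_id": "ICLR.cc/2024/Conference"
}
```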
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits. It mentions 'metadata' but does not specify what it includes or any side effects (likely none). Limited transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short and to the point, but it is a phrase lacking a complete sentence structure. Still efficient given the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema is provided, and the description does not explain what 'metadata' consists of. Given the single parameter, the description could offer more details about return values or usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the schema includes an example for group_id. The description adds 'metadata' but not additional semantics beyond what the schema provides. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves venue metadata by group id, using a specific verb and resource. It distinguishes itself from sibling 'list_venues' which likely returns a list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'list_venues', nor any context about prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_submissions (grade C)
Papers submitted to a venue. Pass the venue group id as venue_id.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | cdate (creation date, default desc), tmdate (modify date), or number | |
| limit | No | 1-1000 (default 25) | |
| offset | No | 0-based offset | |
| venue_id | Yes | Venue group id (e.g. "ICLR.cc/2024/Conference") |
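A sketch of a paginated listing request; the venue id comes from the schema example, and the sort, limit, and offset values are illustrative:

```json
{
  "venue_id": "ICLR.cc/2024/Conference",
  "sort": "cdate",
  "limit": 25,
  "offset": 0
}
```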
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must disclose behavioral traits. It does not mention default sort order, pagination, or error behavior. The description only restates what the input schema already indicates.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Very concise single sentence, but lacks structure such as explicit sections or usage examples. Front-loaded with purpose, but could benefit from more details without adding fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks completeness given there is no output schema. Does not describe return format, default pagination, or any constraints. For a tool with 4 parameters and no annotations, the description is insufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds no additional parameter semantics beyond noting venue_id is a venue group ID, which is already in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states that the tool lists papers submitted to a venue, and identifies the required venue_id parameter. However, it does not differentiate from sibling tools like search_notes or get_paper, but the intent is clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus alternatives such as search_notes or get_venue. Only instructs to pass venue_id, which is a parameter requirement, not usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_venues (grade A)
List venue groups (conferences, workshops). Use the returned group id to query submissions.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | 1-1000 (default 50) | |
| query | No | Free-text filter on group id / name | |
| offset | No | 0-based offset |
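An illustrative filtered listing; the query string "ICLR" is hypothetical:

```json
{
  "query": "ICLR",
  "limit": 10
}
```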
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations; description implies read-only but does not confirm. Lacks mention of pagination or result limits beyond parameter descriptions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, first front-loads purpose, second adds usage hint. No redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks return value details (e.g., fields per venue) but with 3 parameters and simple list, mostly complete. Could mention pagination via offset.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers all parameters with descriptions (100% coverage). Description adds no extra meaning beyond what schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'List', resource 'venue groups', and scope 'conferences, workshops'. Differentiates from sibling 'get_venue' and 'list_submissions' by specifying group-level listing and linking to submissions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage pattern: list groups then use id for submissions. No explicit when-not or alternatives, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (grade A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
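A sketch of a data_gap report using only the two required parameters; the message text is entirely invented for illustration:

```json
{
  "type": "data_gap",
  "message": "The catalog has no county-level housing permit data to pair with the ATTOM real estate pack."
}
```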
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses rate limits (5/identifier/day) and that it doesn't consume tool-call quota. Could mention if any response is returned after submission, but the provided info is sufficient for a feedback tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single paragraph with all key info front-loaded: purpose, usage, limitations. Every sentence serves a purpose. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple feedback tool with no output schema, the description covers what, when, and how. Could mention if the user gets an acknowledgment, but otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% but the description adds context: explains the enum values for 'type' beyond schema descriptions, and gives guidance on writing 'message' (be specific, 1-2 sentences). Adds value without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States clearly that the tool is for telling the Pipeworx team about bugs, missing features, or praise. Explicitly distinguishes from sibling tools by specifying when to use it (wrong data, missing tool, praise).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use scenarios (bug, feature/data_gap, praise) and what not to do (don't paste end-user's prompt). Mentions rate limits, free usage, and that the team reads digests daily, which guides the agent on impact.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (grade A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
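A sketch of retrieving a single stored value; the key name is hypothetical. Passing an empty arguments object (omitting key) lists all saved keys instead:

```json
{
  "key": "target_ticker"
}
```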
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses scoping to the agent's identifier and mentions dual behavior (single key retrieval vs. listing all keys). Slightly lacks explicit statement that it is read-only, but adequate for a retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four concise sentences front-loading the core action and use cases, with zero wasted words. Effectively organized for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input (one optional param) and no output schema, the description covers purpose, usage patterns, examples, scoping, and sibling relationships. No obvious gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description for the sole parameter; the description reinforces the schema without adding significant new semantic detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool retrieves values saved via 'remember', with an alternate mode to list all keys when argument omitted. Distinguishes from sibling tools 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides concrete use cases (look up target ticker, address, prior notes) and contrasts with 'remember' and 'forget' tools, giving clear context for when to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (grade A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
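A sketch of a 30-day monitoring call for Apple, combining the ticker and relative-window examples from the description:

```json
{
  "type": "company",
  "value": "AAPL",
  "since": "30d"
}
```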
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It explains the tool fans out to SEC EDGAR, GDELT, and USPTO in parallel, and details the return structure (structured changes, count, URIs). This provides sufficient behavioral context for an agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single block but is well-organized: purpose, usage, behavior, parameter details, return format. It is front-loaded with the most important information. Slightly dense but efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 3 parameters all covered, no output schema. The description explains return values (structured changes, count, URIs). It lacks details on pagination or limits but covers the core functionality well given the complexity of parallel fan-out.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline 3. The description adds value by explaining the 'since' parameter accepts ISO dates or relative shorthand with concrete examples ('7d', '30d', '3m', '1y'), recommends a default ('30d or 1m for typical monitoring'), and clarifies that 'value' can be a ticker or CIK. This goes beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: finding recent changes for a company. It uses a specific verb ('What's new') and identifies the resource ('company'). The tool is distinct from siblings like entity_profile or search_notes, as it fans out to multiple external sources for recent updates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage examples ('when a user asks...') and common query patterns. It implies the tool is for monitoring changes but does not include when-not-to-use or explicit alternatives, though the clear examples compensate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (grade A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
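An illustrative save using the key example from the schema; the stored value is hypothetical:

```json
{
  "key": "target_ticker",
  "value": "AAPL"
}
```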
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses persistence behavior: scope by identifier, authenticated vs anonymous sessions, 24h retention for anonymous. No annotations provided, so description carries full burden effectively.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences with front-loaded purpose. No filler, every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but store operations typically don't need extensive return docs. Could mention overwrite behavior, but overall complete given simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already provides 100% coverage with descriptions. Description adds examples of key patterns and notes value is any text, further clarifying usage context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Save data the agent will need to reuse later' and provides concrete examples (ticker, address, preference). Distinguishes from sibling tools recall and forget.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when you discover something worth carrying forward' and tells agent to pair with recall and forget.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (grade A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
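A sketch of a drug lookup taken from the description's Ozempic example:

```json
{
  "type": "drug",
  "value": "ozempic"
}
```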
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full burden. It discloses that it returns IDs plus citation URIs and mentions the ID systems. While it doesn't cover error handling or rate limits, it adequately describes the core behavior for a read-only lookup.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loaded with the main action, and every sentence provides useful information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers return types (IDs and citation URIs) and ID systems. It is largely complete for a simple lookup tool, though it could mention behavior for unrecognized inputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description enriches with concrete examples (e.g., 'Apple → AAPL / CIK 0000320193') and clarifies acceptable value formats, adding value beyond the schema's brief descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: lookup canonical/official identifiers for companies or drugs. It provides specific examples and ID systems (CIK, ticker, RxCUI, LEI), distinguishing it from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use ('when a user mentions a name and you need the CIK...') and ordering ('Use this BEFORE calling other tools'), and states it replaces multiple lookups, giving clear usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_notes (grade C)
Full-text search across notes.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | 1-1000 (default 25) | |
| query | Yes | Free-text query | |
| offset | No | 0-based offset | |
| signature | No | Filter by signature group (e.g. author profile id) | |
| content_field | No | Restrict to a content field (e.g. "title", "abstract") |
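An illustrative title-restricted search; the query text is hypothetical:

```json
{
  "query": "diffusion models",
  "content_field": "title",
  "limit": 25
}
```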
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It only says 'search,' implying read-only, but fails to mention pagination (limit/offset), performance, required permissions, or any side effects. The description is insufficient for an agent to understand the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence, but it is under-specified and does not convey necessary context. Every sentence should add value; this one is too vague to be useful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 5 parameters, no output schema, and no annotations, the description should be more complete. It does not explain how parameters work together or what the return format looks like, leaving significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all 5 parameters. The description adds no additional meaning beyond the schema, so a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Full-text search across notes,' which clearly identifies the action (search) and resource (notes). It is specific enough to distinguish from 'get_note' but does not explicitly differentiate from 'ask_pipeworx' or other potential search tools, so a 4 is appropriate.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No usage guidance is provided. There is no information on when to use this tool over alternatives like 'get_note' or 'ask_pipeworx,' nor any prerequisites or typical scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (grade A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
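A minimal call using the example claim from the schema:

```json
{
  "claim": "Apple's FY2024 revenue was $400 billion"
}
```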
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavior: it returns a verdict (confirmed, approximately_correct, etc.), actual value with citation, and percent delta. It also mentions underlying data sources and that it replaces multiple sequential calls, giving insight into efficiency and limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is comprehensive but not overly long. It is front-loaded with the main purpose, then includes examples, scope, and return format. Each sentence contributes meaningful information, though minor trimming could improve conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (one parameter, no output schema) and no annotations, the description covers essential aspects: supported claims, verdict types, output details, and efficiency benefits. It lacks information on error handling or rate limits, but is otherwise complete for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single 'claim' parameter is described in the schema, and the description adds value by providing concrete examples and specifying the type of claims (e.g., revenue, net income). This enhances understanding beyond the basic schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly defines the tool as fact-checking natural-language claims against authoritative sources, specifying it handles company-financial claims via SEC EDGAR + XBRL. This distinguishes it from sibling tools like ask_pipeworx and compare_entities, which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use the tool (checking truth of user statements) with examples of triggering queries. It also defines the supported scope (company-financial claims). While it doesn't provide negative usage cases or explicit alternatives, the context is sufficiently clear for an agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.