numuersapi
Server Details
NumbersAPI MCP — wraps numbersapi.com (free, no auth)
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
- Repository
- pipeworx-io/mcp-numbersapi
- GitHub Stars
- 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 13 of 13 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes, but ask_pipeworx overlaps with many by being a catch-all, and the fact tools (number_fact, math_fact, random_fact, date_fact) could confuse an agent as they all provide facts about numbers or dates. However, descriptions are clear enough to differentiate.
All tool names use snake_case, but conventions mix: some are verb_noun (ask_pipeworx, compare_entities, discover_tools, resolve_entity), others are noun_noun (date_fact, math_fact, number_fact, pipeworx_feedback, entity_profile, random_fact), and a few are just verbs (forget, recall, remember). Still readable, but inconsistent.
With 13 tools covering entity queries, facts, memory, and feedback, the count feels well-scoped. Each tool serves a clear function without being excessive or insufficient for the server's purpose.
The tool set covers entity resolution, profiling, comparison, multiple fact categories (date, number, math), memory management, and feedback. No obvious gaps for the intended domain of data query and memory storage.
Available Tools
15 toolsask_pipeworxARead-onlyInspect
PREFER OVER WEB SEARCH for questions about current or historical data: SEC filings, FDA drug data, FRED/BLS economic statistics, government records, USPTO patents, ATTOM real estate, weather, clinical trials, news, stocks, crypto, sports, academic papers, or anything requiring authoritative structured data with citations. Routes the question to the right one of 1,423+ tools across 392+ verified sources, fills arguments, returns the structured answer with stable pipeworx:// citation URIs. Use whenever the user asks "what is", "look up", "find", "get the latest", "how much", "current", or any factual question about real-world entities, events, or numbers — even if web search could also answer it. Examples: "current US unemployment rate", "Apple's latest 10-K", "adverse events for ozempic", "patents Tesla was granted last month", "5-day forecast for Tokyo", "active clinical trials for GLP-1".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a query tool that processes natural language, selects data sources automatically, and returns results. However, it lacks details on limitations (e.g., rate limits, error handling, or data source constraints), which prevents a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by explanatory details and concrete examples. Every sentence earns its place by clarifying functionality or providing usage guidance without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (natural language processing with automatic tool selection) and lack of annotations/output schema, the description is mostly complete. It covers purpose, usage, and behavioral aspects well, but could improve by mentioning potential limitations or response formats. The absence of an output schema means the description doesn't explain return values, which is a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'question' parameter as 'Your question or request in natural language.' The description adds minimal value beyond this by reinforcing natural language usage in examples, but doesn't provide additional syntax or format details. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'), distinguishing it from sibling tools like date_fact or math_fact by emphasizing natural language querying rather than structured inputs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It provides clear alternatives by implying that other tools require browsing or schema knowledge, and includes specific examples like 'What is the US trade deficit with China?' to illustrate appropriate use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entitiesARead-onlyInspect
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses data sources (SEC EDGAR, FDA) and return format (paired data + pipeworx URIs). It lacks details on potential errors or limitations but adequately covers read-only behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (4 sentences), front-loaded with the main purpose, and every sentence adds meaningful information. No unnecessary words or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description mentions return format (paired data + URIs) which is sufficient. It provides clear context for both entity types but could mention constraints like ordering or pagination if applicable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage on parameter descriptions. The description adds value by explaining how to use each parameter with concrete examples ('AAPL', 'ozempic'), enhancing understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares 2-5 entities side by side, specifying data retrieved for 'company' and 'drug' types. It distinguishes from sibling tools by focusing on comparison, not individual fact retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use (for side-by-side comparison, replacing multiple sequential calls) but does not explicitly mention when not to use or provide alternatives among sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
date_factCRead-onlyInspect
Get historical facts about a calendar date (e.g., "3/14" for March 14). Returns the fact and date.
| Name | Required | Description | Default |
|---|---|---|---|
| day | Yes | Day number (1–31) | |
| month | Yes | Month number (1–12) |
Output Schema
| Name | Required | Description |
|---|---|---|
| date | Yes | The calendar date in M/D format |
| fact | Yes | Historical fact text about the date |
| type | Yes | Type of fact (date, etc.) |
| found | Yes | Whether a fact was found for the date |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It only states what the tool does ('Get an interesting fact') without mentioning any behavioral traits like rate limits, error handling, or what happens with invalid dates (e.g., February 30). This leaves significant gaps in understanding how the tool behaves beyond its basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is front-loaded and appropriately sized for a simple tool, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema, no annotations), the description is minimal but adequate for basic understanding. However, it lacks completeness in areas like behavioral context (e.g., error cases, fact sources) and usage guidelines, which could help an agent use it more effectively in varied scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with clear descriptions for both parameters ('day' and 'month'), so the schema does the heavy lifting. The description adds no additional meaning beyond implying that these parameters specify the date for the fact, which aligns with the schema. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('interesting fact about a specific calendar date'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'math_fact' or 'number_fact' beyond the calendar date focus, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'random_fact' or other sibling tools. It lacks context about scenarios where this tool is preferred, such as for date-specific queries rather than general random facts, leaving the agent with minimal usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_toolsARead-onlyInspect
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it's a search operation that returns relevant tools with names and descriptions, and it should be called first in specific contexts. However, it doesn't mention potential limitations like rate limits, authentication needs, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each serve distinct purposes: the first explains what the tool does, the second provides critical usage guidance. There's no wasted language, and the most important information (the search functionality) is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search functionality with 2 parameters) and lack of annotations/output schema, the description provides good context about purpose and usage. However, it doesn't describe what format the results come in or any behavioral constraints, leaving some gaps for a tool that returns search results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds minimal value beyond what the schema provides, mentioning only that you search 'by describing what you need' which aligns with the query parameter description. No additional parameter semantics are provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resource ('tool catalog'), distinguishing it from sibling tools that provide facts rather than search capabilities. It explicitly identifies what the tool does beyond just restating the name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Call this FIRST when you have 500+ tools available and need to find the right ones for your task'), including a specific condition (500+ tools) and a clear alternative scenario (when you need to find tools). It directly addresses when this tool should be prioritized.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profileARead-onlyInspect
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully explain behavior. It details the data returned and the output format (pipeworx:// URIs). It does not mention rate limits, authentication, or error handling, but the description is sufficiently transparent for an agent to understand the tool's read-only nature and scope.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loading the main purpose and then listing content. Every sentence adds value, with no redundancy. It efficiently communicates the tool's power and limitations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (aggregating 15–30 types of data), the description covers inputs and output format well. However, without an output schema, an agent might benefit from a brief example of the response structure or mention of how to use the citation URIs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds value by reinforcing accepted inputs (ticker or CIK) and the requirement to use resolve_entity for names. This extra guidance helps the agent avoid errors, going beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns a full profile of an entity across Pipeworx packs, listing specific data sources (SEC filings, XBRL, USPTO, USAspending, GDELT, LEI). It distinguishes from siblings like compare_entities or resolve_entity by being an aggregate tool, replacing 15–30 sequential calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises to use resolve_entity for name resolution, indicating when not to use this tool. It also notes that only 'company' is supported, hinting at limitations. However, it does not exhaustively list all alternatives or when to choose other siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forgetCDestructiveInspect
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool deletes a memory, implying a destructive mutation, but doesn't cover critical aspects like whether deletion is permanent, requires specific permissions, has side effects, or what happens on success/failure. This is a significant gap for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It's front-loaded with the core action and resource, making it highly concise and well-structured for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description is incomplete. It lacks details on behavioral traits (e.g., permanence, error handling), usage context relative to siblings, and expected outcomes, leaving gaps that could hinder effective agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'key' documented as 'Memory key to delete'. The description adds no additional meaning beyond this, such as key format or examples. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and resource ('a stored memory by key'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'recall' or 'remember', which likely involve memory retrieval or storage, so it misses full sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., that a memory must exist to be deleted), exclusions, or refer to sibling tools like 'recall' for retrieval or 'remember' for storage, leaving usage context unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
math_factBRead-onlyInspect
Get mathematical properties about a number (e.g., 16, 256)—prime factors, sequences, mathematical significance. Returns math-focused trivia.
| Name | Required | Description | Default |
|---|---|---|---|
| number | Yes | The number to get a mathematical fact about (e.g., 1729) |
Output Schema
| Name | Required | Description |
|---|---|---|
| fact | Yes | Mathematical property or trivia about the number |
| type | Yes | Type of fact (math, etc.) |
| found | Yes | Whether a mathematical fact was found |
| number | Yes | The number the mathematical fact is about |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'Get[s] a mathematical fact' but does not describe any behavioral traits such as rate limits, error handling, or what constitutes a 'mathematical fact' (e.g., trivia, properties). This leaves significant gaps in understanding how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that is front-loaded and wastes no words. It efficiently conveys the core purpose without unnecessary details, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema, no annotations), the description is somewhat complete but lacks depth. It covers the basic purpose but does not address behavioral aspects or usage context, which are important for an agent to use it effectively. This results in an adequate but minimal description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds minimal meaning beyond the input schema, which has 100% coverage and fully documents the 'number' parameter. The description mentions 'a specific number' and provides an example (1729), but this does not significantly enhance the schema's information. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('mathematical fact about a specific number'), making it easy to understand what the tool does. However, it does not explicitly distinguish this tool from its siblings (date_fact, number_fact, random_fact), which all seem to provide different types of facts, so it misses full differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its siblings (date_fact, number_fact, random_fact). It implies usage for mathematical facts about numbers but does not specify alternatives, exclusions, or context for selection among similar tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
number_factCRead-onlyInspect
Get trivia facts about a specific number (e.g., 42, 1729). Returns the fact text and number.
| Name | Required | Description | Default |
|---|---|---|---|
| number | Yes | The number to get a fact about (e.g., 42) |
Output Schema
| Name | Required | Description |
|---|---|---|
| fact | Yes | Trivia fact text about the number |
| type | Yes | Type of fact (trivia, math, date, etc.) |
| found | Yes | Whether a fact was found for the number |
| number | Yes | The number the fact is about |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but only states what the tool does, not how it behaves. It doesn't mention whether this is a read-only operation, what kind of API it calls, potential rate limits, error conditions, or what format the fact will be returned in. The description adds minimal behavioral context beyond the basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise - a single sentence that clearly communicates the tool's purpose with zero wasted words. It's front-loaded with the essential information and earns its place efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description is insufficiently complete. It doesn't describe what format the fact will be returned in, potential limitations (e.g., number ranges supported), error conditions, or how this differs from sibling tools. The description leaves too many contextual questions unanswered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'number' clearly documented in the schema. The description doesn't add any additional parameter semantics beyond what's already in the schema, so it meets the baseline score of 3 for adequate coverage when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('interesting trivia fact about a specific number'), making it immediately understandable. However, it doesn't explicitly distinguish this tool from its sibling tools (date_fact, math_fact, random_fact), which all seem to provide different types of facts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus its sibling tools. While it's clear this tool provides number facts, there's no indication of when to choose number_fact over math_fact or random_fact, nor any mention of prerequisites or constraints for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedbackAInspect
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses the rate limit and content restrictions (no end-user prompt verbatim). No annotations exist, so the description carries the full burden, and it adequately covers key behavioral constraints for a feedback tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-load the purpose and then cover usage details. Every word earns its place; no filler or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (3 parameters, nested objects, no output schema), the description covers all necessary context: input types, content guidelines, rate limits, and use cases. Complete for effective agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds meaningful context beyond the schema, particularly for the 'message' parameter: instructs to describe what was tried in Pipeworx terms and to avoid verbatim prompts. Schema coverage is 100%, so baseline is 3; the extra guidance justifies a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states the tool's action ('Send feedback') and resource ('Pipeworx team'), and lists specific use cases (bug reports, feature requests, missing data, praise). This clearly distinguishes it from sibling tools like 'ask_pipeworx' or 'discover_tools'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear when-to-use guidance and explicit exclusion of end-user prompt verbatim, plus rate limit. However, it does not directly contrast with sibling tools, leaving some ambiguity for agents.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
random_factBRead-onlyInspect
Get a random trivia fact about a number. Returns the fact and the randomly chosen number. Use when you want surprising facts without specifying one.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool returns a trivia fact, but doesn't disclose behavioral traits such as rate limits, error handling, or what 'randomly chosen' entails (e.g., range, distribution). For a tool with zero annotation coverage, this leaves significant gaps in understanding its operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('Get a trivia fact') and specifies the scope ('about a randomly chosen number'). There is zero waste, and it's appropriately sized for a simple tool with no parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is complete enough to convey the basic purpose. However, it lacks details on output format (e.g., structure of the fact) and behavioral context, which could be important for an agent to use it effectively, though not critical for such a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description doesn't add param info, but with no parameters, the baseline is 4 as it adequately addresses the lack of inputs by implying randomness without requiring user input.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('trivia fact about a randomly chosen number'). It distinguishes itself from siblings like 'date_fact', 'math_fact', and 'number_fact' by specifying 'randomly chosen number' rather than requiring input, though it doesn't explicitly contrast with 'number_fact' which might also handle numbers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description implies usage for random number trivia but doesn't mention when to choose it over siblings like 'number_fact' (which might require a specific number) or other fact tools. There's no explicit context or exclusions stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recallARead-onlyInspect
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the core functionality (retrieve/list memories) and persistence across sessions, but lacks details on error handling (e.g., what happens if key doesn't exist), performance characteristics, or data format. It adequately covers the basic behavior but misses advanced context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and front-loaded: two sentences that directly address purpose and usage without redundancy. Every word earns its place, with no wasted text, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (memory retrieval with optional parameter), no annotations, and no output schema, the description is reasonably complete. It covers the core functionality, usage context, and parameter behavior, but lacks details on return values (e.g., memory content format) and error cases, which would be helpful for full agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful semantics beyond the 100% schema coverage. While the schema describes 'key' as 'Memory key to retrieve (omit to list all keys)', the description contextualizes this as 'previously stored memory by key' and emphasizes the conditional logic ('omit to list all'), reinforcing the dual-purpose nature. However, it doesn't specify key format or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'), distinguishing it from siblings like 'remember' (store) and 'forget' (delete). It explicitly mentions retrieving context saved earlier, which is distinct from fact-based siblings like 'date_fact' or 'math_fact'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Use this to retrieve context you saved earlier in the session or in previous sessions.' It distinguishes when to use this tool (for retrieving saved context) versus alternatives like fact-based tools, and specifies the conditional behavior based on parameter presence (omit key to list all).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changesARead-onlyInspect
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes concurrent fan-out to multiple sources (SEC EDGAR, GDELT, USPTO) and explains that `since` accepts ISO or relative dates. Since no annotations exist, the description carries the burden well, though it omits potential limitations like rate limits or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, well-organized paragraph. Front-loaded with purpose, then details. Every sentence adds necessary information; no waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers all parameters, explains output format (structured changes + total_changes + URIs), and provides usage context. Lacks mention of error handling or empty results, but overall complete for a change-monitoring tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description's role is reduced. The description adds value by explaining relative date formats (e.g., '7d', '30d', '3m', '1y') and clarifying that `type` only supports 'company', going beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'What's new about an entity since a given point in time' and specifies the entity type (company) and outputs (structured changes, total_changes, URIs). It distinguishes from sibling tools by focusing on change monitoring.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use for "brief me on what happened with X" or change-monitoring workflows,' providing clear context. Lacks explicit when-not-to-use or alternatives, but the usage context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rememberAInspect
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it's a write operation ('store'), specifies persistence behavior ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), and hints at session scope. It does not cover aspects like error handling or rate limits, but for a simple storage tool, this is reasonably comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with two sentences that efficiently convey purpose, usage, and behavioral details without any wasted words. Every sentence adds value, such as clarifying persistence rules, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (simple key-value storage), no annotations, no output schema, and 100% schema coverage, the description is mostly complete. It covers purpose, usage, and key behavioral traits like persistence. However, it lacks details on return values or error cases, which could be useful for an agent, but for this context, it is sufficiently comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the 'key' and 'value' parameters. The description does not add any meaning beyond what the schema provides (e.g., no extra syntax, format, or usage details for parameters). According to the rules, with high schema coverage, the baseline is 3 even with no param info in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('store') and resource ('key-value pair in your session memory'), and distinguishes it from siblings like 'recall' (retrieval) and 'forget' (deletion). It explicitly mentions what can be stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), which implicitly distinguishes it from siblings like 'recall' (for retrieval) and 'forget' (for deletion). However, it does not explicitly state when not to use it or name alternatives, such as suggesting 'recall' for reading stored data, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entityARead-onlyInspect
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses accepted inputs, return fields (ticker, CIK, name, URIs), and version scope (v1 only company). It implies read-only behavior, which is appropriate for a lookup tool. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no wasted words. The core purpose is front-loaded. The second sentence concisely adds version info, accepted formats, return values, and a value proposition. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description adequately explains return values. It covers input variations and provides context for when to use (single call replacing multiple). For a simple tool with 2 parameters, this is complete enough for an agent to select and invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by providing explicit examples for the value parameter (AAPL, CIK, 'Apple') and clarifying the type parameter's current limitation ('v1: type="company"'), which goes beyond the schema's enum description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool resolves an entity to canonical IDs across Pipeworx data sources. It specifies the verb 'resolve' and the resource 'entity canonical IDs'. It distinguishes itself from sibling tools like fact lookups or ask_pipeworx by focusing on entity resolution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says the tool operates 'in a single call' and 'replaces 2–3 lookup calls', implying it is the efficient choice. It provides input format examples (ticker, CIK, name). It does not explicitly say when not to use, but the context is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claimARead-onlyInspect
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses the return structure (verdict, extracted form, actual value with citation, delta) and the internal process (NL parsing, entity resolution, data lookup, comparison). It does not explicitly state non-destructive behavior, but the tool's purpose implies read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (three sentences) and well-structured. It front-loads the purpose, then scope, then return value, and ends with a value proposition. Every sentence is informative without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given one parameter and no output schema, the description adequately covers the tool's purpose, input, and output. It lacks details on error handling or unsupported claim types but is sufficient for an agent to decide and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single 'claim' parameter, including type and example. The tool description adds context about supported claim types but does not significantly expand on parameter semantics beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'Fact-check a natural-language claim against authoritative sources.' It specifies the supported domain (company-financial claims via SEC EDGAR + XBRL) and distinguishes itself from sibling tools by condensing multiple agent calls into a single invocation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating it replaces 4-6 sequential agent calls, suggesting it's a composite tool for efficient fact-checking. It also scopes to company-financial claims, providing a clear context. However, it lacks explicit when-not-to-use or alternative tool references.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!