dadjokes
Server Details
Dad Jokes MCP — wraps icanhazdadjoke.com (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-dadjokes
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 14 of 14 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes, but the inclusion of joke tools (get_joke, random_joke, search_jokes) alongside many data query tools creates confusion about the server's primary function. An agent might misinterpret the scope.
All tools follow a consistent verb_noun pattern with underscore separators (e.g., compare_entities, validate_claim, random_joke). No mixing of conventions.
14 tools is a reasonable number, but the server is named 'dadjokes' while 11 of 14 tools are for Pipeworx data queries, not jokes. This mismatch makes the count inappropriate for the advertised purpose.
For the dad jokes domain, only retrieval and search are covered; create, update, and delete are missing. The Pipeworx side appears comprehensive, but coverage of the server's stated joke focus is severely incomplete.
Available Tools
14 tools

ask_pipeworx (Grade A)
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
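The sketch below shows how an MCP client might invoke this tool with a JSON-RPC `tools/call` request; the request id and question text are illustrative placeholders, not values from this listing.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the current unemployment rate?"
    }
  }
}
```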
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses key behavioral traits: the tool automatically selects data sources and fills arguments, and returns results. However, it doesn't mention limitations like rate limits, authentication needs, or what happens with ambiguous questions. The description adds value but lacks comprehensive behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: first sentence states core purpose, second explains the mechanism, third provides usage guidance, and fourth gives concrete examples. Every sentence earns its place with no redundant information. It's appropriately sized for a single-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (natural language interface with automatic tool selection) and lack of output schema, the description does well by explaining the mechanism and providing examples. However, without annotations or output schema, it could benefit from more detail about result formats or error conditions. It's mostly complete but has minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single 'question' parameter. The description adds meaningful context by specifying 'in plain English' and 'natural language', and provides concrete examples that illustrate appropriate parameter values beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'). It distinguishes from siblings by emphasizing natural language interface vs. structured tool selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'No need to browse tools or learn schemas — just describe what you need.' It contrasts this tool with alternatives (like discover_tools) by stating when to use it (natural language queries) vs. when not (tool browsing). The examples further clarify appropriate use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities (Grade A)
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
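For illustration, the `params` portion of a `tools/call` request comparing three companies; the outer JSON-RPC envelope matches the ask_pipeworx example above, and the tickers follow the description's examples.

```json
{
  "name": "compare_entities",
  "arguments": {
    "type": "company",
    "values": ["AAPL", "MSFT", "GOOGL"]
  }
}
```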
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It mentions return format (paired data + URIs) and batch operation, but lacks details on safety, auth requirements, or rate limits. Some transparency is present but incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences front-load the main action and include essential details (entity types, data fields, efficiency claim) without redundancy or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a comparison tool with no output schema, the description adequately covers input constraints, type-specific data, and efficiency gains. Minor gaps like data freshness or error handling are acceptable for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already covers both parameters with descriptions (100% coverage). The description adds value by explaining type-specific returned fields and showing usage examples, providing context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares 2-5 entities side by side, with specific data fields for company and drug types, and highlights efficiency gains over sequential calls. This distinguishes it from siblings like resolve_entity which handles single entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool (comparing multiple entities) and notes efficiency benefits, but does not explicitly exclude cases or mention alternatives such as resolve_entity for single entity lookups.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade A)
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
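An illustrative `params` object for discovering real-estate tools; the query string is taken from the schema example and the limit is an arbitrary value within the documented maximum of 50.

```json
{
  "name": "discover_tools",
  "arguments": {
    "query": "analyze housing market trends",
    "limit": 10
  }
}
```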
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: it's a search operation that returns relevant tools with names and descriptions, and it's intended for initial discovery. However, it lacks details on rate limits, error handling, or response format specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage guidelines, all in two efficient sentences with zero waste. Every sentence adds clear value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search function with 2 parameters), no annotations, and no output schema, the description is mostly complete: it covers purpose, usage, and behavioral intent. However, it could benefit from more details on output structure or error cases to be fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (query and limit). The description adds context by explaining the query's purpose ('describing what you need') and the tool's role in discovery, but doesn't provide additional semantic details beyond what the schema offers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resource ('tool catalog'), and distinguishes it from sibling tools by emphasizing its role in discovering tools among 500+ options, unlike the joke-related siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit guidance on when to use this tool ('Call this FIRST when you have 500+ tools available and need to find the right ones for your task'), including a specific condition (500+ tools) and alternative context (vs. not using it for smaller tool sets).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile (Grade A)
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
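An illustrative `params` object requesting a profile by ticker; per the parameter notes, pass a ticker or zero-padded CIK rather than a company name.

```json
{
  "name": "entity_profile",
  "arguments": {
    "type": "company",
    "value": "AAPL"
  }
}
```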
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden. It discloses that it returns citation URIs, is efficient (replaces multiple calls), and notes a limitation (federal contracts too slow to bundle). It doesn't explicitly state read-only behavior, but that is implied. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, well-structured, and front-loaded. First sentence states the core function, then bullet-style listing of included data, then efficiency and alternative guidance. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (aggregating multiple data sources) and no output schema, the description is quite complete: it lists the data types returned and mentions citation URIs. It also notes the alternative for federal contracts. Could be improved by briefly describing the output format (e.g., JSON structure) but sufficient for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description adds valuable context: only 'company' supported, names not supported (use resolve_entity first), and value can be ticker or CIK. This goes beyond the schema's basic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that it returns a full profile of an entity, listing specific data types (SEC filings, revenue, patents, news, LEI). It distinguishes itself from siblings like resolve_entity and mentions an alternative tool (usa_recipient_profile) for federal contracts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says when to use (for comprehensive entity profile) and when not to use (for federal contracts, use usa_recipient_profile directly). It also explains that it replaces 10-15 sequential calls, giving efficiency context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade C)
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
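An illustrative `params` object deleting one stored key; the key name is borrowed from the remember tool's schema examples and assumes a value was stored earlier in the session.

```json
{
  "name": "forget",
  "arguments": {
    "key": "target_ticker"
  }
}
```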
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states this is a deletion operation, implying it's destructive, but doesn't disclose critical behavioral traits like whether deletion is permanent, reversible, requires specific permissions, or has side effects (e.g., affecting other tools). For a destructive tool with zero annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence with zero waste—it states the action, resource, and parameter context efficiently. It's front-loaded and appropriately sized for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a destructive tool with no annotations and no output schema, the description is incomplete. It doesn't explain what happens after deletion (e.g., confirmation, error handling), return values, or implications for other tools. For a mutation operation, more context is needed to guide safe usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'key' fully documented in the schema as 'Memory key to delete'. The description adds no additional meaning beyond this, such as key format, examples, or constraints. Baseline 3 is appropriate when the schema does all the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the resource ('a stored memory by key'), making the purpose immediately understandable. However, it doesn't differentiate this tool from potential siblings like 'recall' or 'remember' that might also manipulate memories, leaving room for ambiguity in a memory management context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'recall' (likely for retrieving memories) and 'remember' (likely for storing memories), there's no indication of prerequisites, constraints, or when deletion is appropriate versus other operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_joke (Grade C)
Retrieve a specific dad joke by ID. Returns the full joke text.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | The ID of the dad joke to retrieve. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | Unique identifier for the joke |
| joke | Yes | The joke text |
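An illustrative `params` object for fetching one joke; the ID here is a placeholder, since real IDs come from random_joke or search_jokes results, and the response is expected to match the output schema above (id plus joke text).

```json
{
  "name": "get_joke",
  "arguments": {
    "id": "abc123"
  }
}
```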
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool retrieves a joke by ID, implying a read-only operation, but doesn't disclose behavioral traits such as error handling (e.g., what happens if the ID is invalid), authentication needs, rate limits, or response format. This leaves significant gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's function without any wasted words. It is appropriately sized and front-loaded, making it easy to understand at a glance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., joke text, metadata), error conditions, or other contextual details needed for effective use. For a tool with no structured data support, this minimal description falls short.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the parameter 'id' fully documented in the schema as 'The ID of the dad joke to retrieve.' The description adds no additional meaning beyond this, such as ID format or examples. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('a specific dad joke by its ID'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'random_joke' (which presumably fetches random jokes) or 'search_jokes' (which likely searches by criteria).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'random_joke' or 'search_jokes'. It mentions retrieving by ID, which implies usage when the ID is known, but offers no explicit when/when-not instructions or sibling comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback (Grade A)
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
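An illustrative `params` object reporting a data gap; only the required fields are shown, the optional context field is omitted, and the message text is hypothetical.

```json
{
  "name": "pipeworx_feedback",
  "arguments": {
    "type": "data_gap",
    "message": "No tool currently exposes municipal bond issuance data."
  }
}
```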
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses rate limiting and 'Free', but lacks details on success/failure behavior or side effects. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise: two sentences plus rate limit info. Front-loaded with purpose, then key guidelines. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters with 100% schema coverage and no output schema, description provides essential usage context and rate limit. Could mention expected outcome (e.g., 'feedback received'), but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in schema. Description adds minor value by reinforcing content guidelines for 'message' and usage for 'type'. Baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Definition clearly states verb 'Send feedback' and resource 'Pipeworx team', with specific use cases (bug reports, feature requests, missing data, praise). Distinct from sibling tools like ask_pipeworx or discover_tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use scenarios and content guidelines (describe Pipeworx tools/data, omit user prompts). Mentions rate limit of 5 messages per day per identifier. Does not explicitly state when not to use, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
random_joke (Grade B)
Get a random dad joke. Returns joke text and ID.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| id | Yes | Unique identifier for the joke |
| joke | Yes | The joke text |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but only states what the tool does, not how it behaves. It doesn't disclose whether this is a read-only operation, if it has rate limits, what format the joke returns in, or any error conditions. 'Get' implies reading, but no behavioral details are given.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise - a single sentence that states exactly what the tool does with zero wasted words. It's front-loaded with the essential information and earns its place completely.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple zero-parameter tool with no output schema, the description covers the basic purpose adequately. However, without annotations and with sibling tools present, it lacks important context about when to use this versus alternatives and what to expect in return, making it only minimally viable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema already fully documents the parameter situation. The description appropriately doesn't add parameter information, maintaining a baseline score of 4 for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('a random dad joke'), making the purpose immediately understandable. However, it doesn't explicitly differentiate this tool from its siblings (get_joke, search_jokes), which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the sibling tools (get_joke, search_jokes). There's no mention of alternatives, prerequisites, or context for choosing this specific joke-retrieval method.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade A)
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
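An illustrative `params` object that omits the key entirely, which per the description lists all saved keys rather than fetching a single value.

```json
{
  "name": "recall",
  "arguments": {}
}
```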
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that memories can be retrieved from current or previous sessions, adding useful context. However, it doesn't cover behavioral traits like error handling (e.g., what happens if a key doesn't exist), performance aspects, or data persistence details, leaving gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core functionality in the first sentence and uses a second sentence to provide context, with zero wasted words. Every sentence earns its place by adding clear value, making it efficient and well-structured for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is moderately complete. It covers the tool's purpose and basic usage but lacks details on return values, error cases, or how memories are stored/retrieved across sessions. For a tool with no structured output information, this leaves room for improvement in contextual coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description adds value by explaining the semantics of omitting the key to list all memories, which clarifies usage beyond the schema's technical details. This extra guidance justifies a score above the baseline, though it doesn't fully elaborate on parameter constraints or examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Retrieve a previously stored memory by key, or list all stored memories (omit key).' It specifies the verb 'retrieve' and resource 'memory,' making the function unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'remember' or 'forget,' which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: 'Use this to retrieve context you saved earlier in the session or in previous sessions.' It explains the optional parameter behavior (omit key to list all), which is helpful. However, it doesn't specify when not to use it or mention alternatives like 'search_jokes' for related tasks, so it falls short of a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes (Grade A)
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
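An illustrative `params` object monitoring one company over a 30-day window; the relative "30d" shorthand and the ticker follow the parameter notes above.

```json
{
  "name": "recent_changes",
  "arguments": {
    "type": "company",
    "value": "MSFT",
    "since": "30d"
  }
}
```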
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses parallel fan-out to three sources, return structure (structured changes, count, URIs), and parameter formats. Missing info on auth or rate limits, but sufficient for safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences front-loading purpose and usage, no wasted words. Essential details are packed efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given tool complexity (multiple sources, no output schema), description covers purpose, parameters, return fields, and limiting factor (only company). Could mention pagination or max results, but adequate for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline 3. Description adds value by explaining relative time formats for 'since', clarifying 'type' is limited to 'company', and giving examples for 'value' (ticker/CIK).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves changes for an entity since a time point, specifying fan-out to SEC EDGAR, GDELT, and USPTO for companies. It differentiates from siblings like entity_profile by targeting change monitoring and briefing workflows.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases ('brief me on what happened with X', 'change-monitoring workflows') and explains parameter formats. Does not specify when not to use or alternatives, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade A)
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
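An illustrative `params` object saving a resolved ticker for later recall; the key name follows the schema examples and the stored value is hypothetical.

```json
{
  "name": "remember",
  "arguments": {
    "key": "target_ticker",
    "value": "AAPL"
  }
}
```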
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it's a storage operation (implies mutation), specifies persistence differences ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), and hints at session scope. However, it doesn't cover potential errors, limits on key/value size, or concurrency behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with two concise sentences that each earn their place. The first sentence states the core purpose, and the second adds crucial behavioral context about persistence without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (storage with persistence rules), no annotations, no output schema, and 100% schema coverage, the description is mostly complete. It covers purpose, usage context, and key behavioral traits, but lacks details on return values, error conditions, or storage limits, which would be helpful for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('key' and 'value') with examples. The description adds no additional parameter semantics beyond what the schema provides, such as format constraints or usage patterns. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Store') and resource ('key-value pair in your session memory'), and distinguishes it from sibling tools like 'recall' (which likely retrieves) and 'forget' (which likely deletes). It specifies what types of data can be stored ('intermediate findings, user preferences, or context across tool calls').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('save intermediate findings, user preferences, or context across tool calls'), but does not explicitly state when not to use it or name alternatives. It implies usage for persistence across tool calls but lacks explicit exclusions or comparisons to sibling tools like 'recall'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity (Grade A)
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
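An illustrative `params` object resolving a drug name to its identifiers; the drug name comes from the schema example.

```json
{
  "name": "resolve_entity",
  "arguments": {
    "type": "drug",
    "value": "ozempic"
  }
}
```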
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses that it is a single call, version 1 supports only company, accepted input formats, and return fields including URIs. It does not cover error handling or unresolved inputs but provides sufficient behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences are concise and front-loaded with purpose. No unnecessary words, but could be slightly more structured with bullet points for inputs/outputs.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, no output schema), the description explains what it does, what it accepts, and what it returns. It is complete enough for an agent to invoke correctly. Missing edge case handling but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by providing concrete examples (AAPL, CIK, 'Apple') and clarifying that value can be ticker, CIK, or name, which is slightly beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Resolve' and resource 'entity to canonical IDs'. It distinguishes from siblings like 'ask_pipeworx' by specifying it does entity resolution across data sources and replaces multiple lookup calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need canonical IDs for a company ('Replaces 2–3 lookup calls') but does not explicitly state when not to use or mention alternatives like 'ask_pipeworx'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_jokes (Grade B)
Search dad jokes by keyword or topic. Returns matching jokes with IDs and text.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of jokes to return. Defaults to 10. | |
| query | Yes | Term to search for within dad jokes. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| jokes | Yes | Array of matching jokes |
| query | Yes | The search term used |
| total | Yes | Total number of jokes matching the query |
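An illustrative `params` object searching jokes by keyword; the query is a placeholder and the limit overrides the documented default of 10. The result is expected to match the output schema above (jokes array, echoed query, total count).

```json
{
  "name": "search_jokes",
  "arguments": {
    "query": "pizza",
    "limit": 5
  }
}
```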
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions searching but does not disclose behavioral traits such as whether results are paginated, rate limits, authentication needs, error conditions, or what happens if no matches are found. The description is minimal and lacks critical operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It is appropriately sized and front-loaded, with every word earning its place. No structural issues are present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It lacks information on behavioral aspects (e.g., search behavior, result format, errors) and does not compensate for the absence of structured data. For a search tool with two parameters, more context is needed to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters ('query' and 'limit'). The description adds no additional meaning beyond what the schema provides, such as examples of search terms or how the search is performed. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Search') and resource ('dad jokes') with a specific mechanism ('by a keyword or term'). It distinguishes from 'get_joke' (likely retrieves a specific joke) and 'random_joke' (retrieves random jokes), though the distinction could be more explicit. The purpose is not vague or tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching jokes with a keyword, but does not explicitly state when to use this tool versus alternatives like 'get_joke' or 'random_joke'. No guidance on exclusions or prerequisites is provided. Usage is contextually implied rather than explicitly defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim (Grade A)
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
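An illustrative `params` object checking a financial claim; the claim text is the example given in the parameter description.

```json
{
  "name": "validate_claim",
  "arguments": {
    "claim": "Apple's FY2024 revenue was $400 billion"
  }
}
```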
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully discloses the tool's behavior: it uses SEC EDGAR + XBRL sources, returns specific verdicts (five types), includes structured extraction, actual values with citations, and percent delta. It also explains that it replaces multiple agent calls, giving insight into its efficiency. No contradictory or missing critical behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, using three sentences to convey purpose, supported domain, return values, and value proposition. It front-loads the key verb and resource, then efficiently adds details. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a fact-checking tool with one input and multiple outputs, the description covers all essential aspects: input type (natural-language claim), supported domain (company-financial), data sources (SEC EDGAR + XBRL), return items (verdict, structured form, actual value, citation, delta), and comparative advantage (replaces 4-6 calls). Since there is no output schema, the textual description adequately substitutes for it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a clear description for the single 'claim' parameter, including examples. The description adds value by providing example claims (e.g., 'Apple's FY2024 revenue was $400 billion') and specifying that it supports natural-language claims, which goes beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: fact-check natural-language claims against authoritative sources. It specifies the domain (company-financial claims for public US companies), the specific data points (revenue, net income, cash), and the return items (verdict, structured form, actual value, citation, percent delta). This distinguishes it from sibling tools like 'ask_pipeworx' or 'compare_entities'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states that v1 supports company-financial claims for public US companies via SEC EDGAR + XBRL, giving clear usage context. It mentions that it replaces 4-6 sequential agent calls, implying it's a consolidating tool for this specific task. However, it does not explicitly state when not to use it or name alternative tools for other types of claims.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.