Conspiracy Theory
Server Details
conspiracy-theory MCP — wraps StupidAPIs (requires X-API-Key)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-conspiracy-theory
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 9 of 9 tools scored. Lowest: 3.6/5.
Tools are mostly distinct, but ask_pipeworx could be confused with entity_profile for general queries. Memory tools and data tools are clearly separated.
Names are all lowercase snake_case, but not consistently verb_noun; forget, recall, and remember are single verbs, while the others follow a verb_object pattern. Still readable.
9 tools is a well-scoped number for a data retrieval and analysis server, covering query, comparison, profiling, memory, and feedback.
The tool surface covers core operations: query, compare, profile, resolve, memory CRUD, feedback, and tool discovery. Minor gap: no explicit tool for listing all data sources directly, but discover_tools fills that need.
Available Tools
12 tools

ask_pipeworx
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
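For orientation, a call to this tool over MCP would look roughly like the sketch below: a standard `tools/call` JSON-RPC envelope, with the question taken from the description's own examples. The sketches for the other tools below show only the name/arguments portion of this same envelope.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```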
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses that Pipeworx 'picks the right tool, fills the arguments, and returns the result', which explains its delegation behavior. However, it does not disclose potential latency, error handling, or that answers depend on underlying tool availability.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is three sentences plus examples, all relevant. Front-loaded with 'Ask a question in plain English' as the key action. Examples at the end are helpful. Could be slightly more concise by removing 'No need to browse tools or learn schemas' as it's implied.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one required parameter and no output schema, the description is quite complete. It explains the tool's behavior (auto-selection, argument filling) and gives concrete examples. Missing details about potential failure modes or limitations, but not critical for such a simple interface.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter 'question' described as 'Your question or request in natural language'. The description adds value by framing it as 'plain English' and providing examples, but the schema already conveys the core meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it accepts plain English questions and returns answers by automatically selecting and invoking the best tool. Examples illustrate scope (e.g., trade deficit, adverse events, SEC filings). Differentiates from siblings like 'discover_tools' or 'recall' which are more specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells users to 'just describe what you need' and provides three examples showing natural language use. Does not explicitly state when not to use it, but the context signals (single required parameter) and examples imply broad applicability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
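A minimal sketch of the arguments for a company comparison, using tickers from the parameter table; a drug comparison would instead pass type "drug" and names such as "ozempic" and "mounjaro".

```json
{
  "name": "compare_entities",
  "arguments": {
    "type": "company",
    "values": ["AAPL", "MSFT", "GOOGL"]
  }
}
```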
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses what data is returned for each type and mentions pipeworx:// URIs. It does not cover permissions, rate limits, or potential errors, but the disclosed behavior is specific and sufficient for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with purpose, using a structured format for type-specific details. Every sentence is informative and concise, with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, no output schema), the description covers the main functionality, data returned, and efficiency benefit. It lacks details on error handling or return format structure, but is complete enough for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline 3. Description adds value by explaining the specific data fields for each type (company vs. drug) and clarifying the format of values (tickers/CIKs for company, names for drug), going beyond the schema's minimal descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares 2-5 entities side by side, with specific data fields for each type (company: revenue, net income, etc.; drug: adverse events, approvals, trials). It also distinguishes itself by noting it replaces multiple sequential calls, which sets it apart from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when comparing multiple entities and mentions it replaces sequential calls, providing context for when to use. However, it does not explicitly state when not to use this tool or name specific alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
conspiracy_theory_generate
Generate conspiracy theories by connecting unrelated concepts, investigating events with multiple theories, or escalating observations into narratives. Returns theory text with supporting connections. Modes: 'connect' (link two things), 'investigate' (explore event), 'escalate' (build narrative from observation).
| Name | Required | Description | Default |
|---|---|---|---|
| mode | Yes | | |
| depth | No | | |
| event | No | Event to investigate (investigate mode) | |
| prompt | No | "I have noticed that..." (complete mode) | |
| thing_one | No | First thing to connect (connect mode) | |
| thing_two | No | Second thing to connect (connect mode) | |
| confidence | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| depth | No | Depth of investigation applied |
| theory | No | Generated conspiracy theory text |
| confidence | No | Confidence level of the theory |
| connections | No | Supporting connections or related concepts |
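A sketch of a 'connect'-mode call follows; the two concepts passed here are hypothetical placeholders. Note that the description advertises an 'escalate' mode while the prompt parameter is documented for a 'complete' mode, so an agent should verify the mode enum in the schema before relying on either name.

```json
{
  "name": "conspiracy_theory_generate",
  "arguments": {
    "mode": "connect",
    "thing_one": "streetlight outages",
    "thing_two": "pigeon migration patterns"
  }
}
```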
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should clarify behavior. It mentions returning theory text with supporting connections, but does not disclose whether the tool is stateless or if it stores results. For a generation tool, this is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose and modes. Straightforward and efficient, though the sentence about the return value could be folded into the first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the main purpose and modes without explaining output structure beyond 'theory text with supporting connections'. Since there is an output schema, this may be acceptable. The mode mismatch reduces completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaning to modes beyond the schema, but it incorrectly names the third mode as 'escalate' while the schema uses 'complete'. This mismatch could confuse the AI into using an invalid mode. Schema coverage is 57%, and the description compensates somewhat but the error is severe.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates conspiracy theories and lists three specific modes (connect, investigate, escalate) with brief explanations. It distinguishes the tool from unrelated siblings like ask_pipeworx or entity_profile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use each mode: to link two things, explore an event, or build a narrative. However, it does not explicitly exclude cases or mention alternatives, but siblings are unrelated so that is acceptable. A point is deducted because the mode 'escalate' in the description does not match the schema's 'complete'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
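A minimal sketch of the arguments, reusing the query example from the schema; the limit stays within the documented maximum of 50.

```json
{
  "name": "discover_tools",
  "arguments": {
    "query": "analyze housing market trends",
    "limit": 10
  }
}
```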
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states the tool 'returns the most relevant tools with names and descriptions,' which sets expectations about the output format. Since no annotations are provided, the description carries the full burden. It doesn't mention rate limits, authentication, or side effects, but for a read-only search tool, the behavioral transparency is high enough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences: first states purpose, second clarifies return value, third gives usage timing. Every sentence adds value, no fluff. It is front-loaded with the key action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 params, no output schema, no nested objects), the description is sufficiently complete. It explains what the tool does, what it returns, and when to use it. The only minor gap is not specifying the exact output format (e.g., array of tool objects), but for a search tool, the description is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning both parameters are described in the input schema. The description does not add additional meaning beyond the schema for the parameters, so baseline 3 is appropriate. It reinforces the query parameter usage with examples but does not introduce new info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need.' It specifies the verb (search), resource (tool catalog), and method (natural language description). It also distinguishes itself from siblings by noting this is for finding tools among 500+ options, which none of the siblings do.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear when-to-use guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This explicitly tells the agent to use this tool as a first step for tool discovery, distinguishing it from siblings like ask_pipeworx (which likely answers questions) or forget/recall/remember (which manage memory).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
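A minimal sketch of the arguments using the ticker form from the parameter table; the zero-padded CIK "0000320193" would be equally valid for the value field.

```json
{
  "name": "entity_profile",
  "arguments": {
    "type": "company",
    "value": "AAPL"
  }
}
```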
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations given, so description carries the burden. It discloses the types of data returned and that it outputs pipeworx:// URIs, and mentions it replaces many sequential calls. However, it doesn't address error conditions, authorization needs, or rate limits, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four concise sentences with no redundancy. The main purpose is front-loaded, followed by specific details and usage guidance. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return format (citation URIs) and lists the data categories. It mentions efficiency gains. However, it could be more complete on what happens if the entity is not found or if the call fails, but overall sufficient for a profile tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%. The description adds valuable context: type is only 'company' currently but expanding; value expects ticker or CIK, not names, and directs to resolve_entity for name resolution. This goes beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns a full profile of an entity across Pipeworx packs, specifying exactly what data is returned for company type (SEC filings, XBRL data, patents, news, LEI). It distinguishes itself from sibling tools like resolve_entity and compare_entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when not to use: for federal contracts, call usa_recipient_profile directly. Also implies that if you have only a name, use resolve_entity first. This provides clear guidance on alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. States that deletion is by key, but does not mention whether the operation is irreversible, requires confirmation, or has side effects. Acceptable for a simple delete operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no waste. Front-loaded with action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with one required parameter and no output schema. Description is sufficient for the agent to understand usage, though lacks explicit mention of idempotency or error cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description does not add meaning beyond the schema's parameter description ('Memory key to delete'). Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb 'Delete' and the resource 'stored memory by key'. Distinguishes from sibling tools like 'remember' (store) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implied usage: call when you want to delete a memory. No explicit guidance on when not to use or alternative tools for similar actions, but context from sibling names provides implicit differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
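A hedged sketch of a data_gap report; the message text is hypothetical, and the optional context object is omitted because its exact structure is not documented here.

```json
{
  "name": "pipeworx_feedback",
  "arguments": {
    "type": "data_gap",
    "message": "compare_entities does not expose operating margin for company comparisons."
  }
}
```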
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description must carry the full burden. It discloses rate limiting ('5 messages per identifier per day') and hints at appropriate content (excluding user prompts). However, it does not mention side effects, data handling, or response behavior, leaving some transparency gaps for a feedback tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences long, front-loading the purpose and key usage guidelines. Every sentence adds value, with no wasted words. It efficiently covers purpose, use cases, constraints, and rate limiting.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple feedback tool with no output schema, the description is largely complete: it covers purpose, usage, parameter context, and rate limits. It does not explain what happens after submission (e.g., acknowledgment), but that is acceptable for a non-critical input tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal value beyond the schema: it rephrases the enum examples and suggests typical message length. It does not clarify parameter relationships or nested constraints beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action and resource: 'Send feedback to the Pipeworx team.' It enumerates specific use cases (bug reports, feature requests, etc.), making the purpose unambiguous and distinct from sibling tools like ask_pipeworx or discover_tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use (bug reports, etc.) and includes practical instructions like 'Describe what you tried in terms of Pipeworx tools/data' and rate-limit details. It does not explicitly state when not to use, but the sibling tools cover other purposes, so the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so description carries the full burden. It discloses that omitting key lists all memories and that retrieval is for earlier saved context. However, it does not specify what happens if key does not exist, memory size limits, or whether retrieval is destructive. Adequate but could be richer.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each with distinct value: first states function, second provides usage context. No redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple tool with one parameter and no output schema, the description covers the main functionality. It could mention the return format or error behavior, but that is not critical for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter, and description adds context that omitting key lists all keys. This goes beyond schema, explaining the dual behavior. For a single optional param, this is good.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states the tool retrieves a memory by key or lists all memories, which is a specific verb-resource pair. It distinguishes itself from sibling tools like 'remember' (store) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explicitly says when to use it ('retrieve context you saved earlier') and mentions the alternative behavior when key is omitted (list all). Does not explicitly mention when not to use it, but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
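A minimal sketch of the arguments using values from the parameter table, with the relative "30d" shorthand the description recommends for typical monitoring.

```json
{
  "name": "recent_changes",
  "arguments": {
    "type": "company",
    "value": "AAPL",
    "since": "30d"
  }
}
```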
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses the parallel fan-out to SEC EDGAR, GDELT, USPTO and mentions return format (structured changes, count, URIs). It does not cover potential latency, rate limits, or auth requirements, but these are less critical for a read-only tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise and front-loaded with the main purpose. Each sentence adds useful information (fan-out, parameter formats, return type). It is slightly verbose but does not contain filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations or output schema, the description covers the tool's core behavior, parameters, and usage sufficiently. It explains the parallel data sources and return structure. It could be improved by noting any limitations or typical use cases more explicitly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides 100% coverage for all 3 parameters. The description adds value by giving examples for the 'since' parameter (ISO date and relative formats) and reiterating that 'type' only accepts 'company'. This enhances understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving what's new about an entity since a given point in time. It specifies the verb ('what's new'), resource ('entity'), and scope ('since a given point'), and distinguishes from siblings like entity_profile by focusing on temporal changes. However, it does not explicitly contrast with all sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases ('brief me on what happened with X', change-monitoring workflows) and details supported inputs (ISO dates, relative shorthand). It also clarifies that only the 'company' type is supported. It lacks instructions on when not to use this tool or direct comparisons to alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
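A sketch of how the three memory tools chain in practice, using the key example from the schema; the stored value is illustrative.

```json
[
  { "name": "remember", "arguments": { "key": "target_ticker", "value": "AAPL" } },
  { "name": "recall",   "arguments": { "key": "target_ticker" } },
  { "name": "forget",   "arguments": { "key": "target_ticker" } }
]
```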
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description bears full responsibility for behavioral disclosure. It mentions persistence differences for authenticated vs anonymous users, which is valuable context. However, it does not disclose potential limitations like key overwrites, size limits, or whether the memory is shared across sessions, which would improve transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no wasted words. It front-loads the purpose, then adds usage guidelines and behavioral context concisely.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 required string params, no output schema), the description is complete enough. It covers purpose, usage, and key behavioral traits (persistence). One might wish for size limits or overwrite behavior, but these are minor for a straightforward memory store.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters well. The description adds minimal extra meaning beyond examples of keys and values; the schema's descriptions are sufficient. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory, with a specific verb ('Store') and resource ('key-value pair in your session memory'). It distinguishes itself from sibling tools like 'forget' and 'recall' by describing the write operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use it: to save intermediate findings, user preferences, or context across tool calls. It provides context for usage but does not explicitly exclude scenarios where alternatives (like 'recall' or 'forget') might be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
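A minimal sketch of a drug lookup using the example name from the parameter table; a company lookup would pass type "company" with a ticker, CIK, or name.

```json
{
  "name": "resolve_entity",
  "arguments": {
    "type": "drug",
    "value": "ozempic"
  }
}
```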
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses input acceptance, output fields (ticker, CIK, name, URIs), and the nature of the call (single, canonical IDs). Lacks details on error handling or multiple matches but is adequate for a read-only lookup.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each earning its place: first states purpose and benefits, second details version, inputs, and outputs. No redundancy, front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but the description adequately lists return fields. Given the tool's simplicity (two params, one enum), it covers essential behavioral and input aspects. Missing details on error cases or performance but sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds marginal value. However, it provides concrete examples (AAPL, 0000320193, Apple) that clarify the schema's description, elevating from baseline 3 to 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool resolves an entity to canonical IDs across Pipeworx data sources, specifying the supported type (company) and input formats (ticker, CIK, name). It distinguishes itself from sibling tools like ask_pipeworx by focusing on ID resolution rather than general queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (single call replacing 2-3 lookups) and notes v1 only supports company, giving clear context. It does not explicitly list when not to use or alternatives, but the context is sufficient for typical use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
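A minimal sketch of the arguments, reusing the example claim from the schema.

```json
{
  "name": "validate_claim",
  "arguments": {
    "claim": "Apple's FY2024 revenue was $400 billion"
  }
}
```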
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full behavioral disclosure. It describes the return value (verdict, structured form, actual value with citation, percent delta) and notes it is v1 with limited scope. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One paragraph conveying all essential info efficiently. Could be marginally improved with bullet points, but it is already dense and clear. Every sentence contributes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one parameter and no output schema, the description fully covers purpose, supported claims, output components, and efficiency benefits. An agent can correctly decide to invoke it and understand the result.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single 'claim' parameter. The description adds value by providing concrete examples ('Apple's FY2024 revenue...') and explaining the output context, going beyond the schema's basic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool fact-checks natural-language claims against authoritative sources, specifically company-financial claims. It details supported claim types (revenue, net income, cash for public US companies) and the verdicts returned. This clearly distinguishes it from sibling tools like ask_pipeworx or entity_profile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description specifies the supported domain (company-financial claims, public US companies) and what it replaces (4-6 sequential agent calls), giving clear guidance on when to use. It lacks explicit when-not-to-use or alternatives, but the scope is well-defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.