TVMaze
Server Details
TVMaze MCP — TV show metadata, episodes, schedules (no auth)

| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-tvmaze |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 16 of 16 tools scored. Lowest: 3.5/5.
The TV-specific tools (search_shows, get_show, list_episodes, get_schedule, search_people) are clearly distinct. The Pipeworx tools (ask_pipeworx, compare_entities, entity_profile, recent_changes, validate_claim, resolve_entity) have overlapping purposes (all deal with business data), but their descriptions differentiate them sufficiently. Even so, the sheer number of tools may still lead an agent to select the wrong one.
All tool names follow a consistent verb_noun pattern in snake_case (e.g., search_shows, get_show, compare_entities, validate_claim). There is no mixing of conventions, making the naming predictable and clear.
With 16 tools, the server is slightly above the typical 3-15 range. However, the combination of two distinct domains (TV shows and business data) partially justifies the count. The tools are well-scoped, but the overall set feels a bit heavy.
The TV domain is well-covered (search, get, episodes, schedule). The business domain offers many query tools but lacks direct data retrieval functions (e.g., raw SEC filing fetch). Reliance on ask_pipeworx for many tasks creates a potential dead end if the NL routing fails. Memory tools add value, but overall coverage is incomplete for advanced data operations.
Available Tools
16 tools

ask_pipeworx
Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
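For concreteness, here is a sketch of the params object an MCP client might send in a tools/call request for this tool (the question is taken from the description's own examples; the surrounding JSON-RPC envelope is omitted):

{
  "name": "ask_pipeworx",
  "arguments": {
    "question": "What is the US trade deficit with China?"
  }
}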
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It explains the routing behavior across 300+ sources and that it picks the tool, fills arguments, and returns results. However, it does not discuss failure modes, authentication needs, or limitations of the routing logic.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences plus a list of examples. It is front-loaded with the core purpose and usage guidance, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter and high schema coverage, the description is complete. It covers what the tool does, when to use it, and what kind of questions to ask. The lack of an output schema is not a deficiency here.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by providing examples of valid questions and reinforcing that the parameter accepts natural language, going beyond the schema's minimal description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it answers natural-language questions by automatically selecting the appropriate data source. It distinguishes itself from sibling tools by positioning itself as a meta-tool that routes queries to the correct underlying tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: when a user asks for lookups and the agent does not want to manually determine which Pipeworx tool to call. It provides multiple example queries, making usage intent very clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entities
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). | |
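A hypothetical call comparing two companies, using the ticker examples from the schema:

{
  "name": "compare_entities",
  "arguments": {
    "type": "company",
    "values": ["AAPL", "MSFT"]
  }
}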
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes data sources (SEC EDGAR, FAERS) and return type (paired data with citation URIs) without annotations. Indicates non-destructive reading behavior, though no explicit read-only guarantee.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single paragraph of about 5 sentences, each sentence adds value. Could be slightly more structured but remains efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage, and behavior well, but lacks description of the return format (e.g., structure of 'paired data') since no output schema exists. Adequate but has a clear gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already covers parameters with 100% description coverage; the description adds contextual detail on what data each type retrieves (e.g., revenue, adverse events) beyond the schema's minimal descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies the verb 'compare' and the resource '2-5 companies or drugs', clearly distinguishing from sibling tools like 'entity_profile' which likely handles single entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage cues like 'compare X and Y', 'X vs Y', and mentions it replaces multiple calls. Lacks explicit when-not-to-use but context from sibling names implies single-entity cases are handled elsewhere.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
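A sketch using the query example from the schema; the limit of 10 is arbitrary (the default is 20):

{
  "name": "discover_tools",
  "arguments": {
    "query": "analyze housing market trends",
    "limit": 10
  }
}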
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description states it returns top-N relevant tools with names + descriptions, and suggests calling it first. Lacks details on how relevance is determined, side effects, or auth needs, but covers the basic behavior adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is a single paragraph that front-loads purpose and usage. It includes examples of queries but is not overly verbose. Could be slightly more concise, but still well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a meta-tool that searches other tools, the description explains what it returns (names+descriptions) and when to use it. No output schema, but the tool's output is straightforward. Lacks error handling or interpretation guidance, but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters already described. The description does not add new semantic value beyond what the schema provides (e.g., 'Natural language description' already in schema).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'find/discover' and the resource 'tools'. Distinguishes from siblings by specifying it returns the top-N most relevant tools for browsing/exploring the toolset, not for a specific task.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when you need to browse, search, look up, or discover what tools exist' and 'Call this FIRST when you have many tools available and want to see the option set'. It implies the tool is unnecessary when you already know which tool to call, but could be more explicit about when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profile
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. | |
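An illustrative call with the ticker from the description; per the schema, only "company" is supported today:

{
  "name": "entity_profile",
  "arguments": {
    "type": "company",
    "value": "AAPL"
  }
}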
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It details the return data (SEC filings, financials, patents, news, LEI) and mentions citation URIs. It does not explicitly state it is read-only, but given it's a profile retrieval, it's implied. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is somewhat long but front-loaded with the main purpose and usage triggers. It lists examples and return data efficiently. Could be slightly more concise but remains focused and informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description compensates by fully listing the returned data categories and sources. Parameters are clearly explained with limitations. The tool is simple (2 params, no nested objects) and the description covers all necessary context for an agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, providing a baseline of 3. The description adds significant value by clarifying that only 'company' is supported, and that names must be resolved via resolve_entity. It also explains the value parameter can be ticker or CIK, reinforcing the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Get' and resource 'everything about a company', clearly stating the tool's purpose. It provides example queries ('tell me about X', 'give me a profile of Acme') and distinguishes itself from siblings by mentioning it replaces calls to 10+ tools across various sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool (user asks for company info) and when not to (if only a name is available, use resolve_entity first). It also names alternatives (resolve_entity) and provides context for its use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
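A minimal sketch; the key name is illustrative and would be whatever was previously stored via remember:

{
  "name": "forget",
  "arguments": {
    "key": "target_ticker"
  }
}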
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full burden. It clearly signals a destructive action ('delete'), which is transparent. It does not say whether the operation is reversible, but for a simple delete this level of transparency is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with no wasted words. Each sentence adds value: action, usage context, and related tools. Perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description covers what it does, when to use it, and relationships with siblings. Complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the description adds no additional meaning beyond the schema's 'Memory key to delete'. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Delete a previously stored memory by key', specifying the verb 'delete' and the resource 'memory by key'. It distinguishes from siblings 'remember' and 'recall' which are for storage and retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'when context is stale, the task is done, or you want to clear sensitive data'. Also provides guidance to pair with 'remember and recall', indicating related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_schedule
What TV shows are airing on a given day in a country. Useful for "what aired last night on UK TV". Returns episode + show records.
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | YYYY-MM-DD | today |
| country | No | 2-letter country code | "US" |
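A sketch of the description's "UK TV" example, assuming ISO 3166-1 alpha-2 country codes ("GB" for the UK); the date is illustrative, and both parameters can be omitted to fall back to today/US:

{
  "name": "get_schedule",
  "arguments": {
    "date": "2025-01-15",
    "country": "GB"
  }
}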
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description must carry the burden. It mentions 'returns episode + show records' but does not disclose safety (read-only implied), rate limits, or behavior when data is missing. The description is minimal for behavioral expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are remarkably efficient: the first states purpose, the second provides an example and return type. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description mentions return type ('episode + show records'). It is fairly complete for a simple schedule lookup but could include details on pagination, timezone handling, or error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters (date format YYYY-MM-DD, country 2-letter code). The description adds no additional semantics beyond what is already in the schema, so baseline score 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'What TV shows are airing on a given day in a country' and gives a concrete example, 'what aired last night on UK TV'. It clearly differentiates itself from siblings like get_show or search_shows by focusing on the schedule rather than individual show data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a useful example but does not explicitly state when to use this tool versus alternatives like search_shows or list_episodes. It lacks guidance on prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_show
Fetch a full show record by TVMaze ID. Optionally embed episodes, cast, seasons, or previous/next episode info for richer responses.
| Name | Required | Description | Default |
|---|---|---|---|
| show_id | Yes | TVMaze show ID | |
| embed_cast | No | Include cast | |
| embed_seasons | No | Include season metadata | |
| embed_episodes | No | Include full episode list | |
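An illustrative call with a hypothetical show ID; unset embed flags are simply omitted:

{
  "name": "get_show",
  "arguments": {
    "show_id": 123,
    "embed_episodes": true
  }
}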
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations present, so description carries full burden. It only states 'Fetch' (read operation) with no mention of side effects, authentication, or rate limits. Minimal transparency beyond basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences. First sentence states primary function, second adds optional parameters. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description should clarify return value. 'Full show record' is vague. Does not explain what fields are returned or how embeds affect response. Adequate but not complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with descriptions for all params. The description adds grouping of the embed params and mentions 'previous/next episode info', which has no corresponding parameter in the schema; this extra context could mislead. Baseline 3 due to high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action (Fetch a full show record) and the identifier (by TVMaze ID). It distinguishes from sibling tools like list_episodes and search_shows by focusing on fetching a complete record with optional embeds.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies when to use optional embeds ('for richer responses') but does not explicitly state when not to use this tool or compare with alternatives like list_episodes for episode-only needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_episodes
List all episodes of a show. Returns season, number, name, airdate, runtime, and summary.
| Name | Required | Description | Default |
|---|---|---|---|
| show_id | Yes | TVMaze show ID | |
| specials | No | Include special episodes | false |
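A minimal sketch with a hypothetical show ID, opting in to special episodes:

{
  "name": "list_episodes",
  "arguments": {
    "show_id": 123,
    "specials": true
  }
}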
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description implies a read operation ('List'), but without annotations (readOnlyHint, destructiveHint) the agent must infer safety. No disclosure of authorization needs, rate limits, or side effects. The description is adequate but relies on common understanding rather than explicit behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, each serving a clear purpose: stating the action/resource, then listing return fields. No superfluous text. Efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 parameters, no output schema), the description provides sufficient information about what the tool returns. Could be slightly improved by mentioning error handling or behavior for invalid show_id, but not necessary for basic usage. Almost complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides descriptions for both parameters (show_id and specials), achieving 100% coverage, and the tool description adds no meaning beyond it, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List all episodes of a show') and specifies the returned fields ('season, number, name, airdate, runtime, and summary'). It distinguishes itself from sibling tools like 'get_show' (single show) and 'search_shows' (search), making its purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives or when not to use it. However, the purpose is clear enough that an agent can infer usage context (e.g., retrieve all episodes for a known show). Lacks explicit exclusions or alternative suggestions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedback
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. | |
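A hypothetical report; the message is invented for illustration, and the optional context field is omitted since its exact structure is not shown:

{
  "name": "pipeworx_feedback",
  "arguments": {
    "type": "data_gap",
    "message": "No tool exposes raw SEC filing documents, only derived summaries."
  }
}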
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description adds behavioral context: rate limits, daily digest read by team, free usage, and feedback affecting roadmap. This goes beyond the input schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise and front-loaded, starting with the core purpose. Every sentence adds value: usage scenarios, constraints, audience, and impact. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a feedback tool with no output schema, description covers types, required context, framing guidelines, rate limits, and roadmap influence. No obvious gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed descriptions for each parameter. The description does not add additional parameter-specific semantics beyond the schema, but it reinforces overall usage context. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is for giving feedback (bugs, features, data gaps, praise) to the Pipeworx team. It uses specific verbs like 'tell' and lists distinct types, differentiating it from sibling tools which are primarily data retrieval or entity operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use scenarios (wrong data, missing tool, praise) and what-not-to-do (don't paste end-user prompt). Also mentions rate limits (5 per identifier per day) and that it's free, giving clear boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
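A minimal sketch retrieving one key (the key name is illustrative); sending empty arguments instead would list all saved keys:

{
  "name": "recall",
  "arguments": { "key": "target_ticker" }
}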
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses scoping per identifier (anonymous IP, BYO key hash, account ID) and pairs with remember/forget. Could mention return format or error handling, but otherwise good.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, no fluff, front-loaded with core purpose. Each sentence adds value (purpose, usage, scoping, pairing). Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with 1 optional param and no output schema. Description covers purpose, usage, scoping, and sibling relationships. Lacks output description, but otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear param description. The tool description adds minimal extra value for the parameter (only reinforces dual use). Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs 'retrieve' and 'list', clearly states the resource (saved keys/values), and distinguishes from sibling tools 'remember' and 'forget'. The purpose is unmistakable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context: 'use to look up context the agent stored earlier', and notes that omitting the key lists all keys. However, there is no explicit when-not-to-use guidance or mention of alternatives beyond the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changes
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). | |
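A sketch built entirely from the description's own values: a ticker plus relative shorthand for the window:

{
  "name": "recent_changes",
  "arguments": {
    "type": "company",
    "value": "AAPL",
    "since": "30d"
  }
}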
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool fans out to three sources in parallel and returns structured changes with citation URIs. However, it does not cover potential latency, error handling, or data freshness implications, which would be valuable for an agent to know.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph but efficiently structured: it starts with the core purpose, provides usage cues, then technical details about sources and return format. No sentence is wasted, though bullet points could slightly improve readability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately describes the return value ('structured changes + total_changes count + pipeworx:// citation URIs'). It covers input parameters, use cases, and data sources. More detail on the structure of 'structured changes' would enhance completeness, but it is sufficient for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers all three parameters with descriptions, and the description adds significant value by explaining the `since` parameter's accepted formats (ISO date or relative shorthand like '7d', '30d', '3m', '1y') and clarifying that `value` can be a ticker or CIK. The examples go beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: providing 'what's new with a company' over a time window, and gives specific example queries that distinguish it from sibling tools like `entity_profile` or `compare_entities`. It names the data sources (SEC EDGAR, GDELT, USPTO) that it fans out to, making its functionality precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists when to use the tool via example user queries ('what's happening with X?', 'any updates on Y?', etc.) and mentions typical monitoring windows ('30d' or '1m'). It does not explicitly state when not to use it, but the examples provide sufficient context for appropriate invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
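An illustrative call reusing a key example from the schema; the stored value is invented:

{
  "name": "remember",
  "arguments": {
    "key": "target_ticker",
    "value": "AAPL"
  }
}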
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, but description fully discloses persistence scope (by identifier), duration (persistent vs 24-hour), and that it's a write operation. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficient 4-sentence structure: purpose first, then usage, behavior, and sibling pairings. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-param tool with no output schema, the description covers purpose, usage, persistence details, and relationship to siblings. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with good descriptions. Description adds examples but does not significantly extend meaning beyond schema. Baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool saves data for reuse, with specific verb 'Save data' and resource 'memory'. It distinguishes from siblings recall and forget, making its purpose unique.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use it: when discovering information worth carrying forward, such as tickers, addresses, or preferences. It only implies when not to use it, but pairing with recall and forget gives complete workflow guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entity
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). | |
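A sketch using the drug example from the description:

{
  "name": "resolve_entity",
  "arguments": {
    "type": "drug",
    "value": "ozempic"
  }
}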
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It clearly discloses that the tool returns IDs plus pipeworx:// citation URIs, providing examples of what gets returned. It also states the tool's behavior (lookup based on type and value).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences and each sentence adds value. It is front-loaded with the core purpose and usage guidance. No redundant or wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains the return values (IDs plus citation URIs) and identifies the ID systems. It covers input requirements, purpose, and usage context, making it complete for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents both parameters. The description adds value by giving examples of valid inputs for both type and value (e.g., 'ticker (AAPL), CIK (0000320193), or name'), which enhances understanding beyond the enum and type descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it looks up canonical identifiers for companies or drugs, with specific examples (Apple, Ozempic) and mentions the ID systems (CIK, ticker, RxCUI, LEI). It distinguishes from siblings by noting it replaces 2–3 lookup calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use this BEFORE calling other tools that need official identifiers.' It provides context on when to use it and the ID systems needed. However, it does not explicitly state when not to use it or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_people
Search actors / people by name. Returns id, name, country, birthday, gender, image.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Person name | |
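A minimal sketch; the name is an arbitrary illustration:

{
  "name": "search_people",
  "arguments": {
    "query": "Bryan Cranston"
  }
}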
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return fields but lacks details on pagination, ordering, rate limits, or auth needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that front-loads purpose and return fields. Every word earns its place; no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple search tool with one parameter, description is adequate but lacks details on result limits, ordering, or edge cases. Minimal but functional.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and description matches by stating 'by name'. No additional meaning added beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states tool searches actors/people by name and lists returned fields. Distinguishes from sibling 'search_shows' which searches shows.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like entity_profile or resolve_entity. Implied usage but no exclusions or context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_shows
Search TV shows by name. Returns shows with id, name, premiere date, network, genres, rating, status, and TVMaze URL.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Show name (full or partial) | |
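An illustrative call; per the schema, a partial name also works:

{
  "name": "search_shows",
  "arguments": {
    "query": "Breaking Bad"
  }
}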
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It indicates a search operation (read-only) but lacks details on rate limits, pagination, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two front-loaded sentences with no wasted words. Every sentence serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains return values adequately. Could benefit from pagination or ordering info, but overall complete for a simple search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the one parameter. The description adds value by specifying output fields but does not significantly extend parameter meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches TV shows by name and lists the returned fields. It distinguishes itself from sibling tools like 'get_schedule' or 'search_people'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching shows by name but does not explicitly state when to use this tool versus alternatives or provide exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claim
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". | |
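A sketch using the example claim from the schema:

{
  "name": "validate_claim",
  "arguments": {
    "claim": "Apple's FY2024 revenue was $400 billion"
  }
}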
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, description provides good behavioral context: returns verdict with five categories, extracted form, value with citation, and delta. Notes underlying sources (SEC EDGAR + XBRL) and that it replaces several sequential calls. Lacks details on error handling or latency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured: purpose, usage, domain, return value in logical order. Each sentence adds useful information, no redundancy. Moderate length appropriate for a non-trivial tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema, description covers purpose, usage, domain, and expected return format. Missing details on error scenarios or unsupported claim types, but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Single 'claim' parameter has full schema description (100% coverage). Description adds value with concrete examples and states the parameter expects natural language, clarifying usage beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool fact-checks natural-language claims against authoritative sources, with specific domain (company-financial) and examples. Distinguishes from siblings as no other tool performs fact-checking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('check whether something a user said is true') with example queries. Identifies domain (company-financial) and notes version limitations. Does not explicitly state when not to use, but domain restriction implies exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.