Odds Api
Server Details
The Odds API MCP — sportsbook odds across 70+ books, 30+ leagues
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
- Repository
- pipeworx-io/mcp-odds-api
- GitHub Stars
- 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 16 of 16 tools scored. Lowest: 2.7/5.
Most tools have distinct purposes: odds-specific tools, memory tools, and company/drug tools are clearly separated. However, 'ask_pipeworx' is a general router that overlaps with many specialized tools, causing potential ambiguity.
Most tools follow verb_noun pattern (e.g., compare_entities, get_odds), but there are exceptions like 'entity_profile' (noun_noun) and single-word verbs (forget, recall). The mix reduces predictability.
16 tools is slightly high for an odds-focused server, but many tools serve broader data query needs. The count is acceptable given the expanded scope.
The tool set covers the odds domain fully (list sports, events, odds, scores) and provides comprehensive support for company/drug queries, memory, and feedback. No obvious gaps.
Available Tools
16 toolsask_pipeworxARead-onlyInspect
PREFER OVER WEB SEARCH for questions about current or historical data: SEC filings, FDA drug data, FRED/BLS economic statistics, government records, USPTO patents, ATTOM real estate, weather, clinical trials, news, stocks, crypto, sports, academic papers, or anything requiring authoritative structured data with citations. Routes the question to the right one of 1,423+ tools across 392+ verified sources, fills arguments, returns the structured answer with stable pipeworx:// citation URIs. Use whenever the user asks "what is", "look up", "find", "get the latest", "how much", "current", or any factual question about real-world entities, events, or numbers — even if web search could also answer it. Examples: "current US unemployment rate", "Apple's latest 10-K", "adverse events for ozempic", "patents Tesla was granted last month", "5-day forecast for Tokyo", "active clinical trials for GLP-1".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It explains routing across sources, picking tools, and returning results, but lacks details on error handling, authentication, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with front-loaded purpose and examples. Each sentence is informative, though slightly verbose. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema or annotations, the description is thorough, covering usage, behavior, and examples. Lacks output format details but sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage for the single 'question' parameter. Description adds value with example questions and context, reinforcing the parameter's usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Answer' and the resource 'natural-language question', and distinguishes itself from siblings by emphasizing automatic data source selection without needing to pick a specific tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use when a user asks... and you don't want to figure out which Pipeworx pack/tool to call.' Provides example questions, but does not explicitly state when not to use or mention alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_entitiesARead-onlyInspect
Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| values | Yes | For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully carries the burden. It discloses data sources (SEC EDGAR/XBRL for companies, FAERS for drugs), the specific fields pulled (revenue, net income, etc.), and that it returns paired data with citation URIs. It does not mention authorization or rate limits but is otherwise transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured: starts with the core purpose, then lists use cases, then details each type. Every sentence contributes meaning, and the length is appropriate for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains return values (paired data + citation URIs) and data sources, which is sufficient given no output schema. It does not cover error handling or limits, but for a comparison tool this is minor. Overall very complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, but the description adds significant value by explaining what each type pulls (company: financials, drug: adverse events, approvals, trials) and providing concrete examples like 'AAPL' and 'ozempic'. This goes well beyond the schema's basic type descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool compares 2-5 companies or drugs side by side, with specific examples of use cases like 'compare X and Y' and 'X vs Y'. It distinguishes from sibling tools by noting it replaces 8-15 sequential agent calls, implying a batch comparison capability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists trigger phrases such as 'compare X and Y' and 'X vs Y', and indicates when to use it for tables/rankings. It does not explicitly name alternative tools for single-entity queries, but the context signals and sibling list suggest entity_profile for individual lookups.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_toolsARead-onlyInspect
Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description adequately covers that it returns top-N relevant tools with names and descriptions. Does not mention destructive actions (implied read-only). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is well-structured with key information front-loaded. Each sentence adds value, though slightly verbose with many examples. Still clear and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description covers purpose, input format, output summary, and when to use. Sufficient for an agent to decide invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and description adds context with example queries and explains the limit parameter's default and max, enhancing understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool finds tools by describing data or task, and gives numerous examples. Differentiates from siblings as a meta-tool for discovery.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Call this FIRST when you have many tools available' and provides specific use cases with domain examples. Implicitly advises against use when already knowing the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_profileARead-onlyInspect
Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today; person/place coming soon. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavior: returns recent SEC filings, financials, patents, news, and LEI with pipeworx:// citation URIs. It also notes the type limitation (only 'company' supported). No destructive actions are implied.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph that front-loads purpose and usage. Every sentence adds value, though it is slightly long. It efficiently covers use cases, data sources, and parameter details without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return content and citation format. It covers input constraints, supported data sources, and usage context. For a complex aggregation tool, this is highly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, but the description adds significant value: clarifies the enum for 'type' (only 'company'), explains 'value' accepts ticker or CIK, and warns that names are not supported, directing to resolve_entity. This goes beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get everything about a company in one call.' It lists specific data sources (SEC filings, financials, patents, news, LEI) and provides concrete example queries ('tell me about X', 'research Microsoft'), making it distinct from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use the tool (for comprehensive company profiles) and provides an alternative: 'use resolve_entity first if you only have a name.' It implies not to use if you only need a single data source, but doesn't list all sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forgetADestructiveInspect
Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses destructive nature ('Delete'), but no additional details on permanence or side effects. Annotations absent, so description carries full burden; adequate for simple tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action and purpose, zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple single-param tool with no output schema; description completely covers purpose, usage, and pairing with siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Parameter 'key' described in schema as 'Memory key to delete'; description adds no new semantics. Schema coverage 100% sets baseline 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Delete' with resource 'previously stored memory by key'. Distinguishes from siblings 'remember' and 'recall'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (stale context, task done, clear sensitive data) and suggests pairing with remember and recall.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_event_oddsCRead-onlyInspect
Odds for a single event — allows richer markets (player props, alt lines). Higher quota cost per call.
| Name | Required | Description | Default |
|---|---|---|---|
| markets | No | Includes player_props_* markets for some leagues | |
| regions | No | ||
| event_id | Yes | ||
| sport_key | Yes | ||
| odds_format | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must carry behavioral disclosure. It mentions a higher quota cost, which is a behavioral trait, but does not address mutability, authentication, or whether it is read-only. This is middling transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficient and front-loaded, but it omits potentially useful details that could be added without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given there are 5 parameters, no output schema, and no annotations, the description is too sparse. It fails to explain the return format, parameter usage, or error conditions, leaving the agent underinformed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 20%, and the description adds no meaning to the parameters beyond what the schema already provides. Only the 'markets' parameter has a schema-level description; others remain undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool retrieves odds for a single event and emphasizes richer markets like player props and alternate lines, distinguishing it from siblings such as 'get_odds' which presumably handles multiple events. However, it does not explicitly name the alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The note about 'higher quota cost per call' implies a trade-off but does not specify when to avoid it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_eventsCRead-onlyInspect
Upcoming + live events for a league (without odds — useful to discover event IDs).
| Name | Required | Description | Default |
|---|---|---|---|
| event_ids | No | ||
| sport_key | Yes | ||
| date_format | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It mentions 'upcoming + live' but does not specify if past events are included, pagination, or any limits. The date_format parameter is not explained. This lack of detail reduces transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, which is concise, but it lacks structure and detail that would be helpful. It front-loads the key purpose but omits necessary information, making it overly brief.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (3 params, no nested objects, no output schema), the description covers the basic purpose but is incomplete. It fails to explain parameter usage or behavioral details, leaving gaps for a first-time user.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, and the description provides no additional meaning for the three parameters (event_ids, sport_key, date_format). It does not explain what values are expected or how they affect the output.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns upcoming and live events without odds, and it is useful for discovering event IDs. It distinguishes from sibling tools like get_event_odds and get_odds which include odds. However, it does not specify the 'league' concept or how it relates to the sport_key parameter, leaving slight ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when odds are not needed and when event IDs are needed, but it does not explicitly compare with sibling tools or state when not to use this tool. Given siblings include get_event_odds and get_odds, more explicit guidance would be beneficial.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_oddsARead-onlyInspect
Current odds for upcoming + live events in a league. Each market type costs 1 quota credit per region per call — pick markets carefully.
| Name | Required | Description | Default |
|---|---|---|---|
| markets | No | h2h | spreads | totals | outrights — comma-sep (default h2h) | |
| regions | No | us | uk | eu | au — comma-sep (default us) | |
| event_ids | No | Restrict to specific event IDs (comma-sep) | |
| sport_key | Yes | Sport key from list_sports, e.g. "americanfootball_nfl" | |
| bookmakers | No | Comma-sep bookmaker keys (e.g. "draftkings,fanduel") to restrict | |
| date_format | No | iso (default) | unix | |
| odds_format | No | american (default) | decimal |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full burden. It discloses quota credit cost, a key behavioral trait. However, it does not mention read-only nature, authentication needs, or rate limits. The cost info adds value beyond schema but lacks completeness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states the tool's core function, second provides a critical behavioral note on costs. No waste, front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 7 parameters and no output schema, the description covers purpose and cost but omits details like return format, pagination, error handling, or integration hints with sibling tools. Adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage, so baseline is 3. The description does not elaborate on any parameter beyond the schema's own descriptions; no additional semantic value added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it provides current odds for upcoming and live events in a league. However, it does not differentiate from sibling 'get_event_odds', which may cause confusion for an AI agent selecting between them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes a useful cost warning ('Each market type costs 1 quota credit per region per call — pick markets carefully.'), which guides economical use. But no explicit guidance on when to use this tool vs alternatives like get_event_odds or get_scores.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_scoresCRead-onlyInspect
Live + recent final scores. Costs 2 requests per call.
| Name | Required | Description | Default |
|---|---|---|---|
| days_from | No | 1-3 days from now (default 1) | |
| sport_key | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description must carry full burden. Only states 'live + recent final scores' and cost. Does not disclose data format, authentication needs, rate limits, or response structure. Minimal behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence plus cost note is concise and front-loads purpose. However, the brevity sacrifices some clarity and completeness. No extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description should cover return type, expected format, and typical use cases. It only mentions 'scores' and cost. The sibling tools (e.g., get_events) are not differentiated. The description is too sparse to fully support correct tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50% (only days_from has description). The tool description adds no parameter-specific meaning beyond what the schema already offers. For the undocumented sport_key, no additional semantics are provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states 'live + recent final scores', clearly indicating the tool retrieves scores. The verb 'get' aligns with the name. However, it does not specify the scope (e.g., by sport or time range) beyond what the schema implies, making it clear but not fully differentiating.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use get_scores compared to siblings like get_events or get_odds. The cost hint ('2 requests per call') is mentioned but not contextualized. Lacks explicit when-to-use or when-not-to-use advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sportsARead-onlyInspect
List available sports/leagues. By default returns only in-season; pass all=true to include out-of-season.
| Name | Required | Description | Default |
|---|---|---|---|
| all | No | Include out-of-season sports (default false) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses in-season filtering by default and parameter to include out-of-season. Lacks details on return format or error conditions, which is adequate for a simple list tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no unnecessary words, efficient communication of key behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers main usage and parameter behavior; lacking explicit output description but sufficient given tool simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description reinforces the parameter's effect without adding new meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists sports/leagues and specifies default filtering behavior, distinguishing it from other list tools like get_events.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explains default behavior and parameter usage, but does not explicitly mention when to use this tool versus siblings; however, the unique purpose is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pipeworx_feedbackAInspect
Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else. | |
| context | No | Optional structured context: which tool, pack, or vertical this relates to. | |
| message | Yes | Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses rate limits, that feedback is read daily, and that it affects roadmap. Minor gap: doesn't specify if feedback is public or private.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise yet comprehensive. Front-loaded with main purpose, each sentence adds value, no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple feedback tool, the description covers all needed context: input structure, usage rules, rate limits, and team handling. No output schema needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds value by elaborating on the 'type' enum meanings and providing guidelines for the 'message' parameter beyond the schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to report bugs, request features, note data gaps, or give praise. It distinguishes itself from sibling tools by focusing on feedback rather than data retrieval or operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance on when to use each feedback type (bug, feature, data_gap, praise) and what to avoid (pasting end-user prompts). Also notes rate limits and that the tool is free.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recallARead-onlyInspect
Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility. It implies a read operation with 'retrieve' and mentions scoping, but lacks details about side effects, permissions, rate limits, or error conditions. The description is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences long, front-loaded with the core action, followed by use-case context and pairing instructions. No redundant or unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description could have explained the return format or error behavior. However, it covers the tool's purpose, usage scenario, scoping, and relationship to siblings fairly well. Minor gap in not stating what happens if key not found.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (one parameter 'key' described). The description adds value by explaining that omitting the key lists all keys and provides concrete examples of what keys represent (e.g., ticker, address). This goes beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'retrieve' and the resource ('value previously saved via remember' or list all keys). It distinguishes itself from siblings by naming 'remember' and 'forget' in the pairing instructions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool: 'to look up context the agent stored earlier... without re-deriving it from scratch.' It also indicates scoping by identifier. However, it doesn't explicitly state when NOT to use or list alternatives beyond mentioning remember/forget.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recent_changesARead-onlyInspect
What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type. Only "company" supported today. | |
| since | Yes | Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring. | |
| value | Yes | Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses parallel fan-out to three sources (SEC EDGAR, GDELT, USPTO), output structure (structured changes + count + citation URIs), and input format for 'since'. No annotations provided, so description carries the burden; it does well but omits potential issues like rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise and front-loaded with a question, followed by examples and technical details. Every sentence adds value, though slightly verbose in listing examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers input formats, output summary, data sources, and typical use cases. Could mention pagination or limits, but overall complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description reiterates some parameter details but adds little beyond the schema (e.g., examples of relative dates).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves recent changes for a company, with examples of user queries and data sources. It is specific but does not explicitly differentiate from sibling tools like 'entity_profile'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear when-to-use examples and context (e.g., 'what's happening with X?'). Lacks when-not-to-use or explicit alternatives, but the usage guidance is strong.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rememberAInspect
Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Explains scoping by identifier, persistence differences (authenticated vs anonymous, 24-hour limit). Does not explicitly mention overwrite behavior, but adequately conveys basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise description front-loads purpose, then usage, storage details, and related tools. Every sentence is useful with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple set tool, covers all essential aspects: purpose, when to use, scoping, persistence, and related tools. No output schema needed for a side-effect operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both params. Description adds examples of keys ('subject_property', 'target_ticker') and values ('findings, addresses'), providing meaningful context beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'save data' and resource 'key-value pair', clearly distinguishing from siblings like recall and forget by mentioning their complementary roles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('when you discover something worth carrying forward') and implies not for transient data. Provides alternative tools (recall, forget) and context on persistence.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_entityARead-onlyInspect
Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.
| Name | Required | Description | Default |
|---|---|---|---|
| type | Yes | Entity type: "company" or "drug". | |
| value | Yes | For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin"). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must disclose behavior. It mentions returning IDs and citation URIs, implying read-only lookup. However, it does not explicitly state it is a safe read-only operation or mention any rate limits or prerequisites beyond entity name.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is front-loaded with purpose, uses examples efficiently, and every sentence adds value. No fluff. Paragraph length is appropriate for the information conveyed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description adequately explains return values (IDs plus citation URIs). Covers usage context, parameter examples, and ordering. Complete for a simple lookup tool with two parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with enum for type and description for value. Description goes beyond schema by providing examples (Apple → AAPL, Ozempic → RxCUI) and clarifying acceptable value formats (ticker, CIK, name). Adds significant meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it looks up canonical/official identifiers for companies or drugs with specific ID systems (CIK, ticker, RxCUI, LEI). Examples are provided. Sibling differentiation is implied by stating it should be used before other tools, but explicit comparison to siblings like 'entity_profile' is missing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'when a user mentions a name and you need the CIK...' and gives ordering: 'Use this BEFORE calling other tools that need official identifiers.' Context is clear and actionable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validate_claimARead-onlyInspect
Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).
| Name | Required | Description | Default |
|---|---|---|---|
| claim | Yes | Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year". |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full transparency burden. It discloses that the tool is read-only (non-destructive), returns a verdict with citations, and replaces multiple sequential calls. It also notes version limitations (v1 supports only company-financial claims). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise yet comprehensive, front-loaded with the main purpose, and includes examples, return details, and context. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, no output schema, no nesting), the description fully covers behavior, constraints, and return format. It explains the verdict types and citation mechanism, making it complete for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter 'claim' already described in the schema. The description adds value by providing concrete examples ('Apple's FY2024 revenue was $400 billion') and specifying the domain, enhancing understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: fact-checking or validating natural-language factual claims against authoritative sources. It specifies the domain (company-financial claims for US public companies) and distinguishes itself from sibling tools, none of which perform similar verification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool ('Use when an agent needs to check whether something a user said is true') and provides example queries. While it doesn't explicitly state when not to use it, the domain constraints are clear, and no direct alternative sibling exists.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!