MarketMCP
Server Details
Polymarket + HIP-4 + Hyperliquid perps for Claude. 23 tools, cross-platform signals. Free tier.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 23 of 23 tools scored. Lowest: 2.6/5.
Most tools have clearly distinct purposes, with detailed descriptions that differentiate them. Some overlap exists (e.g., get_funding_rates vs. get_top_funding_rates, and get_signals vs. get_pm_hl_divergences), though the descriptions help disambiguate.
All tools follow a consistent verb_noun pattern (e.g., create_api_key, get_funding_rates) with underscores, making naming predictable and easy to navigate.
At 23 tools, the count is slightly high but justified by the breadth of domains covered (prediction markets, perps, whales, signals). Each tool serves a distinct analytical purpose.
The tool set provides comprehensive coverage for market analysis: listing, searching, odds, orderbooks, funding, OI, liquidations, divergences, and whale tracking. No obvious gaps for the intended analytical use case.
Available Tools
23 tools

create_api_key (Create API Key) [Grade: C]
Generate a free PredMCP API key instantly — no email required. Returns the key and ready-to-use MCP config. Call this first if you do not have a key yet. Free tier: 100 calls/day.
| Name | Required | Description | Default |
|---|---|---|---|
| email | Yes | Your email address — used to identify your key and for account recovery | |
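As a rough sketch of the bootstrap flow, the request below issues the MCP `tools/call` JSON-RPC method for `create_api_key` over the Streamable HTTP transport. The endpoint URL and email address are placeholders (the listing does not show the server URL), and a real client would use an MCP SDK and complete the `initialize` handshake first.

```python
import json
import urllib.request

# Hypothetical endpoint — the real server URL is not shown in this listing.
MCP_URL = "https://example.com/mcp"

# JSON-RPC 2.0 payload for an MCP tools/call. A production client performs
# the `initialize` handshake first and reuses the negotiated session.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_api_key",
        "arguments": {"email": "you@example.com"},  # required per the schema
    },
}

req = urllib.request.Request(
    MCP_URL,
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        # Streamable HTTP servers may answer with JSON or an SSE stream.
        "Accept": "application/json, text/event-stream",
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())
```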
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the tool generates a key and returns MCP config, and mentions a daily call limit. However, it fails to clarify idempotency (e.g., behavior if called multiple times with the same email) and contains the contradictory 'no email required' statement, reducing transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short at three sentences and front-loads the core action. However, the inaccuracy in the first sentence reduces efficiency, as the agent must reconcile the contradiction with the schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple creation tool, the description covers purpose, usage trigger, and a basic limit. It omits details about duplicate calls or key regeneration, and the email contradiction introduces ambiguity, leaving it moderately complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides full coverage for the email parameter with format and description. The description adds no meaningful parameter information and instead contains the misleading 'no email required' phrase, which detracts from understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Generate a free PredMCP API key instantly' which is clear, but it also says 'no email required' which directly contradicts the input schema requiring an email. This inconsistency undermines clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Call this first if you do not have a key yet', providing clear when-to-use guidance. It also mentions the free tier limit, but does not discuss when not to use or alternatives, which are not critical given the distinct sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_funding_outliers (Get Funding Outliers) [Grade: A, Read-only]
Hyperliquid perps whose current funding rate deviates significantly from their 7-day average. A spike vs baseline is a stronger signal than raw rate.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Historical window in days to compute the baseline average (default: 7) | |
| min_deviation_factor | No | Minimum ratio of \|current_rate\| / \|avg_rate\| to qualify as outlier (default: 2x) | |
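The outlier test reduces to a ratio check. A minimal sketch of that rule, assuming the server compares absolute values the way the `min_deviation_factor` description implies:

```python
def is_funding_outlier(current_rate: float, avg_rate: float,
                       min_deviation_factor: float = 2.0) -> bool:
    """Flag a perp whose current funding deviates from its baseline average.

    Mirrors the documented default: |current| / |avg| >= 2x. Guards against
    a zero baseline, where any nonzero current rate counts as an outlier.
    """
    if avg_rate == 0:
        return current_rate != 0
    return abs(current_rate) / abs(avg_rate) >= min_deviation_factor

# Example: 0.08% funding vs a 0.02% 7-day average -> 4x deviation, flagged.
print(is_funding_outlier(0.0008, 0.0002))  # True
```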
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so safe. Description adds the comparison logic (current vs 7-day average) but no further behavioral details like pagination or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with purpose. Every word earns its place with no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema and description fails to specify return structure (e.g., which fields per outlier). Lacks guidance on ordering or count, leaving agent uncertain about response format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema provides 100% parameter coverage with descriptions. Description reinforces defaults (7-day) and deviation concept but adds no new semantic layer beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns 'Hyperliquid perps' that are funding rate outliers based on deviation from 7-day average. Distinguishes from siblings like get_funding_rates (raw rates) and get_top_funding_rates (highest rates) by emphasizing deviation signal.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Hints at when to use: 'A spike vs baseline is a stronger signal than raw rate' suggests use for outlier detection. Could explicitly contrast with get_top_funding_rates, but provides clear context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_funding_rates (Get Funding Rates) [Grade: A, Read-only]
Current funding rates for Hyperliquid perpetuals. Positive rate = longs pay shorts (bearish bias); negative = shorts pay longs (bullish bias).
| Name | Required | Description | Default |
|---|---|---|---|
| coins | No | List of asset tickers to fetch, e.g. ["BTC", "ETH"]. Omit to fetch all available assets. |
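The sign convention above is easy to encode. A small sketch of the interpretation rule the description states:

```python
def funding_bias(rate: float) -> str:
    """Interpret a funding rate sign per the description above."""
    if rate > 0:
        return "longs pay shorts (bearish bias)"
    if rate < 0:
        return "shorts pay longs (bullish bias)"
    return "neutral"

print(funding_bias(0.0003))   # longs pay shorts (bearish bias)
print(funding_bias(-0.0001))  # shorts pay longs (bullish bias)
```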
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true. The description adds behavioral context by explaining the interpretation of funding rate signs (bearish/bullish bias), which goes beyond the structured annotation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences that immediately convey the tool's value and interpretation. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with one optional parameter and no output schema, the description fully covers what the tool does and how to interpret results. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single optional parameter 'coins'. The description adds no additional parameter semantics beyond what the schema already provides, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns current funding rates for Hyperliquid perpetuals and explains the meaning of positive/negative values, distinguishing it from siblings like get_top_funding_rates or get_funding_outliers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. While the purpose is clear, the description does not mention contexts where siblings are more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_hip4_vs_pm_arb (Get HIP-4 vs PM Arb) [Grade: B, Read-only]
Finds the same underlying market priced on both HIP-4 (on-chain Hyperliquid) and Polymarket, flagging spreads above threshold. A spread means one venue is mispriced relative to the other.
| Name | Required | Description | Default |
|---|---|---|---|
| min_spread_pct | No | Minimum spread between HIP-4 and Polymarket YES prices to flag (percentage points, default: 3) |
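The flagging rule appears to be a plain absolute spread in percentage points. A sketch under that assumption, with both venues' YES prices expressed as percentages:

```python
def flag_arb(hip4_yes_pct: float, pm_yes_pct: float,
             min_spread_pct: float = 3.0) -> bool:
    """Flag a cross-venue spread in percentage points, per the default above.

    Both inputs are YES probabilities expressed as percentages (0-100).
    """
    return abs(hip4_yes_pct - pm_yes_pct) >= min_spread_pct

# HIP-4 prices YES at 62%, Polymarket at 57% -> 5pp spread, flagged.
print(flag_arb(62.0, 57.0))  # True
```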
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint and openWorldHint. The description adds that it flags spreads above threshold, but doesn't detail output format or behavior. No contradiction, but limited additional behavioral insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no unnecessary text. Front-loaded with key action and details. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (one parameter) and presence of annotations, the description is mostly sufficient but lacks any detail about output format. This omission may require an agent to infer return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the parameter description in the schema is already clear. The tool description adds no extra meaning beyond what the schema provides, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds the same underlying market on HIP-4 and Polymarket and flags spreads, using a specific verb and resource. However, it does not explicitly distinguish it from sibling tools like get_pm_hl_divergences, leaving some ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., get_pm_hl_divergences). The description does not mention when not to use it or provide context for choosing this over similar tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_hl_funding_pm_correlation (Get HL Funding / PM Correlation) [Grade: A, Read-only]
Pairs each Hyperliquid asset (with notable funding) with related Polymarket markets, showing whether funding direction and PM probability are aligned or divergent.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of correlated pairs to return (default: 15) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and openWorldHint, so the description adds no further behavioral context. It does not mention data freshness, pagination, or what happens when no correlations exist, but the annotations sufficiently cover the read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that immediately conveys the tool's purpose without any fluff. It is well front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one parameter, no output schema, clear annotations), the description provides sufficient context for correct usage. No additional information is necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage for the single parameter 'limit', fully described with default, min, max. The description does not add any additional meaning beyond the schema, so baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it pairs Hyperliquid assets with Polymarket markets and indicates the output shows alignment or divergence. It distinguishes from sibling tool 'get_pm_hl_divergences' which likely only focuses on divergences.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for correlation analysis but does not specify when to use this tool versus alternatives like 'get_pm_hl_divergences' or other sibling tools. No explicit exclusions or prerequisites are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_late_game_sports (Get Late Game Sports) [Grade: A, Read-only]
Sports prediction markets on Polymarket closing within a few hours with a high-certainty leading outcome. Targets near-certain resolution for late-game positioning.
| Name | Required | Description | Default |
|---|---|---|---|
| hours_max | No | Maximum hours until market closes (default: 6h) | |
| certainty_pct | No | Minimum leading outcome probability as percentage, e.g. 85 = 85% (default: 85) |
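The two parameters suggest a simple conjunctive filter. A sketch assuming hypothetical `hours_to_close` and `leading_prob_pct` fields, since the listing does not document the response shape:

```python
def late_game_candidates(markets, hours_max: float = 6.0,
                         certainty_pct: float = 85.0):
    """Filter markets the way the two parameters above suggest.

    Each market is assumed (for illustration only) to be a dict with
    'hours_to_close' and 'leading_prob_pct' fields.
    """
    return [
        m for m in markets
        if m["hours_to_close"] <= hours_max
        and m["leading_prob_pct"] >= certainty_pct
    ]

games = [
    {"title": "Team A wins", "hours_to_close": 2.5, "leading_prob_pct": 91},
    {"title": "Team B wins", "hours_to_close": 9.0, "leading_prob_pct": 95},
]
print(late_game_candidates(games))  # only "Team A wins" qualifies
```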
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true, indicating a safe read operation with potentially dynamic results. The description adds no additional behavioral details such as auth requirements or side effects, but since annotations cover the safety profile, a score of 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise, consisting of two sentences that immediately convey the tool's purpose and target use case. Every word earns its place, with no redundancy or unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, read-only), the description covers its purpose and filtering criteria well. However, it does not mention what the output contains (e.g., market IDs, probabilities, resolution times), which would be helpful since no output schema is provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with both parameters (hours_max, certainty_pct) having clear descriptions including defaults and ranges. The description does not add further parameter-level information, so the baseline score of 3 is maintained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool's function: retrieving sports prediction markets on Polymarket that close within a few hours and have a high-certainty leading outcome. The verb 'get' and specific resource 'late game sports' make it distinct from siblings like 'get_markets_near_resolution' which is broader.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states the tool targets near-certain resolution for late-game positioning, providing clear context. However, it does not explicitly mention when to avoid using this tool or suggest alternatives among the many sibling tools, though the specificity to sports and high certainty implicitly guides usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_liquidation_clusters (Get Liquidation Clusters) [Grade: A, Read-only]
Estimated price levels where mass liquidations concentrate for a given Hyperliquid perp, computed from mark price and standard leverage multiples. Higher nearby orderbook liquidity = stronger support/resistance.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Asset ticker to analyze, e.g. "BTC", "ETH", "SOL" |
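The description hints at a mark-price-times-leverage heuristic. A deliberately naive sketch that ignores maintenance margin and fees; the server's exact formula is not documented here:

```python
def naive_liquidation_levels(mark_price: float,
                             leverages=(5, 10, 20, 25, 50)):
    """Estimate price levels where leveraged positions cluster.

    Deliberately naive: ignores maintenance margin, fees, and funding. A
    long at leverage L is wiped out near mark*(1 - 1/L); a short near
    mark*(1 + 1/L).
    """
    return {
        lev: {
            "long_liq": mark_price * (1 - 1 / lev),
            "short_liq": mark_price * (1 + 1 / lev),
        }
        for lev in leverages
    }

for lev, levels in naive_liquidation_levels(60_000.0).items():
    print(f"{lev}x: long ~{levels['long_liq']:,.0f}, "
          f"short ~{levels['short_liq']:,.0f}")
```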
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and openWorldHint, indicating safe behavior. The description adds context about computation from mark price and leverage multiples, and the relationship between orderbook liquidity and support/resistance strength.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loaded with the core purpose, and includes a formula and behavioral insight without any wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description explains the input and concept, it does not describe the output format or structure. Given no output schema, the agent might benefit from knowing whether results are a list of price levels or a map.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema provides 100% coverage with a clear description of the 'coin' parameter. The tool description does not add additional meaning beyond the schema's parameter description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool estimates price levels where mass liquidations concentrate for a specific Hyperliquid perp, using mark price and leverage multiples. This differentiates it from sibling tools like get_orderbook or get_funding_rates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for identifying support/resistance from liquidation data, and the context of sibling tools makes the purpose distinct. However, there is no explicit guidance on when to use this tool versus alternatives like get_orderbook.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_market_context (Get Market Context) [Grade: A, Read-only]
Unified intelligence snapshot for any topic, asset, or keyword: all matching Polymarket and HIP-4 prediction markets combined with live Hyperliquid perp data (price, funding, OI). One call replaces 3+ separate lookups.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Topic, asset, or keyword to look up — e.g. "BTC", "Iran", "Fed rate cut", "Trump" |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and openWorldHint=true, so the description adds value by detailing the data sources combined (Polymarket, HIP-4, Hyperliquid perp data). This provides behavioral context beyond the annotations without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences, front-loads the core purpose, and avoids all unnecessary words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains the return composition (prediction markets and hyperliquid data with specific fields). It is complete enough for a single-parameter tool, though an explicit note about response format would raise it to 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so baseline is 3. The description does not add extra semantics beyond what the schema already provides for the single 'query' parameter, merely reinforcing its purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides a 'unified intelligence snapshot' for any topic, asset, or keyword, combining multiple data sources. It distinguishes itself from sibling tools by noting it replaces 3+ separate lookups, making its purpose very specific and clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'one call replaces 3+ separate lookups', implying it should be used when a broad snapshot is needed. While it doesn't explicitly state when not to use it, the context of sibling tools (e.g., get_funding_rates, get_open_interest) provides clear alternatives for more specific needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_markets (Get Markets) [Grade: A, Read-only]
Live prediction markets from Polymarket and/or HIP-4, sorted by volume. Returns title, YES/NO prices, 24h volume, and expiry.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of markets to return (1–100, default: 20) | |
| active | No | Filter to active/open markets only (default: true) | |
| platform | No | Data source: "polymarket", "hip4", or "all" (default) | all |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and openWorldHint. The description adds value by specifying that markets are 'live', sorted by volume, and which fields are returned. No contradictory or missing behavioral traits beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with purpose, no redundant information. Every word contributes meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (3 optional parameters, no output schema) and good annotations, the description is nearly complete. It could be improved by specifying sorting direction (descending) and mentioning that results are real-time, but it already captures the core functionality.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 3 parameters all with descriptions (100% coverage). The description adds context that the results are sorted by volume (a default behavior not in the schema) and specifies the return fields, which assists the agent in understanding parameter impact.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns live prediction markets from specific sources (Polymarket and HIP-4) sorted by volume, and lists the exact return fields (title, YES/NO prices, 24h volume, expiry). This distinguishes it from siblings like search_markets or get_market_context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool versus alternatives such as search_markets or get_market_context. No explicit when-not or context for selection is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_markets_near_resolution (Get Markets Near Resolution) [Grade: A, Read-only]
Polymarket markets resolving within the next N hours with a leading probability above threshold. Useful for resolution arbitrage and last-minute positioning.
| Name | Required | Description | Default |
|---|---|---|---|
| hours | No | Maximum hours until resolution (default: 24h, max: 168h = 7 days) | |
| min_prob | No | Minimum leading outcome probability to include (default: 0.7 = 70%) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and open-world hints. The description adds value by specifying filtering criteria (time and probability threshold), which goes beyond what annotations convey. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two efficient sentences: one declaring functionality, one stating use case. No redundant information, front-loaded with key details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple filtered-list tool with good annotations and complete schema, the description is sufficient. It clarifies the selection criteria and use case. Output format is not described but is implicitly a list of markets.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with descriptions for both parameters. The description adds no new meaning beyond what the schema provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool's function: retrieving Polymarket markets about to resolve, filtered by time and probability. It uses specific verbs ('resolving', 'filtered') and distinguishes from generic siblings like get_markets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states it is 'useful for resolution arbitrage and last-minute positioning', providing clear context for use. It does not explicitly exclude scenarios, but the purpose is well-defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_movers (Get Movers) [Grade: A, Read-only]
Top prediction markets ranked by 24h volume spike or biggest YES/NO price swing. Surfaces breaking news bets and momentum plays across Polymarket and HIP-4.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of top movers to return (1–20, default: 10) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds behavioral context by explaining that it surfaces 'breaking news bets and momentum plays' and that markets are ranked by volume spikes or price swings. This goes beyond annotations without contradicting them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no filler. The first sentence clearly states the output and ranking criteria, the second adds the use case and sources. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter with good schema documentation, no output schema, and a straightforward purpose, the description is complete. It explains what 'movers' means and the platforms covered, leaving no ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the description adds no additional meaning to the single parameter 'limit'. The schema already documents its range and default, so the description does not need to compensate. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns 'top prediction markets ranked by 24h volume spike or biggest YES/NO price swing' and mentions specific sources (Polymarket, HIP-4). This is a specific verb+resource combination that distinguishes it from sibling tools like get_markets or get_volume_spikes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for finding breaking news bets and momentum plays, which gives clear context. It does not explicitly state when not to use alternatives, but given sibling tools with distinct purposes (e.g., get_markets for listing, get_volume_spikes for volume-only), the purpose is differentiated enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_odds (Get Odds) [Grade: A, Read-only]
Current YES/NO prices and implied probability for any Polymarket or HIP-4 market token.
| Name | Required | Description | Default |
|---|---|---|---|
| platform | Yes | Platform the market is on: "polymarket" or "hip4" | |
| identifier | Yes | For Polymarket: the token_id of the YES or NO outcome. For HIP-4: the base asset ticker (e.g. "BTC") |
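For context, a YES share that pays $1 on a positive resolution trades at roughly the market's probability estimate, so the implied probability is essentially the price rescaled. A minimal sketch:

```python
def implied_probability(yes_price: float) -> float:
    """Convert a YES share price (0-1) to an implied probability in percent.

    On prediction markets like Polymarket, a YES share paying $1 on a
    positive resolution trades near the market's probability estimate.
    """
    if not 0.0 <= yes_price <= 1.0:
        raise ValueError("price must be in [0, 1]")
    return yes_price * 100.0

print(implied_probability(0.62))  # 62.0 -> market implies ~62% chance of YES
```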
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, so the description correctly complements by stating it returns prices and probability. No contradiction, and it adds context about what data is fetched. However, it could mention that no side effects occur.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, complete sentence that immediately conveys the tool's purpose. No redundant or unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with two well-described parameters and no output schema, the description sufficiently explains the return value (prices and probability) and the supported platforms. It is complete enough for an AI to understand its function.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already fully describes both parameters with 100% coverage. The description does not add additional meaning beyond what the schema provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves current YES/NO prices and implied probability for Polymarket or HIP-4 market tokens. It names the specific platforms and resources, distinguishing it from sibling tools like get_orderbook or get_market_context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving odds from Polymarket or HIP-4 but does not provide explicit guidance on when to use this tool versus alternatives like get_orderbook or get_market_context. No when-not-to-use or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_oi_near_cap (Get OI Near Cap) [Grade: A, Read-only]
Lists Hyperliquid perps that are currently at the open interest cap — new long positions cannot be opened. Use as a blacklist to avoid getting rejected on entry.
No parameters
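The blacklist usage amounts to set subtraction. A sketch assuming the tool returns a plain list of tickers (the actual response shape is not documented):

```python
def filter_capped(candidates, capped_coins):
    """Drop coins at the OI cap before attempting long entries.

    `capped_coins` stands in for this tool's output; the exact response
    shape (assumed here to be a list of tickers) is not documented.
    """
    capped = set(capped_coins)
    return [c for c in candidates if c not in capped]

print(filter_capped(["BTC", "DOGE", "WIF"], ["DOGE", "WIF"]))  # ['BTC']
```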
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readonly and open-world behavior. The description adds that the listing implies positions cannot be opened, providing functional context beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise, front-loaded sentences: first states what it does, second explains usage. No unnecessary words, earning its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple with no output schema. The description covers purpose and usage, though it does not specify return format (likely list of perp names). However, given the blacklist use case, the information is sufficient for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so the description adds no parameter info. With 0 parameters, baseline is 4 per rules, and no additional explanation is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists Hyperliquid perps at open interest cap, specifying that new long positions cannot be opened. It distinguishes itself from siblings like 'get_open_interest' by focusing on capacity limits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly instructs to use as a blacklist before entering long positions to avoid rejection. It does not discuss when not to use or alternatives, but the guidance is direct and actionable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_open_interest (Get Open Interest) [Grade: A, Read-only]
Total open interest in USD and contracts for Hyperliquid perpetuals. Rising OI + rising price = strong trend; rising OI + falling price = short build-up.
| Name | Required | Description | Default |
|---|---|---|---|
| coins | No | List of asset tickers to fetch, e.g. ["BTC", "SOL"]. Omit to fetch all available assets. |
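The two stated heuristics map to a small classifier. Note the falling-OI branch below is a common reading added for completeness, not something the description states:

```python
def oi_price_regime(oi_change: float, price_change: float) -> str:
    """Classify the OI/price combination per the heuristic above."""
    if oi_change > 0 and price_change > 0:
        return "strong trend (new longs)"
    if oi_change > 0 and price_change < 0:
        return "short build-up"
    if oi_change < 0:
        # Assumption: falling OI is commonly read as positions unwinding.
        return "positions unwinding"
    return "neutral"

print(oi_price_regime(+1.2e6, +850.0))  # strong trend (new longs)
print(oi_price_regime(+9.0e5, -420.0))  # short build-up
```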
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true. The description adds interpretive context but does not disclose additional behavioral traits like data freshness, rate limits, or response structure beyond USD and contracts.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with core purpose, no unnecessary words. The second sentence adds valuable interpretive guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one well-documented parameter and no output schema, the description covers the purpose and interpretation. Minor omission: not specifying return format per coin vs aggregate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'coins', with a clear description. The tool description does not add further meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns total open interest in USD and contracts for Hyperliquid perpetuals, and adds interpretive guidance. However, it does not explicitly differentiate from sibling tools like get_oi_near_cap.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining open interest data and interpreting trends, but provides no explicit guidance on when to use versus alternatives or when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_orderbook (Get Orderbook) [Grade: A, Read-only]
Full orderbook depth (bids + asks) for any Polymarket market token. Shows liquidity at each price level.
| Name | Required | Description | Default |
|---|---|---|---|
| token_id | Yes | Polymarket token ID for the YES or NO side of a market |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true, indicating safe, read-only behavior. The description adds that it shows liquidity at each price level, which is consistent. No contradictions, but no additional behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences, 17 words, front-loaded with the key purpose ('Full orderbook depth'). Every word adds value with no unnecessary fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no output schema) and sufficient annotations, the description fully covers what the tool does and what it returns. No additional information is needed for proper usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers 100% of the single parameter (token_id) with a description. The tool description adds minimal extra context ('for any Polymarket market token') that slightly reinforces but does not significantly enhance understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the full orderbook depth (bids and asks) for any Polymarket market token and shows liquidity at each price level. It uses a specific verb ('get') and resource ('orderbook'), and distinguishes from sibling tools which focus on different data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving orderbook data but does not provide explicit guidance on when to use this tool versus alternatives or mention exclusions. The context is clear enough for a standard data retrieval tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pm_hl_divergences (Get PM/HL Divergences) [Grade: A, Read-only]
Markets where Polymarket implied probability diverges from Hyperliquid perpetual funding direction — e.g. PM prices bullish outcome but HL funding shows crowded longs (bearish pressure). The hardest signal to compute manually.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of divergences to return (default: 15) | |
| min_pct | No | Minimum divergence percentage between PM implied probability and HL pricing to flag (default: 10%) |
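One plausible reading of the divergence test, treating positive funding as crowded longs (bearish pressure) and requiring the PM probability to sit at least `min_pct` points from even odds; the server's actual metric is not documented in the listing:

```python
def is_divergent(pm_yes_prob: float, hl_funding_rate: float,
                 min_pct: float = 10.0) -> bool:
    """Flag PM probability vs HL funding disagreement, per the example above.

    Illustration only: positive funding (crowded longs, bearish pressure)
    disagrees with a bullish PM probability, and vice versa. The PM
    probability must sit at least `min_pct` points away from 50%.
    """
    pm_bullish = pm_yes_prob > 50.0 + min_pct
    pm_bearish = pm_yes_prob < 50.0 - min_pct
    crowded_longs = hl_funding_rate > 0   # bearish pressure
    crowded_shorts = hl_funding_rate < 0  # bullish pressure
    return (pm_bullish and crowded_longs) or (pm_bearish and crowded_shorts)

print(is_divergent(72.0, 0.0004))  # True: PM bullish, funding crowded long
```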
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and openWorldHint. The description adds that the signal is 'hardest to compute manually', which is a claim of complexity but not a behavioral trait. No additional disclosure about rate limits, data freshness, or other behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no redundancy. The core purpose is stated first, followed by an example and a value statement. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool that computes cross-platform divergences, the description captures the essence. The lack of output schema is acceptable given openWorldHint. It could optionally describe the return format, but the current level is sufficient for most agents.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for limit and min_pct. The description reinforces the min_pct concept but adds no new semantics beyond what the schema provides. Baseline 3 is appropriate given full schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns markets where Polymarket implied probability diverges from Hyperliquid funding direction, with a concrete example. It distinguishes itself from sibling tools like get_funding_rates and get_funding_outliers by focusing on divergences rather than raw rates or outliers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for identifying divergences but does not explicitly state when to use this tool over alternatives. No exclusions or when-not-to-use guidance is provided, leaving the agent to infer context from the description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_signals (Get Signals) [Grade: A, Read-only]
Detect divergence signals between Hyperliquid perpetual funding/OI sentiment and HIP-4 on-chain prediction market odds. Returns BULLISH/BEARISH/DIVERGENCE signal with reasoning — e.g. perps long-biased while prediction market prices a decline.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Ticker of the asset to analyze, e.g. "BTC", "ETH", "SOL" |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations (readOnlyHint, openWorldHint) indicate safe read and dynamic data. The description adds that the tool returns signals with reasoning but does not disclose additional behavioral traits like rate limits, caching, or error conditions. It provides moderate value beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, directly stating purpose and output. Every word is necessary; no fluff or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single parameter and no output schema, the description explains the return value (signal type and reasoning). Missing are examples, error cases, or what happens when no signal is detected. Still reasonably complete for a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single 'coin' parameter, and the description only adds ticker examples already implied by the schema. The baseline is 3, and no additional parameter context is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool detects divergence signals between Hyperliquid perpetual funding/OI sentiment and HIP-4 prediction market odds, and specifies the return types (BULLISH/BEARISH/DIVERGENCE) with reasoning. This differentiates it from sibling tools like get_pm_hl_divergences.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining divergence signals but does not explicitly state when to use this tool versus related siblings such as get_pm_hl_divergences or get_hip4_vs_pm_arb. No exclusions or alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_top_funding_rates (Get Top Funding Rates) [Grade: A, Read-only]
Top Hyperliquid perps ranked by absolute funding rate, with OI and annualized yield. Useful for finding the most overcrowded longs/shorts and carry opportunities.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of top results to return (default: 10) | |
| min_abs_rate | No | Minimum absolute funding rate to include, e.g. 0.0001. Omit to include all. |
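The annualized yield likely follows from Hyperliquid's hourly funding cadence. A sketch of the simple (non-compounded) conversion under that assumption:

```python
def annualized_funding_yield(hourly_rate: float) -> float:
    """Annualize an hourly funding rate as a simple (non-compounded) yield.

    Assumes Hyperliquid's hourly funding cadence: 24 * 365 payments/year.
    """
    return hourly_rate * 24 * 365 * 100  # percent per year

# A 0.01% hourly rate works out to roughly 87.6% simple annualized carry.
print(f"{annualized_funding_yield(0.0001):.1f}%")  # 87.6%
```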
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds value beyond annotations by specifying return fields (OI, annualized yield) and ranking logic. Annotations already declare readOnlyHint and openWorldHint, so no contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words, front-loaded with purpose and key attributes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple tool with good annotations and no output schema, the description covers purpose, output fields, and use cases comprehensively. No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, with descriptions for limit and min_abs_rate. The description does not add significant new meaning beyond the schema, but it does contextualize the parameters (e.g., limit controls the number of top results).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns top Hyperliquid perps ranked by absolute funding rate, with OI and annualized yield. Explicitly distinguishes from siblings like get_funding_rates by focusing on top and absolute ranking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides use cases: finding overcrowded longs/shorts and carry opportunities. Does not explicitly state when not to use it or what alternatives exist, but the context is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_volume_spikes (Get Volume Spikes) · grade A · read-only
Polymarket markets with abnormal 24h volume vs their 7-day daily average. Volume spikes typically precede news events or informed positioning.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return | 15 |
| min_ratio | No | Minimum ratio of 24h volume vs 7-day daily average to qualify as a spike | 3x |
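The spike ratio is presumably 24h volume divided by the market's average daily volume over the trailing week, so a min_ratio of 3 keeps only markets trading at roughly three times their usual pace. An illustrative request that adjusts both parameters:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_volume_spikes",
    "arguments": { "limit": 10, "min_ratio": 5 }
  }
}
```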
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and openWorldHint, indicating a safe, read-only call against external data. The description adds value by explaining that results indicate abnormal volume and what that typically signifies, enhancing behavioral understanding beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at two sentences, with the core action front-loaded. Every sentence serves a purpose, with no fluff or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With only two parameters and the schema fully documented, the description sufficiently explains the concept of the output. However, since no output schema exists, the lack of detail on return fields (e.g., market identifiers, volume values) slightly reduces completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are fully described in the schema. The description does not add additional meaning or usage details for `limit` or `min_ratio`, sticking to the baseline of schema sufficiency.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool's function: returning Polymarket markets with abnormal 24h volume against their 7-day average. The verb 'get' and resource 'volume spikes' are specific and distinct from sibling tools, making purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context for use by noting that volume spikes 'typically precede news events or informed positioning,' implying the tool is for detecting potential news-driven activity. However, it does not explicitly state when not to use it or compare to alternatives, limiting guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_whale_convergence (Get Whale Convergence) · grade A · read-only
Detect simultaneous whale activity on both Hyperliquid perps and Polymarket for an asset. Flags convergence events where large perp trades and large prediction market positions align — a leading indicator of informed positioning.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Ticker of the asset to analyze, e.g. "BTC", "ETH" | |
| window_minutes | No | Lookback window in minutes for whale trade detection (1–60) | 15 |
| min_notional_usdc | No | Minimum trade size in USDC to qualify as whale activity | 100,000 |
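An illustrative request checking BTC for convergence over a 30-minute window with a higher-than-default $250,000 threshold, assuming the standard MCP tools/call envelope:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_whale_convergence",
    "arguments": {
      "coin": "BTC",
      "window_minutes": 30,
      "min_notional_usdc": 250000
    }
  }
}
```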
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint and openWorldHint. The description adds value by explaining that the tool detects simultaneous activity across two platforms and characterizes the output as a leading indicator, which is not evident from annotations alone.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two well-structured sentences. The first sentence states the core function, and the second elaborates on its significance. No unnecessary words or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description effectively conveys the tool's purpose and value but does not specify the output format or data structure. Since no output schema exists, the agent is left to infer what a convergence event looks like. Minor gap for a detection tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed descriptions for all three parameters. The description does not add additional parameter-level information beyond what the schema provides, so baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs 'Detect' and 'Flags' and clearly identifies the resource as simultaneous whale activity on Hyperliquid perps and Polymarket. It distinguishes from sibling tools like get_whale_trades and get_whale_positions by focusing on convergence events as a leading indicator.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by stating the tool flags convergence events as a leading indicator of informed positioning. However, it does not explicitly contrast with alternatives or specify when not to use other tools, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_whale_positions (Get Whale Positions) · grade A · read-only
Largest current position holders in a Polymarket prediction market. Shows wallet address, position size in USDC, and side (YES/NO).
| Name | Required | Description | Default |
|---|---|---|---|
| condition_id | Yes | Polymarket condition ID for the market to inspect | |
| min_size_usdc | No | Minimum position size in USDC to include in results | 1,000 |
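A hedged sketch of a request; the condition_id value is a placeholder, since real Polymarket condition IDs are long market-specific hex strings:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_whale_positions",
    "arguments": {
      "condition_id": "0x<condition-id>",
      "min_size_usdc": 10000
    }
  }
}
```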
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and openWorldHint=true, so the description's statement that it 'shows' data is consistent. The description adds the context that results are current and lists the returned fields, but does not disclose behavioral traits like sorting order, pagination, or whether the data is a snapshot. Since the annotations already cover safety, the description adds some context, but not rich detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two sentences that efficiently convey the core purpose and output details. It is front-loaded with the main action and contains no fluff or redundant information, earning a top score for conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only two parameters, no output schema, and simple annotations, the description covers the essentials: what the tool does, its required input (condition_id), and the output fields. The only small gap is that it never states results are sorted by size descending, though 'largest' implies it; otherwise it is complete for a simple list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with both parameters (condition_id and min_size_usdc) clearly explained. The description adds no additional semantic meaning beyond what the schema provides. Per the guidelines, high schema coverage sets a baseline score of 3, which is appropriate here.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the largest position holders in a Polymarket prediction market, specifying the output fields (wallet address, position size in USDC, YES/NO side). This distinguishes it from sibling tools like get_whale_trades (trade history) and get_whale_convergence (cross-platform convergence events).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide guidance on when to use this tool versus alternatives, such as get_whale_trades or get_whale_convergence. It lacks explicit 'when to use' or 'when not to use' context, leaving the agent to infer usage solely from the tool name and purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_whale_trades (Get Whale Trades) · grade A · read-only
Recent large trades on Hyperliquid perps above a notional threshold. Includes side (long/short), size, price, and timestamp.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Asset ticker to fetch whale trades for, e.g. "BTC", "ETH" | |
| min_notional_usdc | No | Minimum trade size in USDC to qualify as a whale trade | 50,000 |
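An illustrative request for recent ETH whale trades at double the default notional threshold:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "get_whale_trades",
    "arguments": { "coin": "ETH", "min_notional_usdc": 100000 }
  }
}
```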
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and openWorldHint=true, and the description introduces no behavioral surprises. No destructive behavior or side effects are mentioned, consistent with the annotations; there is no contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each earning its place: first defines the tool's purpose, second lists returned fields. No redundant or extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no output schema, the description covers the key return fields (side, size, price, timestamp). Ordering and limit details are missing, but it is adequate for most use cases given the sibling tool context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with clear descriptions for both parameters (coin, min_notional_usdc). The description's phrase 'above a notional threshold' aligns with min_notional_usdc but adds no semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Recent large trades on Hyperliquid perps above a notional threshold' with specific fields (side, size, price, timestamp). This distinguishes it from siblings like get_whale_positions and get_whale_convergence by focusing on trades with a threshold.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for surfacing large trades above a threshold but does not explicitly state when to use it versus alternatives (e.g., get_whale_positions for current holdings). No when-not-to-use or exclusion criteria are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_markets (Search Markets) · grade A · read-only
Full-text search across all Polymarket and HIP-4 prediction markets. Returns ranked results with current odds.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return (1–50) | 10 |
| query | Yes | Keywords to search in market names and descriptions, e.g. "bitcoin ETF", "US election", "Fed pivot" | |
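An illustrative search request using the same assumed tools/call envelope (the query text is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search_markets",
    "arguments": { "query": "bitcoin ETF", "limit": 5 }
  }
}
```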
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and openWorldHint. The description adds that results are ranked and include current odds, but does not disclose further behavioral details like ranking criteria, pagination, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, front-loaded with the core purpose; no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose and return value, but with no output schema it could be more complete: ranking criteria, pagination behavior, and result ordering go unexplained. Adequate but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with detailed parameter descriptions that include example queries. The tool description itself adds no significant meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Full-text search across all Polymarket and HIP-4 prediction markets' with a specific verb and resource, and distinguishes from siblings like get_markets by focusing on search and ranking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for keyword-based search but does not provide explicit when-to-use or when-not-to-use guidance, nor does it mention alternatives among the many sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Once verified, you can:
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.