
Server Details

Polymarket + HIP-4 + Hyperliquid perps for Claude. 22 tools, signals & arb. Free tier.

Status: Healthy
Transport: Streamable HTTP
Repository: reejakdev/predmcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.9/5 across 23 of 23 tools scored. Lowest: 3/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes (e.g., funding rates vs open interest vs liquidation clusters). However, get_funding_rates and get_top_funding_rates partially overlap, and get_movers/get_volume_spikes are similar. Descriptions help differentiate, but minor ambiguity remains.

Naming Consistency: 4/5

21 of 23 tools follow a 'get_<noun_phrase>' pattern in snake_case. Two tools use different verbs (create_api_key, search_markets), but the style is consistent. The deviation is minor and understandable given different actions.

Tool Count: 4/5

23 tools is on the higher side but appropriate for a server covering both Hyperliquid perps and Polymarket prediction markets. The tools cover a broad analysis surface without feeling bloated, though a few could be consolidated.

Completeness: 4/5

The tool set covers core analysis needs: funding, OI, liquidations, whale data, divergences, and market search. Lacks write operations (e.g., placing bets) and historical data, but these align with the stated analytical purpose. Minor gaps exist.

Available Tools

23 tools
create_api_key - Create API Key (Grade: B)

Generate a free PredMCP API key instantly — no email required. Returns the key and ready-to-use MCP config. Call this first if you do not have a key yet. Free tier: 100 calls/day.

Parameters (JSON Schema)
email (required): Your email address — used to identify your key and for account recovery
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations beyond readOnlyHint=false. The description lacks details on side effects such as key overwrite, per-key rate limits, or the effect of repeated calls. Missing behavioral cues for a mutation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise: three short sentences front-loaded with key info. Efficient but contains a factual error.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 1-param tool with no output schema, description covers purpose, usage timing, and tier limit. But lacks details on return format, error handling, and effect of repeated calls.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description covers the email parameter, but description falsely states 'no email required,' undermining parameter semantics. Schema coverage is 100% but the conflict lowers the score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool generates an API key and returns config, but contradicts schema by claiming 'no email required' while email is required in the schema. This reduces clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Call this first if you do not have a key yet,' indicating when to use. Free tier limit provides context, but no mention of alternatives or when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_funding_outliers - Get Funding Outliers (Grade: A)
Read-only

Hyperliquid perps whose current funding rate deviates significantly from their 7-day average. A spike vs baseline is a stronger signal than raw rate.

Parameters (JSON Schema)
days (optional): Historical window in days to compute the baseline average (default: 7)
min_deviation_factor (optional): Minimum ratio of |current_rate| / |avg_rate| to qualify as outlier (default: 2x)
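The outlier criterion implied by the parameters above can be sketched in a few lines. This is an illustration of the documented ratio, not the server's actual implementation; the function name and the zero-baseline guard are assumptions:

```python
def is_funding_outlier(current_rate: float, avg_rate: float,
                       min_deviation_factor: float = 2.0) -> bool:
    """Flag a perp whose current funding rate deviates from its baseline.

    Mirrors the documented ratio |current_rate| / |avg_rate| >= factor.
    """
    if avg_rate == 0:
        # No meaningful baseline (assumed handling): any nonzero rate qualifies.
        return current_rate != 0
    return abs(current_rate) / abs(avg_rate) >= min_deviation_factor
```

With the default 2x factor, a current rate of 0.02 against a 7-day average of 0.005 (a 4x spike) would be flagged, while 0.01 against 0.008 would not.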
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds behavioral context by explaining the deviation calculation and that a spike vs. baseline is a stronger signal. No contradictions. It doesn't cover edge cases but adds value beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero wasted words. Front-loaded with the main action ('Hyperliquid perps whose current funding rate deviates significantly from their 7-day average'). Concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description should explain return format; it only mentions 'perps' without structure or pagination. For a simple tool, it's adequate but incomplete. Complexity is low, so score 3.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers both parameters with 100% description coverage (days and min_deviation_factor with defaults and ranges). The description mentions the 7-day average but does not add new semantic meaning beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns Hyperliquid perps with funding rates deviating from their 7-day average, distinguishing it from siblings like get_funding_rates (raw rates) and get_top_funding_rates (top rates). It emphasizes that a spike vs. baseline is a stronger signal, clarifying its specific use case.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for detecting outliers but does not explicitly state when to use this tool versus siblings (e.g., get_funding_rates, get_top_funding_rates). No when-not or alternative guidance is provided, leaving some ambiguity for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_funding_rates - Get Funding Rates (Grade: A)
Read-only

Current funding rates for Hyperliquid perpetuals. Positive rate = longs pay shorts (bearish bias); negative = shorts pay longs (bullish bias).

Parameters (JSON Schema)
coins (optional): List of asset tickers to fetch, e.g. ["BTC", "ETH"]. Omit to fetch all available assets.
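The polarity convention stated in the description (positive = longs pay shorts, negative = shorts pay longs) can be captured in a small helper; the function name and labels are illustrative:

```python
def funding_bias(rate: float) -> str:
    """Interpret a funding rate's sign per the tool description."""
    if rate > 0:
        # Longs pay shorts: crowded longs, bearish bias.
        return "bearish"
    if rate < 0:
        # Shorts pay longs: crowded shorts, bullish bias.
        return "bullish"
    return "neutral"
```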
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint and openWorldHint, indicating safe, unfiltered data. Description adds the interpretation of rate polarity (positive vs negative), providing behavioral context beyond annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. First sentence states purpose, second provides interpretation. Efficiently front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read tool with good annotations, the description covers purpose and meaning. However, it lacks details on the return format (e.g., fields, units), since there is no output schema. Still adequate for the complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage for the single optional parameter 'coins', which is well described. Description does not add extra semantics, so baseline score of 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool retrieves current funding rates for Hyperliquid perpetuals and explains the meaning of positive/negative rates. This distinguishes it from sibling tools like get_top_funding_rates or get_funding_outliers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies use when current rates are needed, but does not explicitly state when not to use or mention alternatives. With many sibling tools (e.g., get_top_funding_rates, get_funding_outliers), guidance would be helpful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_hip4_vs_pm_arb - Get HIP-4 vs PM Arb (Grade: A)
Read-only

Finds the same underlying market priced on both HIP-4 (on-chain Hyperliquid) and Polymarket, flagging spreads above threshold. A spread means one venue is mispriced relative to the other.

Parameters (JSON Schema)
min_spread_pct (optional): Minimum spread between HIP-4 and Polymarket YES prices to flag (percentage points, default: 3)
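The spread check implied above can be sketched as follows. YES prices on both venues are probabilities in [0, 1], so a 0.05 gap is 5 percentage points; the function name is an assumption for illustration:

```python
def flag_arb(hip4_yes: float, pm_yes: float,
             min_spread_pct: float = 3.0) -> bool:
    """Flag a cross-venue spread at or above the threshold.

    Spread is measured in percentage points between the two YES prices.
    """
    spread_pct = abs(hip4_yes - pm_yes) * 100
    return spread_pct >= min_spread_pct
```

For example, HIP-4 YES at 0.62 vs Polymarket YES at 0.57 is a 5-point spread and would be flagged at the default 3-point threshold.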
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint and openWorldHint. The description adds that it flags spreads above a threshold and explains what a spread means, providing useful context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, efficient, and front-loaded with the purpose. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter schema and annotations, the description provides enough context. No output schema exists, but the purpose is clear. Could mention output format, but not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description mentions 'spreads above threshold' but does not add meaning beyond the parameter's schema, which already has a detailed description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool finds the same underlying market on HIP-4 and Polymarket and flags spreads above a threshold. It is specific and distinguishes from sibling tools focused on other metrics or venues.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for detecting arbitrage opportunities between HIP-4 and Polymarket. It does not explicitly state when not to use or name alternatives, but the context is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_hl_funding_pm_correlation - Get HL Funding / PM Correlation (Grade: A)
Read-only

Pairs each Hyperliquid asset (with notable funding) with related Polymarket markets, showing whether funding direction and PM probability are aligned or divergent.

Parameters (JSON Schema)
limit (optional): Number of correlated pairs to return (default: 15)
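The aligned/divergent classification the description mentions can be sketched under two assumptions (both labeled here, neither confirmed by the source): the Polymarket market is framed so that YES > 0.5 is bullish for the asset, and negative funding (shorts pay longs) counts as a bullish funding direction:

```python
def alignment(funding_rate: float, pm_yes_prob: float) -> str:
    """Classify whether funding direction and PM probability agree.

    Assumes YES > 0.5 is bullish framing and negative funding is bullish;
    both are illustrative assumptions, not the server's logic.
    """
    funding_bullish = funding_rate < 0
    pm_bullish = pm_yes_prob > 0.5
    return "aligned" if funding_bullish == pm_bullish else "divergent"
```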
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint and openWorldHint. The description adds context about the correlation logic (alignment/divergence), which is beyond what annotations provide. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

One concise sentence with no waste. Front-loaded with the core action and output expectation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple parameter and read-only annotations, the description is complete. It explains the output's nature (aligned or divergent) sufficiently for an agent to know what to expect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'limit' is fully described in the schema (100% coverage). The description does not add extra meaning beyond the schema's description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (pairs), resources (Hyperliquid assets and Polymarket markets), and output (showing alignment/divergence). It distinguishes from siblings like get_funding_outliers and get_pm_hl_divergences by focusing on correlation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage (when correlation data is needed) but provides no explicit guidance on when to use this tool versus siblings, nor any exclusion criteria or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_late_game_sports - Get Late Game Sports (Grade: A)
Read-only

Sports prediction markets on Polymarket closing within a few hours with a high-certainty leading outcome. Targets near-certain resolution for late-game positioning.

Parameters (JSON Schema)
hours_max (optional): Maximum hours until market closes (default: 6h)
certainty_pct (optional): Minimum leading outcome probability as percentage, e.g. 85 = 85% (default: 85)
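The filtering these parameters describe can be sketched as below. Note the unit: certainty_pct is a percentage (85 = 85%), unlike get_markets_near_resolution's min_prob, which is a fraction (0.7 = 70%). The field names on each market record are hypothetical:

```python
def late_game_sports(markets, hours_max: float = 6,
                     certainty_pct: float = 85):
    """Keep sports markets closing within hours_max whose leading
    outcome probability meets certainty_pct (a percentage)."""
    threshold = certainty_pct / 100.0  # convert percent to fraction
    return [
        m for m in markets
        if m["hours_to_close"] <= hours_max
        and m["leading_prob"] >= threshold
    ]
```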
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and dynamic nature; description adds value by specifying the market type (sports, late-game, high certainty). It does not disclose any additional behavioral traits beyond what annotations provide, but the context is useful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words, front-loaded with key purpose. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given annotations and schema, the description fully explains what the tool returns and its filtering criteria. No output schema is needed for this list-based tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema coverage is 100% with clear parameter descriptions. The tool description does not add meaning beyond the schema, meeting the baseline but not exceeding it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves sports prediction markets on Polymarket that are closing soon with a high-certainty leading outcome, using specific verbs and resources. It distinguishes itself from siblings like get_markets_near_resolution by focusing on late-game positioning and high certainty.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for finding near-certain, soon-closing sports markets, providing clear context. However, it does not explicitly state when not to use it or mention alternatives among the many sibling tools, leaving some ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_liquidation_clusters - Get Liquidation Clusters (Grade: A)
Read-only

Estimated price levels where mass liquidations concentrate for a given Hyperliquid perp, computed from mark price and standard leverage multiples. Higher nearby orderbook liquidity = stronger support/resistance.

Parameters (JSON Schema)
coin (required): Asset ticker to analyze, e.g. "BTC", "ETH", "SOL"
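The "computed from mark price and standard leverage multiples" phrasing suggests the classic first-order approximation, sketched below. It ignores fees and maintenance margin, and the leverage set is an assumption, so treat it as a rough model of what the tool might return, not its actual formula:

```python
def liquidation_levels(mark_price: float,
                       leverages=(5, 10, 20, 50)) -> dict:
    """Approximate liquidation prices at standard leverage multiples.

    A long at leverage L is liquidated by roughly a 1/L drop from entry;
    a short by roughly a 1/L rise. Maintenance margin is ignored.
    """
    return {
        lev: {
            "long_liq": mark_price * (1 - 1 / lev),
            "short_liq": mark_price * (1 + 1 / lev),
        }
        for lev in leverages
    }
```

At a mark price of 100, this places 10x long liquidations near 90 and 20x short liquidations near 105; clusters form where many positions share such levels.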
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true. The description adds valuable context on computation and output interpretation (strength based on liquidity), exceeding what annotations alone provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with core function, no redundancy. Every sentence contributes meaning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a single parameter and no output schema, the description adequately explains input and output interpretation. Minor gap: does not specify output format details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the description does not add significant new meaning beyond the schema's 'Asset ticker to analyze'. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'get' and specific resource 'liquidation clusters' for a Hyperliquid perp, and differs from sibling tools like get_orderbook or get_funding_rates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for identifying liquidation concentration levels but does not explicitly state when to use this tool over alternatives or exclude other contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_market_context - Get Market Context (Grade: A)
Read-only

Unified intelligence snapshot for any topic, asset, or keyword: all matching Polymarket and HIP-4 prediction markets combined with live Hyperliquid perp data (price, funding, OI). One call replaces 3+ separate lookups.

Parameters (JSON Schema)
query (required): Topic, asset, or keyword to look up — e.g. "BTC", "Iran", "Fed rate cut", "Trump"
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and open-world behavior. Description adds useful specifics about data sources (Polymarket, HIP-4, Hyperliquid) and data types (price, funding, OI), without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers the key scope and data sources, but lacks an explicit listing of output fields. However, 'snapshot' sufficiently implies a summary, and no output schema is provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage for the only parameter (query) with clear examples. Description adds no extra parameter details, but schema is sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides a unified intelligence snapshot combining Polymarket, HIP-4, and Hyperliquid data, distinguishing it from sibling tools that focus on individual data points.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'One call replaces 3+ separate lookups,' guiding use when a consolidated view is needed. Does not explicitly state when not to use, but context implies alternatives for granular needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_markets - Get Markets (Grade: A)
Read-only

Live prediction markets from Polymarket and/or HIP-4, sorted by volume. Returns title, YES/NO prices, 24h volume, and expiry.

Parameters (JSON Schema)
limit (optional): Number of markets to return (1–100, default: 20)
active (optional): Filter to active/open markets only (default: true)
platform (optional): Data source: "polymarket", "hip4", or "all" (default: "all")
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and openWorldHint=true. Description adds that results are sorted by volume and lists specific return fields, providing useful behavioral context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff, front-loaded with purpose and key details. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 3 parameters, complete schema descriptions, and a description that lists return fields despite no output schema, the tool is reasonably complete for its complexity. Lacks usage guidance but otherwise sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage; all three parameters are well-documented. Tool description does not add significant new information about parameters beyond what the schema provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches live prediction markets from specific sources (Polymarket and/or HIP-4), sorts by volume, and lists returned fields (title, prices, volume, expiry). This differentiates it from siblings like search_markets or get_markets_near_resolution.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like search_markets or get_markets_near_resolution. Lacks explicit when-to-use or when-not-to-use context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_markets_near_resolution - Get Markets Near Resolution (Grade: A)
Read-only

Polymarket markets resolving within the next N hours with a leading probability above threshold. Useful for resolution arbitrage and last-minute positioning.

Parameters (JSON Schema)
hours (optional): Maximum hours until resolution (default: 24h, max: 168h = 7 days)
min_prob (optional): Minimum leading outcome probability to include (default: 0.7 = 70%)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true. Description adds context about filtering but no additional behavioral traits beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. First sentence describes the tool, second sentence explains its utility.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Good coverage given low complexity, schema handles parameters, annotations present. Minor omission: no indication of what happens with no results, but not critical.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. Description does not add meaning beyond the schema, as the schema already describes hours and min_prob with defaults and ranges.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it retrieves markets resolving soon with a probability threshold. Distinguishes from siblings like get_markets and get_late_game_sports by focusing on resolution time.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly mentions usefulness for resolution arbitrage and last-minute positioning, providing context. However, it does not explicitly state when not to use it or compare to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_movers (Get Movers): A
Read-only

Top prediction markets ranked by 24h volume spike or biggest YES/NO price swing. Surfaces breaking news bets and momentum plays across Polymarket and HIP-4.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Number of top movers to return (1–20) | 10
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and open-world hints. The description adds that results are based on 24h volume spike or price swing and across specific platforms, but does not detail update frequency or other behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose, and contains no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with informative annotations, the description adequately explains what is returned and the scope. Output-format details are missing, though expectations are lower since the tool publishes no output schema to document them.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description of the limit parameter. The tool description adds no additional parameter semantics beyond what the schema provides, meeting the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it ranks top prediction markets by 24h volume spike or YES/NO price swing, surfacing breaking news bets and momentum plays. It specifies the scope (Polymarket and HIP-4) and distinguishes it from siblings like get_volume_spikes or get_funding_rates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for finding trending markets and breaking news bets, but does not explicitly state when not to use it or mention alternative tools for related tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_odds (Get Odds): A
Read-only

Current YES/NO prices and implied probability for any Polymarket or HIP-4 market token.

Parameters (JSON Schema)
Name | Required | Description | Default
platform | Yes | Platform the market is on: "polymarket" or "hip4" | -
identifier | Yes | For Polymarket: the token_id of the YES or NO outcome. For HIP-4: the base asset ticker (e.g. "BTC") | -
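As a concrete sketch of invocation, the two required arguments above can be wrapped in a standard MCP tools/call envelope. The helper below is hypothetical (the server only defines the tool and its schema); the envelope shape follows the MCP JSON-RPC convention.

```python
import json

def build_get_odds_request(platform: str, identifier: str, request_id: int = 1) -> dict:
    # Validate the documented enum before sending; the server would reject
    # anything other than "polymarket" or "hip4" anyway.
    if platform not in ("polymarket", "hip4"):
        raise ValueError('platform must be "polymarket" or "hip4"')
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_odds",
            "arguments": {"platform": platform, "identifier": identifier},
        },
    }

# For HIP-4 the identifier is a base asset ticker rather than a token_id.
print(json.dumps(build_get_odds_request("hip4", "BTC"), indent=2))
```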
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and openWorldHint, so safety profile is clear. Description adds context that it returns 'current' prices and implied probability, which is valuable behavioral information beyond annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of 15 words, front-loaded with key action and resource. No wasted words; every part adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple two-parameter tool with no output schema, description adequately specifies what it returns (prices and probability) and the platforms it covers. Could mention dynamic nature more explicitly, but still complete for typical use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for both parameters, clearly documenting platform enum and identifier types. Description does not add extra parameter meaning beyond what schema already provides, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'get' and clearly identifies the resource ('YES/NO prices and implied probability') and scope ('any Polymarket or HIP-4 market token'). Clearly distinguishes from sibling tools like get_markets or get_orderbook.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage when needing current odds from Polymarket or HIP-4, but provides no explicit guidance on when not to use it or alternatives among siblings. No exclusions or conditions stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_oi_near_cap (Get OI Near Cap): A
Read-only

Lists Hyperliquid perps that are currently at the open interest cap — new long positions cannot be opened. Use as a blacklist to avoid getting rejected on entry.

Parameters (JSON Schema)
Name | Required | Description | Default

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, and the description adds behavioral context by explaining that new long positions cannot be opened on listed perps, which is beyond the annotation details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences: the first explains what the tool does, the second provides usage guidance. It is concise, front-loaded, and every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains the tool's purpose and usage but does not specify output fields. However, for a simple list tool, the context is sufficient for selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters in the input schema, so the description carries no parameter info. However, it adds meaning about the output (lists perps at cap), which compensates for the lack of parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists Hyperliquid perps at the open interest cap, distinguishing it from sibling tools like get_open_interest and get_markets by focusing on those blocked for new long positions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a clear use case: 'Use as a blacklist to avoid getting rejected on entry.' It implicitly tells when to use it (before opening longs), though it doesn't explicitly mention when not to use it or list alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_open_interest (Get Open Interest): A
Read-only

Total open interest in USD and contracts for Hyperliquid perpetuals. Rising OI + rising price = strong trend; rising OI + falling price = short build-up.

Parameters (JSON Schema)
Name | Required | Description | Default
coins | No | List of asset tickers to fetch, e.g. ["BTC", "SOL"]. Omit to fetch all available assets. | -
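The interpretive rule in the description (rising OI with rising price versus rising OI with falling price) can be sketched as a small classifier. The function and its labels are illustrative only; the tool itself returns raw OI figures, not signals.

```python
def classify_oi_signal(oi_change_pct: float, price_change_pct: float) -> str:
    # Rule of thumb from the tool description; "neutral" covers falling OI,
    # which typically means positions are being closed rather than opened.
    if oi_change_pct > 0 and price_change_pct > 0:
        return "strong trend"    # new longs buying into strength
    if oi_change_pct > 0 and price_change_pct < 0:
        return "short build-up"  # new shorts pressing the move
    return "neutral"
```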
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnly and openWorld; description adds that it returns OI in USD and contracts, plus trend interpretation. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no filler, purpose first, interpretation second. Optimal length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple read tool with one optional parameter; return format is alluded to but not detailed, though interpretive hints add value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear parameter description; the tool description adds nothing beyond the schema but is sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it returns total open interest for Hyperliquid perpetuals in USD and contracts, distinguishing it from sibling tools like get_funding_rates or get_volume_spikes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides interpretive guidance for OI trends but no explicit when-to-use advice or comparison with sibling tools such as get_funding_rates.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_orderbook (Get Orderbook): A
Read-only

Full orderbook depth (bids + asks) for any Polymarket market token. Shows liquidity at each price level.

Parameters (JSON Schema)
Name | Required | Description | Default
token_id | Yes | Polymarket token ID for the YES or NO side of a market | -
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description adds 'Shows liquidity at each price level' which is consistent with readOnlyHint and openWorldHint. But it does not provide any behavioral details beyond what annotations already indicate, such as the static nature of data or potential rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, clear sentence that conveys the essential information without unnecessary words. Well-structured and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description adequately explains what the tool does and what to expect. It covers the orderbook depth and liquidity display. Could mention that it returns a list or snapshot, but sufficient for a simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Parameter token_id is fully described in the input schema. Description adds context that it is for any Polymarket market token and mentions bids+asks, which adds minor value. Schema coverage is 100%, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool retrieves full orderbook depth (bids+asks) for a Polymarket market token, directly referencing the specific resource and action. It is distinct from all sibling tools, which are analytical or search-related.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives or when not to use it. However, given the sibling list, this is the only orderbook tool, so the usage context is implicitly clear but could be improved.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pm_hl_divergences (Get PM/HL Divergences): A
Read-only

Markets where Polymarket implied probability diverges from Hyperliquid perpetual funding direction — e.g. PM prices bullish outcome but HL funding shows crowded longs (bearish pressure). The hardest signal to compute manually.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Number of divergences to return | 15
min_pct | No | Minimum divergence percentage between PM implied probability and HL pricing to flag | 10%
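The server does not document exactly how min_pct is applied. One plausible reading, assuming both sides are expressed as 0-1 probabilities and the threshold is in percentage points, is:

```python
def is_flagged_divergence(pm_prob: float, hl_prob: float, min_pct: float = 10.0) -> bool:
    # Absolute gap between the Polymarket implied probability and the
    # HL-derived probability, in percentage points (default threshold 10%).
    gap_pct = abs(pm_prob - hl_prob) * 100
    return gap_pct >= min_pct
```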
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and openWorldHint, so the description doesn't need to restate safety. It adds context about the signal's difficulty and provides an example, but doesn't detail pagination, rate limits, or result ordering.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core purpose, zero wasted words. Very concise and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with 2 parameters, good annotations, and no output schema, the description explains the concept and provides an example. It could mention the output format or ordering, but is largely sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both parameters (limit, min_pct) well-described in the schema. Description adds no extra meaning beyond what the schema provides, so baseline 3 applies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool finds markets where PM implied probability diverges from HL funding direction, with a concrete example. It distinguishes from sibling tools like get_funding_outliers and get_hl_funding_pm_correlation by focusing on the divergence between specific metrics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when interested in PM/HL divergences but provides no explicit guidance on when to use this tool versus alternatives like get_funding_outliers or get_hl_funding_pm_correlation. No when-not-to-use or alternative references.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_signals (Get Signals): B
Read-only

Detect divergence signals between Hyperliquid perpetual funding/OI sentiment and HIP-4 on-chain prediction market odds. Returns BULLISH/BEARISH/DIVERGENCE signal with reasoning — e.g. perps long-biased while prediction market prices a decline.

Parameters (JSON Schema)
Name | Required | Description | Default
coin | Yes | Ticker of the asset to analyze, e.g. "BTC", "ETH", "SOL" | -
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and openWorldHint, so the description's burden is lower. The description adds that the tool returns signals with reasoning, which is helpful context, but does not disclose additional behavioral traits like data freshness or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long, directly states the core function, and includes an example. Every sentence adds value, and the structure is efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, full schema coverage for the one parameter, and lack of output schema, the description adequately explains what the tool does and what it returns. It is complete enough for an agent to invoke correctly, though missing usage context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter 'coin', which has a description. The tool description does not add any further meaning or constraints beyond what the schema already provides, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool detects divergence signals between Hyperliquid perpetual funding/OI sentiment and HIP-4 on-chain prediction market odds. It specifies the output includes BULLISH/BEARISH/DIVERGENCE signals with reasoning, providing a clear purpose that distinguishes it from generic tools, though not explicitly differentiating from similar siblings like get_pm_hl_divergences.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives such as get_hip4_vs_pm_arb or get_pm_hl_divergences. There is no mention of prerequisites, context, or exclusions, leaving the agent to infer usage independently.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_top_funding_rates (Get Top Funding Rates): A
Read-only

Top Hyperliquid perps ranked by absolute funding rate, with OI and annualized yield. Useful for finding the most overcrowded longs/shorts and carry opportunities.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Number of top results to return | 10
min_abs_rate | No | Minimum absolute funding rate to include, e.g. 0.0001. Omit to include all. | -
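The annualized-yield column can be reproduced from a raw funding rate. Assuming Hyperliquid's hourly funding cadence and simple (non-compounding) scaling, the conversion is:

```python
def annualized_funding_yield(hourly_rate: float) -> float:
    # Simple annualization: hourly rate x 24 hours x 365 days.
    # A min_abs_rate of 0.0001 per hour corresponds to roughly 87.6% APR.
    return hourly_rate * 24 * 365
```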
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and open-world behavior; the description adds context about output fields (OI, annualized yield) and ranking logic, which goes beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, followed by use case. No superfluous words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Provides enough context for a simple read-only tool with two optional params and no output schema; mention of OI and yield hints at return shape.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters have descriptions in the schema (100% coverage). The tool description does not add additional parameter meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool ranks top perps by absolute funding rate, includes OI and annualized yield, distinguishing it from siblings like get_funding_rates and get_funding_outliers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states it is useful for finding overcrowded longs/shorts and carry opportunities, but does not mention when not to use or name alternative siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_volume_spikes (Get Volume Spikes): A
Read-only

Polymarket markets with abnormal 24h volume vs their 7-day daily average. Volume spikes typically precede news events or informed positioning.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Number of results to return | 15
min_ratio | No | Minimum ratio of 24h volume vs 7-day daily average to qualify as a spike | 3x
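The min_ratio threshold reads naturally as a predicate. This sketch assumes the spike test divides 24h volume by the 7-day daily average, matching the parameter description:

```python
def is_volume_spike(volume_24h: float, avg_daily_7d: float, min_ratio: float = 3.0) -> bool:
    # A market qualifies when its 24h volume is at least min_ratio times
    # its 7-day daily average; markets with no volume history never qualify.
    return avg_daily_7d > 0 and volume_24h / avg_daily_7d >= min_ratio
```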
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and openWorldHint. Description adds value by explaining the behavioral significance of volume spikes (preceding news events or informed positioning), which goes beyond the annotation cues. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core purpose. No redundant words. Efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with self-explanatory parameters and no output schema, the description covers the main purpose and typical use. However, it omits details on output format or sorting, which would be helpful given the lack of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description does not elaborate on parameters (limit, min_ratio) beyond what the schema provides, so no additional value added.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly identifies the tool as retrieving Polymarket markets with abnormal 24h volume relative to 7-day average, with a specific verb and resource. Distinguishes from siblings like get_movers or get_signals through the unique metric and use case hint.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description implies detection of news events or informed positioning but doesn't contrast with other market or signal tools. No when-not-to-use or alternative names provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_whale_convergence (Get Whale Convergence): A
Read-only

Detect simultaneous whale activity on both Hyperliquid perps and Polymarket for an asset. Flags convergence events where large perp trades and large prediction market positions align — a leading indicator of informed positioning.

Parameters (JSON Schema)
Name | Required | Description | Default
coin | Yes | Ticker of the asset to analyze, e.g. "BTC", "ETH" | -
window_minutes | No | Lookback window in minutes for whale trade detection (1–60) | 15
min_notional_usdc | No | Minimum trade size in USDC to qualify as whale activity | 100,000
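A client can enforce the documented window_minutes range (1-60) before calling the tool. The helper below is hypothetical; only the three argument names and their defaults come from the schema.

```python
def build_whale_convergence_args(coin: str,
                                 window_minutes: int = 15,
                                 min_notional_usdc: int = 100_000) -> dict:
    # Defaults mirror the documented ones (15 minutes, 100,000 USDC).
    if not 1 <= window_minutes <= 60:
        raise ValueError("window_minutes must be between 1 and 60")
    return {
        "coin": coin,
        "window_minutes": window_minutes,
        "min_notional_usdc": min_notional_usdc,
    }
```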
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds that it 'Flags convergence events' and calls it a 'leading indicator', but does not disclose additional behavioral traits such as data recency, rate limits, or potential side effects. It does not contradict the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two sentences that front-load the primary purpose and add a nuance about leading indicators. No redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain the return format. It states it 'Flags convergence events' but does not describe what constitutes an event (e.g., list of timestamps, sizes, platforms). The context signals show three parameters and no output schema, so more detail on output would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema coverage is 100% with clear descriptions for each parameter (coin, window_minutes, min_notional_usdc). The tool description sets context for the parameters (e.g., 'simultaneous', 'leading indicator') but does not add meaningful semantic information beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool detects simultaneous whale activity on Hyperliquid perps and Polymarket, specifically convergence events. It uses a specific verb 'Detect' and resource 'whale activity on both platforms', and the concept of convergence distinguishes it from sibling tools like get_whale_trades or get_whale_positions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for detecting leading indicators of informed positioning but does not explicitly state when to use this tool versus alternatives like get_whale_trades or get_whale_positions. No when-not or exclusion conditions are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_whale_positions (Get Whale Positions): Grade A
Read-only
Inspect

Largest current position holders in a Polymarket prediction market. Shows wallet address, position size in USDC, and side (YES/NO).

Parameters (JSON Schema)

Name | Required | Description | Default
condition_id | Yes | Polymarket condition ID for the market to inspect | (none)
min_size_usdc | No | Minimum position size in USDC to include in results | 1,000
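As an illustration of how these parameters would be supplied, the sketch below builds an MCP-style tools/call JSON-RPC payload. The condition ID value is a hypothetical placeholder, and the envelope shape is an assumption based on the MCP specification, not taken from this server's documentation.

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Assemble a JSON-RPC 2.0 request for an MCP tools/call invocation."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Hypothetical condition ID; real values come from a market lookup.
payload = build_tool_call("get_whale_positions", {
    "condition_id": "0xabc123",
    "min_size_usdc": 1000,  # matches the documented default
})
print(payload)
```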
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and openWorldHint=true. The description adds that results are scoped to a specific market and lists the returned fields (wallet, size, side), which is consistent with the annotations and adds clarity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with all essential information, no filler. Highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description explains return fields. Lacks mention of pagination or edge cases, but adequate for a simple list tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for both parameters. Description does not add new information beyond what the schema provides, so baseline score is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool retrieves largest position holders in a Polymarket market, specifying output fields (wallet, size, side). This distinguishes it from siblings like get_whale_trades.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use versus alternatives. While the purpose is clear, it does not mention conditions or exclusions for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_whale_trades (Get Whale Trades): Grade A
Read-only
Inspect

Recent large trades on Hyperliquid perps above a notional threshold. Includes side (long/short), size, price, and timestamp.

Parameters (JSON Schema)

Name | Required | Description | Default
coin | Yes | Asset ticker to fetch whale trades for, e.g. "BTC", "ETH" | (none)
min_notional_usdc | No | Minimum trade size in USDC to qualify as a whale trade | 50,000
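To illustrate the notional threshold described above, the sketch below filters trades client-side. The trade records and their field names (side, size, price, ts) are assumptions based on the tool description, not the server's actual response schema.

```python
# Hypothetical trade records; field names are assumptions, not the
# server's actual schema.
trades = [
    {"side": "long",  "size": 2.0, "price": 60000.0, "ts": 1700000000},
    {"side": "short", "size": 0.5, "price": 60100.0, "ts": 1700000060},
    {"side": "long",  "size": 1.0, "price": 59900.0, "ts": 1700000120},
]

MIN_NOTIONAL_USDC = 50_000  # documented server default

# A trade qualifies as a whale trade when size * price meets the threshold.
whale_trades = [t for t in trades if t["size"] * t["price"] >= MIN_NOTIONAL_USDC]
print(len(whale_trades))  # the 0.5-size trade falls below the threshold
```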
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds context about the return fields and threshold behavior beyond the readOnlyHint annotation, but does not disclose additional traits such as how recent the data is, number of trades returned, or any pagination limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and key content without any unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only fetch tool with no output schema, the description adequately explains what the tool returns, making it complete enough for an agent to understand the output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description's mention of 'notional threshold' adds minimal value over the schema's description of min_notional_usdc; the baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'Recent large trades on Hyperliquid perps above a notional threshold' and lists the included fields (side, size, price, timestamp), providing a specific verb and resource that distinguishes it from siblings like get_whale_positions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for fetching recent large trades but does not explicitly discuss when to use it over alternatives or mention any prerequisites, providing only implied usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_markets (Search Markets): Grade A
Read-only
Inspect

Full-text search across all Polymarket and HIP-4 prediction markets. Returns ranked results with current odds.

Parameters (JSON Schema)

Name | Required | Description | Default
limit | No | Maximum number of results to return (1–50) | 10
query | Yes | Keywords to search in market names and descriptions, e.g. "bitcoin ETF", "US election", "Fed pivot" | (none)
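The limit constraint (1–50, default 10) can also be enforced client-side before sending a request. This helper is a sketch under those documented bounds, not part of the server's API:

```python
def normalize_limit(limit=None, lo=1, hi=50, default=10):
    """Clamp a search limit into the documented 1-50 range, defaulting to 10."""
    if limit is None:
        return default
    return max(lo, min(hi, int(limit)))

print(normalize_limit())     # falls back to the default
print(normalize_limit(200))  # clamped to the upper bound
```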
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and openWorldHint; description adds that results are ranked and include current odds. No contradictions, and the description enriches understanding beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no wasted words. All information is front-loaded and directly addresses the tool's purpose and output.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple parameters, no output schema, and annotations covering safety and data freshness, the description is mostly complete. It could mention pagination or sorting details, but the provided 'ranked results' offers sufficient context for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for both query and limit. The tool description adds no extra meaning beyond what the schema already provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Full-text search across all Polymarket and HIP-4 prediction markets' with a specific verb (search) and resource (markets), and distinguishes from siblings like get_markets which likely list all markets without search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies use for keyword-based search with 'Returns ranked results with current odds' but doesn't explicitly exclude alternatives (e.g., use get_markets for unfiltered listing). Provides clear context but no direct when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
